In a normal CAPTCHA scenario, a computer creates a challenge and a correct response to it. (The challenge is an image of distorted text, for instance, and the response is the characters depicted.) These are constructed so that, ideally, a human can easily determine the response from the challenge but a computer cannot. Ordinarily, only the originating computer, which already knows the correct response, can determine whether a given response is correct.
My question: is it possible to create a CAPTCHA that computers can verify, but not solve? In other words, is it possible to create a scenario that looks like this:
Computer A generates a challenge, sends it to Human B and Computer C
Computer C cannot feasibly determine the correct response, other than by brute-force search (which can hopefully be made impractical by choosing a sufficiently large response space).
Human B can feasibly determine the correct response. ("Feasibly" is hard to pin down. The response needs to be much bigger than a normal CAPTCHA's in order to deter simple brute-force searching by computers, so "as easy as a normal CAPTCHA" is going to be impossible. But making a human type in a large response isn't out of the realm of possibility; it would just be really annoying. A few hundred printable characters can encode 2048 bits, for instance. So, as a rough definition: the task is allowed to be annoying as hell, but it has to at least be feasible for a person whose physical and mental abilities are roughly average for a literate computer user.)
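To sanity-check that figure: printable ASCII has 95 symbols, so each character carries about 6.57 bits, and a 2048-bit response fits in roughly 312 characters.

```python
import math

# Printable ASCII spans codes 0x20..0x7E: 95 symbols.
ALPHABET_SIZE = 95
bits_per_char = math.log2(ALPHABET_SIZE)  # about 6.57 bits per character

# Characters needed to encode 2048 bits of entropy:
chars_needed = math.ceil(2048 / bits_per_char)
print(chars_needed)  # prints 312
```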
If Human B sends Computer C the correct response, Computer C can feasibly verify that the response is correct, without contacting Computer A or having any prior knowledge other than the challenge itself.
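The verification property alone can be sketched with a standard salted hash commitment: Computer A publishes a hash of the correct response alongside the challenge, and Computer C checks a claimed response against that hash offline. (This is only a minimal illustration, not a solution to the question. It does nothing for the hard part: A still knows the response, and the scheme is only safe if the response space is too large to brute-force.)

```python
import hashlib
import os

def make_challenge(correct_response: bytes):
    # Computer A: bind the (secret) correct response to the challenge via
    # a salted SHA-256 commitment. The random salt blocks precomputed
    # lookup tables over likely responses.
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + correct_response).hexdigest()
    return salt, commitment  # published alongside the CAPTCHA itself

def verify(salt: bytes, commitment: str, claimed_response: bytes) -> bool:
    # Computer C: check a claimed response with no contact with Computer A
    # and no knowledge beyond the published (salt, commitment) pair.
    return hashlib.sha256(salt + claimed_response).hexdigest() == commitment

salt, commitment = make_challenge(b"the quick brown fox")
assert verify(salt, commitment, b"the quick brown fox")
assert not verify(salt, commitment, b"wrong answer")
```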
Would it be possible, under today's technology and knowledge, to create such a protocol?
Furthermore, would it be possible to design the protocol such that even Computer A can't determine the correct response to the challenge it just generated without human input, and such that Computer C can verify that Computer A could not have known the response unless a human provided the solution? If all this can be achieved, the result is analogous to the "proof of work" seen in Bitcoin and other protocols. Instead of a proof of computational work, we'd have a proof of human work: the combination of the challenge and the correct response would constitute proof that a human spent some marginal amount of effort, and that proof would be verifiable by any computer. Is such a thing even remotely plausible?
Of course, at some point in the future computers will probably overtake humans altogether. Once that happens, there will no longer be any problems that are feasible for a human but not a computer, and therefore no CAPTCHAs or CAPTCHA-like protocols will be possible.
But what about under currently known human and computer abilities?