
How "meaningful" is the Turing test?

Even though it's sometimes presented as a "scientific" way of judging how good an AI is, I view it as a naive, backhanded way of defining or measuring computer intelligence.

There are many more aspects to computer intelligence than merely "whether it can trick some people into believing they're talking with a human". Sure, if a computer could answer any question the way a human expert would, that would be pretty "hard", but computer intelligence isn't there yet, and it's unclear whether it ever will be, given the long-known differences between formal and informal semantics. It could also just be a limit of human system-design capability. That is, no one could ever in practice develop a computer program that good. It would just be too large.

Nowadays the intelligence of computer programs could be measured by, e.g., the quantity of statements they can evaluate. That is, just the number of possibilities they're able to process. After all, even the goal of the Turing test would be met by a computer program with a large enough database and a large enough semantic analyzer and reasoner.

So, are there better measures?

mavavilj

1 Answer


Nowadays intelligence of computer programs could be measured by e.g. quantities of statements that they can evaluate.

So, the trick is that the Turing Test was meant as an evaluation of strong AI: could there be a computer program as intelligent as a human? Evaluating statements isn't enough, since humans can do much more than that.

There are some things to keep in mind. First, while some people use the Turing Test as a metric of the quality of chatbots today, that wasn't necessarily its original intent. It was more of a thought experiment, trying to answer the question of "what does it mean for an AI to be intelligent?"

Secondly, the test wasn't meant to strictly evaluate the "question and answer" ability of an AI. It was more saying, "if there's an AI so good we can't distinguish it from a human, is there any meaningful difference?" It was a way to define sentience without bringing in complex philosophical (or possibly even religious) arguments about what is and isn't alive. The Turing Test says "if it looks sentient, we'll say it is, because there's no meaningful way to tell otherwise".

That is, no one could ever in practice develop a computer program that good. It would just be too large.

There are many possible ways around this. One is that we might not write every aspect of a program by hand, but evolve one. We could write a program that writes a program that writes a program, and so on, each one being slightly more intelligent, until the process converges on something powerful.
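The "evolve, don't hand-write" idea can be illustrated with a toy sketch. Everything here is hypothetical: the target string stands in for "intelligent behaviour", the character-match fitness function and the mutation rate are arbitrary choices, and a real system would evolve programs rather than strings. But the loop shape, generate variants, keep the fittest, repeat, is the same.

```python
import random

TARGET = "hello world"  # hypothetical stand-in for "intelligent behaviour"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "


def fitness(candidate):
    # Crude fitness: number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))


def mutate(candidate, rate=0.1):
    # Each character has a small chance of being replaced at random.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )


def evolve(pop_size=100, generations=1000):
    # Start from a completely random population.
    population = [
        "".join(random.choice(ALPHABET) for _ in TARGET)
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        best = max(population, key=fitness)
        if best == TARGET:
            return best
        # Next generation: keep the best unchanged (elitism),
        # fill the rest with mutated copies of it.
        population = [best] + [mutate(best) for _ in range(pop_size - 1)]
    return max(population, key=fitness)


print(evolve())
```

No single generation is "written" to be intelligent; improvement comes only from variation plus selection, which is the point the paragraph above is making.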

Or, an AI could be developed through machine learning. With enough time and data, something close to human intelligence could grow out of statistical observations, perhaps with hand-coded AI at its core as a head start.

If a human brain is enough for human intelligence, a comparably complex program should be as well. But even if no such program is ever developed, the Turing Test still holds meaning as a thought experiment.

Joey Eremondi