65

We know the halting problem is undecidable for Turing Machines. Is there any research into how well the human mind can deal with this problem, possibly aided by Turing Machines or general-purpose computers?

Note: Obviously, in the strictest sense, you can always say no, because there are Turing Machines so large they couldn't even be read in the life span of a single human. But this is a nonsensical restriction that doesn't contribute to the actual question. So to make things even, we'd have to assume humans with an arbitrary life span.

So we could ask: Given a Turing Machine T represented in any suitable fashion, an arbitrarily long-lived human H and an arbitrary amount of buffer (i.e. paper + pens), can H decide whether T halts on the empty word?


Corollary: If the answer is yes, wouldn't this also settle whether any computer has a chance of passing the Turing test?

bitmask
  • 1,765
  • 2
  • 16
  • 20

11 Answers

32

It is very hard to define the human mind with the same mathematical rigor with which one can define a Turing machine. We still do not have a working model of a mouse brain, even though we have hardware capable of simulating one. A mouse has around 4 million neurons in the cerebral cortex; a human being has 80-120 billion neurons (19-23 billion of them neocortical). You can imagine, then, how much more research will need to be conducted to arrive at a working model of the human mind.

You could argue that we only need a top-down approach and do not need to understand the individual workings of every neuron. In that case you might study non-monotonic logic, abductive reasoning, decision theory, etc. But as new theories arrive, more exceptions and paradoxes appear, and it seems we are nowhere close to a working model of the human mind.

After taking propositional and then predicate calculus I asked my logic professor:
"Is there any logic that can define the whole set of human language?"
He said:
"How would you define the following?
To see a World in a grain of sand
And a Heaven in a wild flower,
Hold Infinity in the palm of your hand
And Eternity in an hour.
If you can do it, you will become famous."

There have been debates over whether the human mind might be equivalent to a Turing machine. The more interesting result, however, would be for the human mind not to be Turing-equivalent: that would give rise to a notion of algorithm that is not computable by a Turing machine. The Church-Turing thesis would then fail, and there could possibly be a general algorithm that solves the halting problem.

Until we understand more, you might find some insights in a branch of philosophy. However, no answer to your question is generally accepted.

http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems#Minds_and_machines
http://en.wikipedia.org/wiki/Mechanism_(philosophy)#G.C3.B6delian_arguments

Dávid Natingga
  • 603
  • 4
  • 8
17

I think there is no way to give a definitive answer to this question, as nobody really knows the capabilities of the human mind (and I doubt anyone ever will).

But there is a view that gives one possible solution or explanation to this question:

When we search for an oracle to solve the halting problem (or to decide provability of first-order formulas, etc.), we naturally want the oracle to be correct; it must not make any mistakes. But the human mind isn't consistent: it makes mistakes. Nobody can honestly say that all the statements they believe to be true really are true.

This inconsistency can be viewed as the source of the power the human mind has. Because it is inconsistent, it is not subject to the limitations that follow from the halting problem, Gödel's incompleteness theorems, and so on. We make mistakes, we mistakenly believe false statements, and as our knowledge grows we correct them (and, of course, find new false statements to believe in). On the other hand, we want every formalization of the notion of algorithm, and every logical calculus, to be consistent, so that we can prove once and for all that they are free of such mistakes. And this is what makes them limited.

Petr
  • 2,265
  • 1
  • 17
  • 21
10

Just to make things clear: the Church-Turing hypothesis has nothing to do with some dogma of a hypothetical Church of Turing. There is nothing religious about it. On the contrary, it is just a hypothesis summarizing the best of our knowledge; there is no metaphysical implication. The question of whether humans could do better, i.e. achieve more than machines, is a metaphysical question, as we have strictly no handle on it: no hint whatsoever of what could differentiate a human from a machine. So this question should be migrated to metaphysics.stackexchange.com.

But let us assume that the human brain can solve the halting problem for Turing Machines. Then the computational model of Turing Machines becomes much less important, and the Church-Turing hypothesis much less relevant, as we have a more powerful model, the Human Model (to avoid the word machine). Of course, this (arbitrarily long-lived) Human Model comes with its own hypothesis on computability.

But then, while the halting problem for Turing Machines is no longer critical, we now have to deal with the Human Model halting problem. And diagonalization will show that the Human Model halting problem is not decidable by a human. Then what?

Now, you might object that diagonalization would not be applicable. That would mean, I guess, that associating some form of Gödel numbering with computing devices, proofs, or whatever we describe with notation would no longer be possible, even though it is currently the basis of all science. In other words, we would have to deal with entities and concepts that have no written representation, that cannot have a written representation, or, to put it more generally, concepts without any syntactic representation, whether written, oral or otherwise.

Of course, this would be in opposition to the teaching of John, whose very first sentence is: "In the beginning was the Word, and the Word was with God, and the Word was God." Negating the fundamental importance of syntax, of the word, is thus a very anti-Christian statement. I am of course not taking a stand on this, but since my first take on this question is that it is a metaphysical one, and since the question is not on hold, it seems natural to consider all consequences, including the metaphysical ones.

babou
  • 19,645
  • 43
  • 77
9

Consider this from a different perspective.

  • First-order logic is undecidable: there is no decision procedure that determines whether an arbitrary formula is logically valid. (But the set of valid first-order formulas is semi-decidable: if a formula is valid, an algorithm can eventually find a proof of it.)
  • Proof assistants help prove theorems in first-order (or even higher-order) logic. The proof assistant ensures that the proof is carried out correctly and can even resolve some cases on its own. However, human interaction is required to guide the proof assistant to the correct answer.

Proof assistants could be used to prove properties of individual Turing machines.
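
The semi-decidability mentioned above follows a general pattern: enumerate candidate witnesses (proof attempts, run lengths, ...) and apply a decidable test to each one. A minimal Python sketch of the pattern, where the function name `semi_decide` is invented here for illustration:

```python
from itertools import count

def semi_decide(check):
    """Semi-decision by enumeration: if some witness n satisfies the
    decidable test check(n), this loop eventually finds and returns it;
    if no witness exists, the loop runs forever -- hence 'semi'."""
    for n in count():
        if check(n):
            return n

# "Is 144 a perfect square?" -- each individual check is decidable,
# and a witness exists, so the search terminates.
print(semi_decide(lambda n: n * n == 144))  # 12
```

If no proof exists, the enumeration never stops, which is exactly why this gives semi-decidability rather than decidability.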

Petr
  • 2,265
  • 1
  • 17
  • 21
Dave Clarke
  • 20,345
  • 4
  • 70
  • 114
3

Carl Mummert's comment nailed it.

  1. My understanding (correct me if I am wrong) of the Church-Turing thesis is that anything that can be computed can be computed by a Turing Machine.

  2. Also, if a Turing Machine could compute whether another Turing Machine halts on a given input (the halting problem), then you could also compute whether it does not halt (just swap yes for no, and no for yes!). This matters because you could then feed this Turing Machine to itself: would it not halt on itself as input? If yes (not halting), then no (it is halting??). If no, then yes. If yes, then no. If no, then ye... hmmm.

So, 2. shows it is impossible for a Turing Machine to solve the halting problem. But I don't think there is any clear evidence to contradict 1. at this time: every known model of computation can decide no more than a Turing Machine can.
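
The swap-yes-for-no construction in 2. can be sketched in Python. Here `halts` is only a stand-in stub (no correct implementation can exist, which is the point), and `troublemaker` returns a string instead of actually looping, so the disagreement can be observed:

```python
def halts(f):
    """Stand-in for a hypothetical halting decider. Whichever fixed
    Boolean it returns gets contradicted below; try flipping it."""
    return True

def troublemaker():
    """Does the opposite of whatever halts() predicts about itself.
    Returns 'loops' to mark where a real program would loop forever."""
    if halts(troublemaker):
        return "loops"   # predicted to halt -> would loop forever
    return "halts"       # predicted to loop -> halts immediately

prediction = halts(troublemaker)   # True means "it halts"
behavior = troublemaker()          # what it actually does
print(prediction, behavior)        # the two always disagree
```

Whichever value `halts` returns, `troublemaker` does the opposite, so no correct `halts` can exist; this is the diagonalization in miniature.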

The burden of proof seems to be on the person coming up with a new model of computation, which has more power (that is, can decide more problems) than the classical Turing Machine.

By the way, some great lectures on this can be found here.

Bingo
  • 291
  • 2
  • 6
3

There isn't any evidence that the human brain is in fact anything more than a Turing machine. In fact, it seems like the entire universe can be simulated on a (sufficiently large) Turing machine.

Humans are "smart" because of smart algorithms that are cleverly encoded in neurons, so computer scientists can't steal or efficiently implement them. However clever these algorithms are, they most likely cannot reliably solve the halting problem.

Paresh
  • 3,368
  • 1
  • 21
  • 33
ithisa
  • 367
  • 2
  • 9
2

As with Dave Clarke's answer (and to expand on it somewhat), there is a strong sense in which this question (a combination of human and computer finding special-case solutions to the halting problem) is related to the field of ATP, automated theorem proving, and the closely related field of computer-assisted proofs. It has also long been known that there is a strong correspondence between programs and proofs, the Curry-Howard correspondence. Also related is proving program termination (e.g. via loop invariants or loop variants). In fact, there is a deep sense in which all of mathematics is about this problem, because virtually all mathematical statements can be converted into questions about whether specific programs on TMs halt or not. See e.g. [2] for further info and many further references on ATP.

[1] is a semi-famous book on the subject that examines the question in detail, relating it to the possibility of artificial intelligence. Briefly, Penrose's idea is that true AI must be impossible because humans can come up with proofs of undecidability, such as Turing's halting problem or Gödel's incompleteness proof, whereas computers cannot, due to those same phenomena.

[1] The Emperor's New Mind by Penrose

[2] adventures & commotions in ATP, vzn

vzn
  • 11,162
  • 1
  • 28
  • 52
2

In short: NO

There are Turing machines for which we do not (yet) know whether they halt (the Collatz conjecture, for example).
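
The Collatz example can be made concrete: any individual starting value can be checked by just running the iteration, yet no argument is known that covers all values. A small Python sketch (the name `collatz_steps` and the step cap are invented here):

```python
def collatz_steps(x, limit=10**6):
    """Count the steps the Collatz iteration takes from x to reach 1,
    or return None if it hasn't within `limit` steps. Individual
    instances are checkable; a proof for all x is an open problem."""
    steps = 0
    while x != 1:
        if steps >= limit:
            return None
        x = 3 * x + 1 if x % 2 else x // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111
```

The starting value 27 is a classic example of a small input with a surprisingly long trajectory.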

Until we find a way to enumerate all Turing machines for which we lack a halting proof, and a way to prove or disprove halting for each of them, we are no better than a Turing machine. (If I am correct, someone has already proven that we cannot prove everything, which points toward the fact that we are as limited as Turing machines.) Oh wait, we cannot even enumerate all those machines, because in fact we have limited memory and a limited lifespan.

However, your question is self-answering:

You are asking whether humans are able to "decide", but a decision is itself defined as an algorithm. So either we run an algorithm in our minds and come to a correct conclusion (or to no conclusion at all: open problems), or we just make a guess.

Computation theory is about:

  • Assume there exists a black-box algorithm (an oracle) that can answer yes or no to certain questions.
  • You can then use it to answer otherwise unanswerable questions by building another algorithm that uses it.
  • By doing that, you end up with a contradiction.

That means that as long as you have any system that wants a yes-or-no answer, the oracle is not compatible with that system. So oracles may actually exist, but we have no way to communicate their results, because if we could communicate them, we would end up with a contradiction somewhere.

Assume quantum mechanics is made of many small oracles; then you cannot communicate their results, because when you read the status of a particle, you also change the status of that particle.

I had the answer, but then I read it...

In fact, we can prove anything if we start from false hypotheses. So we can prove that an algorithm halts, but we can also prove that it does not halt. That can be interesting, but it is useless, since a contradictory result is not what you want (you want a yes-or-no answer).

CoffeDeveloper
  • 314
  • 1
  • 9
0

Short Answer

Basically no in the general case (at least, not without guessing), though certain subproblems are indeed solvable.

Long Answer

In order for this to happen, you would have to assume that a human can solve, in some algorithmic way, mathematical problems that a TM can't. This would imply that the Church-Turing thesis is false, even though, to my knowledge, it has held for every computational method discovered so far (except perhaps probabilistic/quantum computation, which no longer meets the strict criteria for 'algorithm', since it involves inexact results).

However, there are indeed solvable subsets of the halting problem, such as determining whether or not a TM will halt within a certain number of computational steps (think 'ticks') upon a given input, which would be reasonably doable for both a TM and a human.
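
This bounded variant is decidable because a simulation with a step budget always terminates. A toy sketch, using a minimal single-tape machine encoding invented here (a dict from (state, symbol) to (write, move, next_state); missing entries mean halt):

```python
def tm_halts_within(program, steps):
    """Run a toy single-tape Turing machine for at most `steps` steps
    and report whether it halted within the bound. The loop always
    terminates, so this restricted question is decidable."""
    tape, head, state = {}, 0, "q0"
    for _ in range(steps):
        key = (state, tape.get(head, 0))
        if key not in program:
            return True                      # no rule applies: machine halts
        write, move, state = program[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return False                             # budget exhausted, still running

halter = {("q0", 0): (1, "R", "q1"), ("q1", 0): (1, "R", "q2")}  # halts in 3 steps
looper = {("q0", 0): (0, "R", "q0")}                             # runs forever
print(tm_halts_within(halter, 10), tm_halts_within(looper, 10))  # True False
```

Note that a `False` answer only means "did not halt within the bound", which is exactly why this does not extend to the unbounded halting problem.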

Though, considering how poor the general populace is at performing any sort of mathematics, you might get your answer of 'no' in a much different way.

Epilogue

You should look up a proof of the halting problem's unsolvability. There is at least one exceedingly simple proof that gives an example of a machine whose halting status cannot be correctly predicted given the machine itself as input, which serves as a counterexample to the very existence of a machine that could solve the halting problem in the general case (for TMs).

Mr. Minty Fresh
  • 592
  • 1
  • 5
  • 11
-1

Modern supercomputer systems can certainly simulate the behavior of at least one atom. If individual atoms can be simulated, then one could simulate the human mind as well by building a big enough computer system to simulate its individual atoms. However, I think that this alone wouldn't be enough. You would also need an entropy source in order to obtain true random numbers for the simulation of the human mind. The best entropy source would probably be radioactive decay or something like that. What does this mean?

I think that the human mind is more powerful than a Turing Machine, because a TM is deterministic: you cannot simulate true randomness on a Turing Machine. (At least, this is the impression I got from the following discussion: https://cstheory.stackexchange.com/questions/1263/truly-random-number-generator-turing-computable ) However, I think that a Turing Machine attached to a true entropy source would be capable of simulating a human mind.
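
The determinism point can be illustrated directly: a TM-style pseudorandom generator replays the same stream from the same seed, while an OS entropy source has no seed to replay. A small Python sketch:

```python
import os
import random

# A deterministic (TM-simulable) PRNG: identical seeds give identical streams.
a = random.Random(42)
b = random.Random(42)
print([a.randint(0, 9) for _ in range(5)] == [b.randint(0, 9) for _ in range(5)])  # True

# os.urandom draws from the operating system's entropy pool; there is
# no seed to reset, so the stream cannot be replayed the same way.
sample = os.urandom(16)
print(len(sample))  # 16
```

Of course, whether the OS entropy pool is "truly" random in the physical sense is exactly the question being debated here.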

If one also takes into account the randomness of the environment that interacts with a human mind (e.g. the food we eat, how we sleep, walk, and basically live our lives), then I certainly think that a TM with entropy is needed to simulate the human mind. Don't forget that the human mind is also constantly exposed to background radiation, which may interact unpredictably with the molecules in our brain. But I think that even in a completely "isolated" environment (is that even possible? The following seems to indicate that it may not be: http://hps.org/publicinformation/ate/faqs/faqradbods.html ) - basically a "brain in a jar" scenario - you would probably still get truly random processes occurring somewhere in the human brain. I'm sure a biologist could settle this part of the question. Also, don't forget that a human is, in a sense, also part of his or her environment:

http://en.wikipedia.org/wiki/Human_Microbiome_Project

Perhaps some of these bacteria also influence the inner workings of the human brain in some way, and the composition of these bacteria can change over a human's lifetime (within certain boundaries, I suppose). The question is whether the behavior of these bacteria is random within certain boundaries. If at least one process within at least one of these organisms is truly random and also somehow indirectly affects the human brain, then one would need a TM with an entropy source to simulate a human mind.

So to answer the original question:

Can a "human" (as defined in the question) solve the halting problem? Yes if it is the halting problem for all deterministic TMs, and no if it is the halting problem for all TMs attached to an entropy source.

-2

All human thought conflates individual problems with personal experience. We might satisfy ourselves that we have solved a problem well enough to halt, but we never know for sure in the algorithmic sense in which a computer would acquire a solution. Be still and watch your own mind: 99.9% of the messaging going on in our neural circuitry has nothing to do with a logical representation of the world. Instead, we are dealing with "gut" feelings, sensory data, and a flood of memories, associations, and attitudes that vary constantly. That's why we have the scientific method.

Steve
  • 11