32

Alan Turing proposed a model for a machine (the Turing Machine, TM) which computes (numbers, functions, etc.) and proved the Halting Theorem.

A TM is an abstract concept of a machine (or engine if you like). The Halting Theorem is an impossibility result. A Carnot Engine (CE) is an abstract concept of a heat engine and Carnot proved the Carnot Theorem, another impossibility result related to thermodynamic entropy.

Given that a TM is physically realizable (at least as much as a CE, or maybe not?), is there a mapping, representation, or "isomorphism" between TM and CE which would allow one to unify these results and, in addition, connect them to entropy?

There are of course formulations of TM and the Halting Theorem in terms of algorithmic information theory (e.g. Chaitin, Kolmogorov etc.) and entropy (in that context). The question asks for the more physical concept of entropy (if in the process of a potential answer algorithmic entropy arises it is fine, but it is not what the question asks exactly).

One can also check Is quantum uncertainty principle related to thermodynamics? (Physics.SE) which relates quantum uncertainty with the 2nd law of thermodynamics.

Related links (updated as new interesting studies are found):

  1. Topological Entropy and Algebraic Entropy for group endomorphisms, Dikran Dikranjan, Anna Giordano Bruno 2013
  2. The Physics of Maxwell's demon and information, Koji Maruyama, Franco Nori, Vlatko Vedral 2008
  3. Stochastic thermodynamics of computation, David H. Wolpert 2023
  4. Universal validity of the second law of information thermodynamics, Shintaro Minagawa, M. Hamed Mohammady, Kenta Sakai, Kohtaro Kato, Francesco Buscemi 2023
Nikos M.

7 Answers

13

I am not at all an expert in this area, but I believe you will be interested in reversible computing. This involves, among other things, the study of the relationship between processes that are physically reversible and processes that are logically reversible. I think it would be fair to say that the "founders" of the field were/are Rolf Landauer and Charles H. Bennett (both of IBM Research, I think.)

It touches on quantum computing and quantum information theory, but also examines questions like "what are the limits of computation in terms of time, space and energy?" It is known (if I remember correctly) that you can make the energy required to perform a reversible calculation arbitrarily small by making it take an arbitrarily long time. That is, the energy $\times$ time (=action) required to perform a reversible computation can be made a constant. This is not the case for non-reversible computations.
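For the irreversible case, Landauer's principle gives the concrete number: erasing one bit dissipates at least $k_B T \ln 2$ of energy. A minimal sketch of that bound (the only inputs are standard physical constants, nothing from this answer):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_bound(temperature_kelvin: float) -> float:
    """Minimum energy in joules dissipated by erasing one bit at temperature T."""
    return K_B * temperature_kelvin * math.log(2)

# At room temperature (300 K), erasing a bit costs at least ~2.87e-21 J.
print(landauer_bound(300.0))
```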

Many of the people studying in this area are also working on quantum computing and digital physics (the idea that the universe is a big quantum cellular automaton). The researchers' names that come to mind are Ed Fredkin, Tommaso Toffoli and Norm Margolus.

These questions are absolutely on topic for computer science. Not just for the theory (which includes cool math as well as cool physics) but for engineers who want to know the ultimate limits of computation. Is there a minimum volume or energy required to store a bit of information? The action required to perform a reversible computation may be constant, but are there limits on what that constant is? This is critical knowledge for engineers trying to push the boundaries of what is possible.

Wandering Logic
6

I'm not familiar with Carnot's Theorem, except what I've just read in Wikipedia, but even from that cursory introduction, there is a connection in the structure of the proofs, and that may be interesting to you, as it's a proof technique that is applicable in many domains.

They're both proofs by contradiction in which to show that no thing in a given class has some property, you suppose that some instance actually does have that property, and then show that a contradiction follows.

The Halting Problem is interesting in that the contradiction arises from some self-interaction concerning the particular instance (which is a machine M that can determine whether an arbitrary machine will halt with a given input). In particular, you construct a new machine that includes M as a component, and then feed the new machine to M.

Someone with more knowledge about Carnot's Theorem could elaborate on it (which I'm not qualified to do), but it appears that the contradiction arises from the type of heat engine that you could build if you had an instance with the property at hand.

So both cases involve the construction of:

  • Suppose some X has property P.
    • From X, build related Y.
    • The relationships between X and Y are contradictory.
  • Therefore, no X has property P.
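For the Halting Theorem case, that construction can be sketched directly in code, assuming a hypothetical decider `halts` (the point of the theorem being that no such total function can exist):

```python
def halts(program, argument) -> bool:
    """Hypothetical decider: True iff program(argument) would halt.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError("no such decider can exist")

def paradox(program):
    """The machine Y built from X = halts: do the opposite of the prediction."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return               # predicted to loop forever, so halt

# Feeding the new machine to itself yields the contradiction:
#   paradox(paradox) halts  <=>  halts(paradox, paradox) is False
#                           <=>  paradox(paradox) does not halt.
```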

There does appear to be a difference, though, in that the contradiction in the Halting Theorem case is a pure logical contradiction, and would be contradictory in any setting of classical logic. The Carnot Theorem, as I understand it, is only contradictory with respect to the second law of thermodynamics. From a logical perspective, that's an axiom, so if you took a different axiomatization in which the second law of thermodynamics didn't hold, Carnot's Theorem wouldn't be a theorem, because the contradiction wouldn't exist. (What a formalization of thermodynamics would look like without the second law is the sort of question that led geometers to non-Euclidean geometry.)

4

IANAPhysicist but I don't see any connection. Turing machines are objects of pure mathematics and the undecidability of the halting problem is independent of any physical realization of anything.

David Richerby
2

this diverse multiple-topic question unfortunately does not have a simple/easy answer and touches on active areas of TCS research. however, it is a rare question asking about a link between physics & TCS that has interested me over the years. there are a few different directions to go on this. the basic answer is that it's an "open question", but with some active/modern research touching on it and hinting at connections.

  • there are some surprising/deep undecidable problems from advanced physics, for example from dynamical systems. however, i have not seen these connected to entropy per se; but entropy is associated with all physical systems (eg one can see this in chemical theory), so there must at least be an indirect link.

  • entropy indeed shows up in CS, but more in the form of information theory and coding theory. the birth of coding theory involved Shannon's definition/analysis of the entropy associated with communication codes. try this great online ref, Entropy and Information Theory by Gray.

  • entropy is also sometimes associated with measuring randomness in PRNGs. there is a connection between complexity class separations (eg P=?NP) and PRNGs in the famous "Natural Proofs" paper by Razborov/Rudich. there is continuing research on this subject.

  • you mention thermodynamics and its connection to TCS. there is a deep connection between magnetization in spin glasses in physics and NP-complete problems studied at the SAT transition point. there (again) the physical system has an entropy associated with it, but it has probably been studied more in a physics context than a TCS context.
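as a concrete footnote to the coding-theory point above: the shannon entropy of a discrete distribution is a one-liner (a minimal sketch; the example distributions are just illustrations):

```python
import math

def shannon_entropy(probs):
    """entropy in bits of a discrete distribution (zero-probability terms skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# a fair coin carries exactly one bit of entropy; a biased coin carries less.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.47
```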

vzn
1

There is a simple thought problem that is sometimes used as an introduction to non-conventional computing paradigms:

You have two light bulbs and their respective on-off switches. Someone switches both lights on and then off, one after the other. How do you determine which one was switched off first and which one was switched off last? Determine the minimal number of times you need to switch the lights on to decide this problem.

Most computer scientists will try to find some boolean-logic-based solution. The answer (at least one of them) is: by touching the light bulbs and seeing which one is hotter.

Heat-based paradigms exist in computer science: simulated annealing is a well-known algorithm (D-Wave's quantum computer is the quantum counterpart of that algorithm).
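As a sketch of what a heat-based paradigm looks like in code, here is a minimal simulated-annealing loop minimizing a toy one-dimensional function; the landscape, cooling schedule, and parameters are all illustrative choices, not anything specific from this answer:

```python
import math
import random

def anneal(f, x0, t_start=5.0, t_end=1e-3, cooling=0.995, step=0.5, seed=0):
    """Minimize f by simulated annealing: always accept downhill moves,
    accept uphill moves with probability exp(-delta / T), and cool T
    geometrically until it drops below t_end."""
    rng = random.Random(seed)
    x, t = x0, t_start
    while t > t_end:
        candidate = x + rng.uniform(-step, step)
        delta = f(candidate) - f(x)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling
    return x

# Toy landscape with its global minimum at x = 2.
best = anneal(lambda x: (x - 2.0) ** 2, x0=-10.0)
print(round(best, 1))
```

At high temperature the walk roams freely; as the "heat" drains away it settles into a minimum, which is exactly the thermodynamic intuition behind the algorithm.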

Now is there a relation with the Halting problem?

The classic work of Chaitin and Calude on the Halting problem, via the concept of Omega numbers, can be linked to the probabilistic formulation of the Halting problem. It is the most recent treatment of the problem that I can think of... and it has no clear relation to (thermodynamic) entropy. Now, if information entropy (in the sense of Shannon) is acceptable to you, the Omega number encodes the Halting problem in the most succinct way, in the sense of a Shannon bound.

In short, an Omega number is the probability that a random program halts. Knowing the constant would allow the enumeration of all valid mathematical statements (truths, axioms, etc.), but it is uncomputable. Calude computed a version of Omega by replacing the uniform probability measure with a measure inversely proportional to a random program's length and by using prefix-free encodings. So we can speak of Chaitin's Omega and Calude's Omega.
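To make the definition concrete, here is a toy illustration (NOT a real computation of Omega, which is uncomputable): an invented prefix-free machine whose programs are the strings `1^n 0`, a fake halting rule, and a lower bound obtained by summing $2^{-|p|}$ over programs observed to halt within a budget. The machine, the rule, and the budget are all made-up assumptions:

```python
def toy_halts(program: str, budget: int) -> bool:
    """Invented 'machine': valid programs are 1^n 0 (a prefix-free set),
    and we pretend a program halts iff n is even and fits in the budget."""
    n = len(program) - 1
    return n <= budget and n % 2 == 0

def omega_lower_bound(max_len: int, budget: int) -> float:
    """Sum 2^-|p| over the toy programs observed to halt: a lower bound
    on this toy machine's halting probability."""
    total = 0.0
    for n in range(max_len):
        program = "1" * n + "0"
        if toy_halts(program, budget):
            total += 2.0 ** -len(program)
    return total

# The bound creeps up toward this machine's 'Omega' (here 2/3) as the
# enumeration limits grow; for a real universal machine there is no
# computable way to know how close the bound has gotten.
print(omega_lower_bound(max_len=20, budget=100))
```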

user13675
1

Very captivating question indeed, and we will see that your thinking IS correct.

First let's see what the second principle of thermodynamics says.

The entropy function is used in the 2nd law of thermodynamics. It stems from Carnot's theorem, which states that processes taking place in heat engines have an efficiency lower than, or at best equal to, that of the corresponding "reversible" machine (a concept which, by the way, seems to have been unstable over the 150 years of thermodynamics). Carnot did not coin the entropy function himself; that came with Clausius. Together, this is what they say:
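For reference, the quantitative content of Carnot's theorem: any engine running between a hot reservoir at temperature $T_h$ and a cold one at $T_c$ satisfies

```latex
\eta \;=\; \frac{W}{Q_h} \;\le\; 1 - \frac{T_c}{T_h},
```

with equality attained only by the reversible (Carnot) engine.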

As there is no perpetuum mobile, we can build a function S, called entropy, which constrains the macroscopic thermodynamic variables by a certain equation, namely S(V, T, P, etc.) = 0

Note that this equation is nothing but the equation of a hyper-surface in the space of thermodynamic measures.

Enter Carathéodory.

Carathéodory was a Greek mathematician working in Germany. Like all mathematicians, he wants to extract out of Carnot's and Clausius's reasoning some axioms which would allow him to clarify what the second law is really about. Put bluntly, he wants to purify thermodynamics to know exactly what entropy is.

After listing a certain number of axioms, he manages to formulate HIS second law, which says (more or less):

There ARE states you cannot reach by adiabatic processes alone. Or, more prosaically: if you want to return to where you started, sometimes work alone is not enough. You need a bit of heat.

Now that seems VERY different from the formulation of Clausius! But in fact it is not. All Carathéodory did was change the order of the words, a bit like mathematicians played with Euclid's 5th postulate for 2,000 years and produced many different wordings of that axiom. And if you take a step back, you should not be too surprised by Carathéodory's statement of the second law. In fact, Carathéodory's formulation leads to the exact same entropy function and hyper-surface equation S(V, T, P, etc.) = 0

Think hard about Carnot's theorem. As a mathematician, you should not be too satisfied with the way Carnot simply admits that perpetuum mobiles do not exist. In fact, as a mathematician you would rather see something like this:

There is an entropy function S which constrains macroscopic measures IF AND ONLY IF there is no perpetuum mobile.

NOW you have a theorem. And what does it say? That as long as there is no isolated mechanical system which produces an infinite amount of energy, and hence could lead you to any state you want, you will find an entropy function. An isolated mechanical system is an adiabatic process. Hence Carathéodory's formulation: adiabatic processes alone cannot lead you everywhere. Sometimes you will need some heat.

So not only are we sure that Carathéodory's formulation is correct, but also that it is pretty simple.

Now where do you get the impression that the second law à la Carathéodory is similar to the halting problem?

Take a step back from Carathéodory's statement. All it says is that once you have an isolated mechanical system which you stop meddling with, you cannot reach any state you want.

Doesn't that sound PRECISELY like the halting problem? I.e. once you have written all axioms of your theory and laid down all possible transitions, there will be problems which you cannot solve. Sometimes, you will need to add more axioms.

In fact, if you want to go really deep and encode Carathéodory's formulation, it will result in the same code as the halting problem, with adiabatic processes instead of Turing machines and states instead of problems.

What do you think?


Jerome
0

Yes! Strangely enough, I have thought about this. Here is the idea:

First Step

Model Maxwell's demon as a computer program. Then ask: how does the demon come to know a particle's speed and position before opening the door for selection?

Suppose that the demon can't measure the speed at which particles hit the door. Why? Because that would change the particles' speeds, so the demon has to figure things out before opening it, without looking, without measuring. To be fair, we will let the demon know the rules of the game in advance, i.e. we feed the demon the laws of motion, the interactions of the particles, and the initial conditions: enough of the physical/dynamical model.

Second Step

Now model the gas of particles as a computer program too, running, for every particle, the same code given to the demon. The gas is thus computing a result from its initial conditions, and the demon doesn't know that result until the program halts (if ever). The result is, say, "a particle with the right speed is at the door", and the yes/no question we ask the system is "is there a particle with the right position and enough speed?". If so, the door can be opened and the fast particle can pass into the hot side of the room, setting new initial conditions. (Will those successive problems have an answer, or will the computation run forever?)

There will be a time when there is no particle with enough speed to cross the boundary, so there will be a time when the code runs forever (does not halt), for almost any given threshold.

The demon wants to know the result that is computed by the gas, but the result is, in a sense, contained potentially in the source code of the particles' laws plus the initial conditions; of course, we need to run that program to know it. If the demon runs the same program, waiting for the right speed at the output, the program could halt or it could run forever (but we also suppose that the demon has no more computational power than the gas, so it won't be able to decide the door opening in time).

The demon could try to figure out the program's output (or whether it will halt) by inspecting the source code and inputs without running them, but that is like trying to solve the Halting Problem. Why? Because the demon doesn't know which laws and initial conditions it will be fed, so it must be prepared to solve the problem for any set of laws and initial conditions, and we know that is not possible in general; it would need an oracle. If it could do it, that would be enough to build a demon that generates energy from nothing (and this even setting aside that knowing the laws and initial conditions is already hard enough).

This thought experiment shows how a reduction in entropy, by means of computers, could in some way be bounded by the Halting Problem, seen as the problem of anticipating outcomes in general.
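A toy version of the setup can be simulated directly; here the demon is allowed to cheat and is simply told each speed, and sorting the fast particles to one side visibly separates the "temperatures". All the details (particle count, speed distribution, threshold) are illustrative assumptions, not part of the argument above:

```python
import random

def demon_sort(speeds, threshold):
    """Cheating demon: given every speed for free, it sends fast
    particles to the right half and slow ones to the left."""
    left = [v for v in speeds if v < threshold]
    right = [v for v in speeds if v >= threshold]
    return left, right

def mean_sq(vs):
    """Mean squared speed, a stand-in for temperature."""
    return sum(v * v for v in vs) / len(vs)

rng = random.Random(1)
gas = [abs(rng.gauss(0.0, 1.0)) for _ in range(10_000)]
left, right = demon_sort(gas, threshold=1.0)

# The two halves end up at different 'temperatures': entropy went down,
# but only because the demon got each measurement for free.
print(mean_sq(left) < mean_sq(right))  # True
```

The argument above is precisely that the demon is NOT given the measurements for free: it would have to predict them from the laws and initial conditions, which is where the Halting Problem enters.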

(Sometimes all limits seem to be the same limit...)

More about Particle Laws

The particles' laws are not the main issue of this thought experiment; those laws could be quantum or classical. But we must take into account the complexity of the laws and initial conditions: the complexity of the arrangement of particles is not bounded, and it could hide a lot of added complexity (in an extreme example of initial conditions, you could even insert a whole computer firing particles according to an internal source code, and give that code to the demon).

Hernan_eche