
This may get slightly philosophical, but consider the following program:

using System;
using System.Threading;

bool lampState = false;
TimeSpan timeout = TimeSpan.FromSeconds(1);
while (true)
{
    lampState = !lampState;
    Thread.Sleep(timeout);
    timeout = timeout / 2;
}

DotNet Fiddle link for C# code

This is obviously an implementation of Thomson's lamp in C#.

So the question is: what is the value of lampState after 2 seconds of execution?

Let's assume that no physical law prevents us from constructing an infinitely fast computer. Static code analysis shows that we will never exit the while loop. Does this mean that we can have an algorithm that is arbitrarily fast, but not infinitely fast, not even theoretically (because we run into a contradiction)?

Eiver

8 Answers

12

No, it doesn't mean that.

The problem with Thomson's lamp is that a particular state of an abstract machine is not well-defined. This is nothing unusual. E.g., dividing by 0 also yields an undefined value, because any value we could choose would lead to inconsistencies with other definitions that we would like to keep - definitions we value more, at any rate, than the isolated, unimportant definition of 1/0.

Abstract machines can be implemented in real machines, but there are limits; one of them is that you have only finite matter, space and time available. Therefore, infinitely fast machines, just like infinitely big machines, cannot be implemented. This merely means that we cannot check the question by physical experiment.

There are many other infinitely large machines that we also can't build, but which at least yield a well-defined prediction (for instance, the machine that never does anything would leave the state at the initial value). Those machines cannot be tested by experiment, but they can be predicted theoretically. This machine cannot even be predicted, because it has been carefully defined to fall into an undefined niche of logic.

But this has nothing to do with the question how fast an algorithm can be. There are many algorithms of which we do not know whether (given enough time) they eventually terminate. Some of these may be testable by running them and waiting long enough, but since we don't know how long "long enough" is, physical experiment can only ever give the answer "yes", but never "definitely no".

Therefore you shouldn't be too worried about the fact that a certain machine can't actually be built. Some problems are better handled via experiment, and some via theory. Those that cannot be handled either way are vexing, but almost always they are far removed from any practical problem that we really need to solve.

Example: if mathematics weren't incomplete, i.e. if it were possible to determine the truth of any mathematical statement via a single algorithm, mathematicians would be glad, but everyday life would not be much affected. Even knowing P = NP would do little; it is one thing to know that a large composite can be factored in polynomial time, quite another thing to actually be able to do it.

Kilian Foth
7

There is a problem with the idea of "infinitely fast." If speed of execution is the number of instructions per time unit, and the speed is infinite, it means that the time between instructions becomes zero.

That would mean that all the instructions execute simultaneously, but then one instruction can't use the results from the preceding, so the algorithm falls apart.

So my answer to your final question would be yes.

Guge
3

In any given physical implementation, there's an incompressible minimum of time between two instructions. The interval of time between two runs of the loop is not the argument you pass to Thread.Sleep, but the sum of the time that Thread.Sleep spends in waiting plus the time it takes to execute the rest of the loop: the function call to Thread.Sleep, the calculation of the new timeout value, the calculation of the new lamp state. Even if the program simply had Thread.Sleep(0) all along, you'd have a tight loop that still takes nonzero time per iteration.
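
To make this concrete, here is a minimal timing sketch (my addition, not part of the original answer) showing that even a Thread.Sleep(0) loop has a nonzero per-iteration cost:

using System;
using System.Diagnostics;
using System.Threading;

class TightLoopTiming
{
    static void Main()
    {
        const int iterations = 1_000_000;
        bool lampState = false;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            lampState = !lampState;
            Thread.Sleep(0); // yields the rest of the time slice, but the call itself still costs time
        }
        sw.Stop();
        // Small, but never zero: every iteration has an incompressible minimum cost.
        Console.WriteLine($"Average per iteration: {sw.Elapsed.TotalMilliseconds / iterations} ms");
        Console.WriteLine(lampState);
    }
}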

In a theoretical machine model, you can make the time it takes to run instructions as small as you like, but it's still nonzero. In fact, most models of computation don't actually care about time as defined by clock values; they only care about time in the sense that one thing happens after another: time is defined by the sequence of instructions. So, from a theoretical perspective,

while (true)
{
    lampState = !lampState;
    Thread.Sleep(timeout);
    timeout = timeout / 2;
}
Console.WriteLine(lampState);

isn't all that different from

while (true)
{
    lampState = !lampState;
    timeout = timeout / 2;
}
Console.WriteLine(lampState);

from which you can eliminate the unused variable timeout and get

while (true)
{
    lampState = !lampState;
}
Console.WriteLine(lampState);

which is a perfectly well-defined program.

As long as you keep a model of sequential execution, where one thing happens after another, you still have to spend at least the time to perform lampState = !lampState at each iteration, so each iteration takes some time. The final instruction Console.WriteLine(lampState) is unreachable — it'll never run. This is an important consideration when analyzing programs — for example, the snippet above satisfies the requirement "this program must not print the lamp state".

Classical models of computation define the program's behavior by assigning a value of each variable to each clock tick — the behavior of the program is defined as $\ell : \mathbb{N} \to \{\mathtt{false},\mathtt{true}\}$, where $\ell(t)$ represents the value of lampState at clock tick $t$ and $\mathbb{N}$ is the set of nonnegative integers. Recall that time is defined as one thing happening after another; conventionally we represent the starting time of the program as $0$. You can choose a time scale, but things happening one after another themselves define a time scale, in which lampState = !lampState is the smallest discernible event; in this time scale, $\ell$ alternates between $\mathtt{true}$ and $\mathtt{false}$ at each clock tick. Even if you choose a different clock, this one exists.
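
As a quick illustration (my addition, not the answer author's): with lampState starting at false at tick 0, the value of $\ell$ at any finite tick is just the parity of the tick:

using System;

class InstructionClock
{
    // Ell(t): the value of lampState at finite instruction-clock tick t,
    // assuming lampState starts at false at tick 0 and flips once per tick.
    static bool Ell(int t) => t % 2 == 1;

    static void Main()
    {
        for (int t = 0; t < 6; t++)
            Console.WriteLine($"ell({t}) = {Ell(t)}"); // false, true, false, true, ...
    }
}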

Now when dealing with theory, you can define a new meaning for the syntax. In the time scale defined by flipping the lamp taking one clock tick, “after the execution of the loop” means an infinite clock value, in the sense that this clock value is larger than any finite clock value — the clock tick represents an event that comes after all possible counts of execution of the loop body. Mathematicians have long thought about how you could have numbers that are larger than any finite number, and came up with infinite ordinals. Of course, infinite ordinals don't enjoy all the nice properties of finite ordinals; c'est la vie.

A sequence of finite ordinals (i.e. integers as we know them) that continues with infinite ordinals is called transfinite. So what we're looking at here is transfinite time.

(Note that I'm saying that time reaches infinite values, while the time spent per instruction remains constant. You could say instead that the time spent per instruction goes down to zero, and the time value remains finite. But all that does is change the time scale, not the semantics — the notion of time defined by the succession of instructions still exists, whether you call the time values $0,1,2,3,4,\ldots$ or $0,\frac{1}{2},\frac{3}{4},\frac{7}{8},\frac{15}{16},\ldots$.)
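
To spell out that rescaling (my addition): the shrinking clock sends tick $t$ to $\sum_{k=1}^{t} 2^{-k} = 1 - 2^{-t}$, so every finite tick happens strictly before time $1$, and tick $\omega$ corresponds to time $1$ itself: the Thomson's lamp schedule, rescaled to a total duration of $1$.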

So what is the value of lampState when time reaches the smallest non-finite ordinal (which mathematicians write $\omega$)? You get to choose! It's theory, remember? When dealing with theory, definitions are purely arbitrary. Useful definitions, of course, are motivated by considerations such as practical applications, possibilities for further theoretical considerations, and aesthetics. It's hard to see what practical application defining $\ell(\omega)$ could have. Aesthetically, there's no reason to favor one value over the other.

And theoretically? All signs point to leaving the value undefined. The function $\ell$ defined by the instruction clock ($\ell(x) = \mathtt{true}$ if $x$ is odd, $\ell(x) = \mathtt{false}$ if $x$ is even) is not continuous at $\omega$. If you took calculus in college, you've probably studied continuity of functions of real variables. The same concept is also useful with other kinds of mathematical objects when studying computation, notably on posets to define the semantics of programs. A lot of reasoning relies on continuous functions. Defining $\ell(\omega)$ would be very much like defining the value of $0/0$: you can declare that $0/0 = 42$ if you like, but it won't buy you anything — that new definition won't help with any algebraic theorem about multiplication and division that only holds for nonzero values.

If you'd like to read more on this topic, models of computation with infinite time are called hypercomputation.

Gilles 'SO- stop being evil'
2

Disclaimer: This post is written by a layman, without any theoretical computer science knowledge beyond what is typically known to laymen programmers.

Readers seeking a formal approach should instead read about Hypercomputation.


My post does not directly answer the question (about Thomson's Lamp), but discusses the properties of an insanely fast computer, and hints that Thomson's Lamp itself cannot be implemented on a computer that is merely insanely fast, simply because such a computer will never see its RTC advance by the time period required for even the first toggle of its light switch.

By definition, Thomson's Lamp requires an RTC connected to the physical notion of time. It cannot be implemented on a computer that has no access to an RTC or to the outside world, and in particular it cannot be implemented on top of the software's own measurement of its execution progress.


Instead of considering a literally infinitely fast computer, we first consider an "insanely, but still finitely fast" computer.

The computer may or may not have access to a "real time clock" (RTC). An RTC is a device accessible by the computer (and by the software) that can tell the time as it passes.


In the absence of any RTC, the only notion of progress (changes in states) would be the effects of the execution of the program:

  • The program must execute its own instructions;
  • The program can perform arithmetic operations and memory operations
    • In particular, the program can implement an integer counter, and increment it one at a time.
      • See note on "optimization".
    • Or, it may implement a mathematical operation that is hard to optimize, so that the computer has to actually carry out the steps.

(Note on optimization.)

  • A for-loop that executes an increment on an integer variable a million times may be replaced by code that simply increments the same integer variable by one million, in a single step (see the sketch after this list).
  • A for-loop that repeatedly evaluates the same expression, where every such evaluation can be shown (proven by a compiler) to give the same numerical result with no other side effects (i.e. no changes to other program variables), can have that evaluation moved out of the for-loop.
  • A program that is originally written to first perform some heavy lifting (known to consume some amount of time), and then set a flag to indicate "completion", may be transformed into another program that sets the "completion" flag first, and then chugs along with the heavy lifting.
  • There are many others; but for a computer to be a computer, there must be some kind of operations, or workloads, for which it must carry out multiple steps, without the possibility of obtaining the calculation result in a magically faster way.
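
Here is a sketch (my addition) of the first transformation above:

// Before: one million increments, one at a time.
int counter = 0;
for (int i = 0; i < 1_000_000; i++)
{
    counter++;
}

// After: an optimizer may collapse the loop into a single assignment,
// because the observable effect on the program state is identical.
counter = 1_000_000;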

The subsequent discussion will focus on the use of such irreducible computation operations to observe the notion of progress.



For the sake of time, now we go back to the thing mentioned earlier - the RTC, or any time-telling device.

As the computer gets more and more insanely fast, it will observe that the RTC "goes slower" in a relative sense. That is, if the computer is set to perform a possibly unbounded number of operations in a loop (counting the number of operations performed) until the RTC shows that a certain amount of time has elapsed, a faster computer will find that it can run more operations in that time period.
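
A minimal sketch of such a calibration loop (my addition; Stopwatch stands in for the RTC):

using System;
using System.Diagnostics;

class RtcCalibration
{
    static void Main()
    {
        long operations = 0;
        var rtc = Stopwatch.StartNew(); // stand-in for a real-time clock
        while (rtc.ElapsedMilliseconds < 100)
        {
            operations++; // the irreducible unit of progress
        }
        // The faster the machine, the larger this count; on an "infinitely fast"
        // machine the counter would overflow before the RTC ever advances.
        Console.WriteLine($"Operations in 100 ms: {operations}");
    }
}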

So, a truly theoretically infinitely fast computer will:

  • If its own progress-counter has a finite precision, it will overflow before any amount of real-world time has elapsed.
    • This is no laughing matter. The standard library (runtime) of the famous Turbo Pascal 5, 6 and 7 had an infamous RTC speed-calibration overflow exactly like this (the "Runtime Error 200" bug, triggered on sufficiently fast CPUs). It essentially turned Turbo Pascal into an abruptly obsolete programming environment by the turn of the century.
  • If the program implements a progress-counter using arbitrary-precision integers, allocating more memory as more bits are needed to represent the counter's increasingly larger values, then - as Pieter B mentioned in a comment - it will consume memory at a rate of O(log Freq). The eventual outcome depends on whether the computer has finite or infinite memory.
  • If the program does nothing but repeatedly query the RTC, it will see that the RTC never seems to move.
    • Again, this is no laughing matter. Search "dead RTC".
rwong
1

Many years ago I was asked for help by a mate who studied economics. His professor had developed a model for work per time: After you perform X units of work, your experience grows, and therefore you can perform one unit of work Y percent faster. And my mate couldn't wrap his head around it.

Since you ask about theory, a model of computation made by a professor of economics surely counts. And with his model, you can easily figure out that you can perform any amount of work in some fixed time.

Let's translate this into computers. Assume you have a computer that today runs with a clock speed of 1GHz. And every trillion clocks the clock speed increases by exactly 1 percent. With this model, your computer can do an unlimited amount of work in exactly 101,000 seconds. (Note that the time until the clock speed doubles gets exponentially shorter). Most of the work will be done in the last second.
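
Checking the arithmetic (my addition): the $k$-th block of a trillion clocks runs at $10^9 \cdot 1.01^k$ Hz, so it takes $10^{12}/(10^9 \cdot 1.01^k) = 1000/1.01^k$ seconds, and the total time for infinitely many blocks is the geometric series $\sum_{k=0}^{\infty} \frac{1000}{1.01^k} = \frac{1000}{1 - 1/1.01} = 1000 \cdot 101 = 101{,}000$ seconds.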

gnasher729
1

The universe is finite, not infinite. Any finite volume of space contains only a finite amount of information.

Your machine would require an infinite number of states. Therefore your idea can never work, even with infinite time.

Pieter B
1

Oh that's easy! It's false.

We just need to know how many computations occur and determine whether it's an even or odd number! So ...

If we have,

t = time for a single Loop
N = Number of loops
T = Total execution time

And we know that the basic relationship between these variables is:

t * N = T

And since an infinitely fast computer must be capable of executing an infinite number of loops N in any T, it follows that t must satisfy N = ∞ for any T:

t = T / ∞

Hence, our time per loop t is necessarily 0:

0 = T / ∞

And now things become clear: t is actually a constant. And if we plug our constant t into the initial equation:

0 * N = T

We see that for any N, T is also always 0!

So, regardless of our total runtime limit T, the actual runtime is always 0. Substituting 0 for T:

t * N = 0

And with a little basic algebra:

N = 0 / t

For any value t, the total number of executions is 0. And the state of lampState remains unchanged.

And this can be proven empirically. By simply setting the loop-cycle duration t on your own computer to 0 (turn the power off), you'll find that it does absolutely nothing.

svidgen
1

Malament-Hogarth space-times allow for infinite computations to be performed in finite time (in the observer's frame of reference), by essentially placing a classical computer into a black hole and then retrieving it. Practically, there are some difficulties in extracting intact objects from a black hole, but the theoretical framework is there.

Glorfindel