In any given physical implementation, there's an incompressible minimum of time between two instructions. The interval of time between two runs of the loop is not the argument you pass to Thread.Sleep, but the sum of the time Thread.Sleep spends waiting and the time it takes to execute the rest of the loop: the function call to Thread.Sleep, the calculation of the new timeout value, the calculation of the new lamp state. Even if the program simply had Thread.Sleep(0) all along, you'd have a tight loop that still takes nonzero time per iteration.
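If you want to observe this on a real machine, here's a minimal sketch (the class name and the iteration count are mine; the numbers you get depend entirely on your OS and hardware) that measures the per-iteration cost of a Thread.Sleep(0) loop:

using System;
using System.Diagnostics;
using System.Threading;

class SleepOverhead
{
    static void Main()
    {
        const int iterations = 100_000;
        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            Thread.Sleep(0); // sleeps for "zero" time, yet the iteration isn't free
        }
        stopwatch.Stop();
        // The average cost per iteration is small, but it is never zero.
        double perIteration = stopwatch.Elapsed.TotalMilliseconds / iterations;
        Console.WriteLine($"~{perIteration} ms per iteration");
    }
}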
In a theoretical machine model, you can make the time it takes to run instructions as small as you like, but it's still nonzero. In fact, most models of computation don't actually care about time as defined by clock values; they only care about time in the sense that one thing happens after another: time is defined by the sequence of instructions. So, from a theoretical perspective,
while (true)
{
    lampState = !lampState;
    Thread.Sleep(timeout);
    timeout = timeout / 2;
}
print(lampState);
isn't all that different from
while (true)
{
    lampState = !lampState;
    timeout = timeout / 2;
}
print(lampState);
from which you can eliminate the unused variable timeout and get
while (true)
{
    lampState = !lampState;
}
print(lampState);
which is a perfectly well-defined program.
As long as you keep a model of sequential execution, where one thing happens after another, each iteration still has to spend at least the time to perform lampState = !lampState, so every iteration takes nonzero time. The final instruction print(lampState) is unreachable: it'll never run. This is an important consideration when analyzing programs; for example, the snippet above satisfies the requirement “this program must not print the lamp state”.
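Real compilers perform exactly this kind of reachability analysis. For example, the C# compiler flags the statement after the loop (here as a complete program, with Console.WriteLine standing in for print and an arbitrary initial lamp state):

using System;

class Lamp
{
    static void Main()
    {
        bool lampState = false; // initial value chosen arbitrarily; the original never fixes it
        while (true)
        {
            lampState = !lampState;
        }
        Console.WriteLine(lampState); // warning CS0162: Unreachable code detected
    }
}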
Classical models of computation define the program's behavior by assigning a value to each variable at each clock tick: the behavior of the program is a function $\ell : \mathbb{N} \to \{\mathtt{false},\mathtt{true}\}$ where $\ell(t)$ represents the value of lampState at clock tick $t$ and $\mathbb{N}$ is the set of nonnegative integers. Recall that time is defined as one thing happening after another; conventionally we represent the starting time of the program as $0$. You can choose a time scale, but things happening one after another already define one, in which lampState = !lampState is the smallest discernible event; on this time scale, $\ell$ alternates between $\mathtt{true}$ and $\mathtt{false}$ at each clock tick. Even if you choose a different clock, this one exists.
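For concreteness, assuming the lamp starts off, i.e. $\ell(0) = \mathtt{false}$ (the choice of initial value is mine; the argument is symmetric), the instruction clock gives:

$$\ell(t) = \begin{cases} \mathtt{false} & \text{if } t \text{ is even}, \\ \mathtt{true} & \text{if } t \text{ is odd}. \end{cases}$$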
Now, when dealing with theory, you can define a new meaning for the syntax.
In the time scale where flipping the lamp takes one clock tick, “after the execution of the loop” means an infinite clock value, in the sense that this clock value is larger than any finite clock value: the clock tick represents an event that comes after all possible counts of executions of the loop body. Mathematicians have long thought about how you could have numbers that are larger than any finite number, and came up with infinite ordinals. Of course, infinite ordinals don't enjoy all the nice properties of finite ordinals; c'est la vie.
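As a small taste of what breaks: ordinal addition is not commutative,

$$1 + \omega = \omega \neq \omega + 1,$$

because prepending one step to an infinite sequence doesn't change its order type, while appending one step after all of it does.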
A sequence of finite ordinals (i.e. integers as we know them) that continues with infinite ordinals is called transfinite. So what we're looking at here is transfinite time.
(Note that I'm saying that time reaches infinite values, while the time spent per instruction remains constant. You could say instead that the time spent per instruction goes down to zero, and the time value remains finite. But all that does is change the time scale, not the semantics: the notion of time defined by the succession of instructions still exists, whether you call the time values $0,1,2,3,4,\ldots$ or $0,\frac{1}{2},\frac{3}{4},\frac{7}{8},\frac{15}{16},\ldots$.)
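To make the rescaling explicit, the second sequence of clock values is $\tau(t) = 1 - 2^{-t}$ (my notation), which satisfies

$$\tau(0) = 0,\quad \tau(1) = \tfrac{1}{2},\quad \tau(2) = \tfrac{3}{4},\quad \ldots,\quad \sup_{t \in \mathbb{N}} \tau(t) = 1,$$

so the tick that sat at $\omega$ on the instruction clock sits at the finite value $1$ on the new clock. The order of events, which is all the semantics cares about, is identical.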
So what is the value of lampState when time reaches the smallest non-finite ordinal (which mathematicians write $\omega$)? You get to choose! It's theory, remember? When dealing with theory, definitions are purely arbitrary. Useful definitions, of course, are motivated by considerations such as practical applications, possibilities for further theoretical considerations, and aesthetics. It's hard to see what practical application defining $\ell(\omega)$ could have. Aesthetically, there's no reason to favor one value over the other.
And theoretically? All signs point to leaving the value undefined. The function $\ell$ defined by the instruction clock ($\ell(x) = \mathtt{true}$ if $x$ is odd, $\ell(x) = \mathtt{false}$ if $x$ is even) is not continuous at $\omega$. If you took calculus in college, you've probably studied continuity of functions of real variables. The same concept is also useful with other kinds of mathematical objects when studying computation, notably on posets to define the semantics of programs. A lot of reasoning relies on continuous functions. Defining $\ell(\omega)$ would be very much like defining the value of $0/0$: you can declare that $0/0 = 42$ if you like, but it won't buy you anything, since that new definition won't help with any algebraic theorem about multiplication and division that only holds for nonzero values.
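To see the discontinuity concretely: for $\ell$ to be continuous at $\omega$, the value $\ell(\omega)$ would have to agree with the limit of $\ell(t)$ as $t$ approaches $\omega$, but no such limit exists, since

$$\lim_{\substack{t \to \omega \\ t\ \text{even}}} \ell(t) = \mathtt{false} \neq \mathtt{true} = \lim_{\substack{t \to \omega \\ t\ \text{odd}}} \ell(t),$$

and these two subsequences disagree forever.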
If you'd like to read more on this topic, models of computation with infinite time fall under the umbrella of hypercomputation.