On many systems, random number generators are seeded with the current system time, which is nice because the exact moment at which the OS switches to a given process is hard to predict. When the clock has millisecond or nanosecond resolution, this yields a reasonably unpredictable seed. But the time value is usually a 64-bit integer, so in problems concerning randomized algorithms, can we assume that the RNG they use is simply a map $f: \Bbb{Z}/2^{64} \to \Bbb{Z}/2^{64}$? Or must we always assume that it's $f: \Bbb{N} \to \Bbb{Z}/2^n$, where $n$ is the word size (e.g. $n = 64$)?
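As a concrete illustration of the seeding step I mean, here is a minimal Python sketch (an assumption for illustration, not any particular library's actual default; e.g. CPython's `random` prefers OS entropy when available). The explicit 64-bit mask is just to make the "seed space is $\Bbb{Z}/2^{64}$" model visible:

```python
import random
import time

# Minimal sketch: seed an RNG from the nanosecond clock.
# Masking to 64 bits makes explicit that the seed is drawn
# from (at most) 2^64 values, matching the model in the question.
seed = time.time_ns() & ((1 << 64) - 1)  # truncate to a 64-bit word
rng = random.Random(seed)

print(rng.random())  # output determined entirely by the 64-bit seed
```

Under this reading, the "randomness" of the whole run is a deterministic function of that one 64-bit input.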
I think the first version lends itself better to analysis, because then there are only finitely many possible RNGs at each "inductive stage" $n$, where "inductive stage" refers to the obvious induction on $n$ one could employ in proofs about randomized algorithms. For example, it is still an open problem whether every randomized polynomial-time algorithm can be derandomized into a deterministic polynomial-time algorithm (equivalently, whether $\mathsf{BPP} = \mathsf{P}$).
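To make the finiteness claim explicit (a routine counting observation, not something I take to be deep): for each fixed $n$,

$$\bigl|\{\, f : \Bbb{Z}/2^n \to \Bbb{Z}/2^n \,\}\bigr| \;=\; (2^n)^{2^n} \;=\; 2^{\,n \cdot 2^n},$$

which is finite for every $n$, whereas the set of maps $\Bbb{N} \to \Bbb{Z}/2^n$ is uncountable for $n \geq 1$. So only the first model gives, at each stage $n$, a finite catalogue of candidate RNGs over which one could hope to induct.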