When learning about computer architecture and how it works, we are taught that the lowest-level language the machine understands is binary, i.e. 1s and 0s, and that anything we input has to be converted into binary. But if binary is just numbers, wouldn't that mean we need yet another interpreter/compiler to transform binary into actual machine language? We all know from electronics that a computer is mainly composed of a CPU, which is an IC made out of transistors and so on, and the only thing those components understand is electricity, so electricity would be the lowest "language" a computer understands. So my concern is: is binary really 1s and 0s, or are the 1s and 0s just used to represent the absence or presence of electricity? Supposing it is just a representation of the absence or presence of electricity, wouldn't there be another intermediate or even lower-level language between the commands we input and binary, so that the circuits know where to send current and where not to?
3 Answers
Digital computers work so that (almost) at any given point in time, any wire carries (roughly) one of two possible voltages, one signifying $0$ and the other signifying $1$. The voltages depend on the convention being used. In this sense, digital computing does work with $0$s and $1$s. However, even digital computers interface with analog devices, such as physical storage and networks. The way that data is encoded in networks can be somewhat different, since several bits can be encoded at once, depending on the encoding.
Let me explain the qualifications in the first sentence above. "Almost" refers to the fact that when wires switch from $0$ to $1$ or vice versa, there are intermediate voltages. These switches are synchronized across all the wires, so that whenever wires are "read", any switch has already occurred. "Roughly" refers to the fact that the voltages are not exact: there are two small ranges of voltages which correspond to $0$ and $1$. Devices should be able to "read" bits throughout these ranges, but usually "write" them more restrictively.
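To make the "read"/"write" ranges concrete, here is a minimal sketch in Python. The threshold voltages are illustrative, roughly TTL-like assumptions, not values from any particular device's datasheet.

```python
# Minimal sketch of reading a wire's analog voltage as a logic level.
# All threshold values below are assumptions for illustration only.

V_IL_MAX = 0.8   # highest input voltage still read as a 0 (assumed)
V_IH_MIN = 2.0   # lowest input voltage read as a 1 (assumed)

V_OL_MAX = 0.4   # a gate "writes" a 0 no higher than this (assumed)
V_OH_MIN = 2.4   # a gate "writes" a 1 no lower than this (assumed)

def read_bit(voltage):
    """Map a measured voltage to 0, 1, or None for the in-between region."""
    if voltage <= V_IL_MAX:
        return 0
    if voltage >= V_IH_MIN:
        return 1
    return None   # in between: the logic level is undefined/unreliable

print(read_bit(0.3))   # 0
print(read_bit(3.1))   # 1
print(read_bit(1.4))   # None -> caught mid-transition or out of spec
```

The gap between the stricter "write" levels and the wider "read" ranges is the noise margin: a signal can pick up some disturbance between gates and still be read correctly.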
The question is not exactly clear and contains some misconceptions or misapplied terminology (e.g. "lower language"), but interpreting it in a more general, metaphorical, or loose way, yes:
- One interesting case study here is the logic flip-flop (see also How to understand the SR Latch), which, because it has a feedback loop, cannot really be analyzed logically in terms of 0s and 1s and has to be analyzed more as an analog device (a 0/1-level sketch of it follows this list).
- The timing of the circuit is crucial to understanding the dynamic nature of the flip-flop; an especially useful way of understanding it is a voltage/time plot.
- "Beneath" the 0s and 1s of logic circuits are continuous, non-discrete analog voltages, and binary electronics can be said to tightly "control" those analog signals within highly constrained boundaries.
- There are cases where this "breaks down": for example, on an IC, defective gates or marginal designs can cause the chip to fall outside the binary "envelope" and fail.
- Another basic component of non-binary electronics is noise. Much of IC design can be regarded as controlling or minimizing it; in fact, binary electronics can be seen as a means of eliminating noise from circuitry to achieve "noiseless" signals.
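As referenced in the first bullet, here is a minimal sketch of an SR latch built from cross-coupled NOR gates, simulated purely at the 0/1 level (Python, purely for illustration). A real latch settles through continuous analog voltages; the loop below is only an idealization of that settling.

```python
# Sketch of a NOR-based SR latch simulated at the 0/1 level.
# The feedback loop means the outputs depend on the previous state;
# that history-dependence is the latch's "memory". A real latch
# settles via continuous voltages; here we just iterate until stable.

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q, q_bar):
    """Iterate the cross-coupled NOR gates until the outputs settle."""
    for _ in range(10):                   # a few passes suffice to settle
        new_q, new_q_bar = nor(r, q_bar), nor(s, q)
        if (new_q, new_q_bar) == (q, q_bar):
            break
        q, q_bar = new_q, new_q_bar
    return q, q_bar

q, q_bar = 0, 1                                    # assume it starts reset
q, q_bar = sr_latch(s=1, r=0, q=q, q_bar=q_bar)    # set   -> (1, 0)
q, q_bar = sr_latch(s=0, r=0, q=q, q_bar=q_bar)    # hold  -> still (1, 0)
q, q_bar = sr_latch(s=0, r=1, q=q, q_bar=q_bar)    # reset -> (0, 1)
print(q, q_bar)
```

The forbidden input S = R = 1, followed by both returning to 0, is exactly where this 0/1 idealization breaks down (metastability) and the analog voltage/time view from the second bullet becomes necessary.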
You are folding several different things into one question, so it may be a good idea to untangle the problem for yourself. You are talking about physics, industry conventions, and software/computer architecture.
that the lowest-level language the machine understands is binary, i.e. 1s and 0s, and that anything we input has to be converted into binary.
Let's take this part first. Modern-day computers are based on electrical circuits. For example, an ARM processor might run at 5 volts. Because of the way the processor is built in silicon, it understands a signal of roughly 0 volts as a 0 bit and a signal of roughly 5 volts as a 1 bit. That definition was chosen by someone.
But if binary is just numbers, wouldn't that mean we need yet another interpreter/compiler to transform binary into actual machine language?
I do not completely understand what you are asking here, but let's look at it from a software perspective. The program is eventually transformed into actual lines of bits, plus lines of bits that tell the CPU what to do with them. These are fed into the CPU (the processing unit) and, based on the processor architecture, an action is performed on the data.
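To make the idea of "bits that tell the CPU what to do with the bits" concrete, here is a hypothetical sketch in Python. The 8-bit instruction format and the opcodes are invented purely for illustration and do not correspond to any real architecture.

```python
# Hypothetical example (not a real instruction set): an 8-bit
# "instruction" whose top 2 bits select the operation and whose lower
# 6 bits name two 3-bit registers. A CPU's decoder is wired so that
# bit patterns like these steer data through the right circuitry.

REGISTERS = [0] * 8          # a toy register file

def execute(instruction):
    opcode = (instruction >> 6) & 0b11     # top 2 bits: what to do
    dst    = (instruction >> 3) & 0b111    # next 3 bits: destination register
    src    = instruction & 0b111           # lowest 3 bits: source / immediate
    if opcode == 0b00:                     # "load immediate": dst <- src value
        REGISTERS[dst] = src
    elif opcode == 0b01:                   # "add": dst <- dst + register src
        REGISTERS[dst] += REGISTERS[src]
    # ... further opcodes would be decoded the same way

execute(0b00_001_101)   # put 5 into register 1
execute(0b00_010_011)   # put 3 into register 2
execute(0b01_001_010)   # register 1 += register 2
print(REGISTERS[1])     # 8
```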
We all know from electronics that a computer is mainly composed of a CPU, which is an IC made out of transistors and so on, and the only thing those components understand is electricity, so electricity would be the lowest "language" a computer understands. So my concern is: is binary really 1s and 0s, or are the 1s and 0s just used to represent the absence or presence of electricity?
As I mentioned a little above, this is simply the way computers are built. You could run the processor at other voltages, but it probably won't work, because a particular definition has been made of which voltage counts as a 0 bit and which as a 1 bit.
Supposing it is just a representation of the absence or presence of electricity, wouldn't there be another intermediate or even lower-level language between the commands we input and binary, so that the circuits know where to send current and where not to?
Yup, basically analog signals. But keep in mind that most of the time these are converted by an ADC (analog-to-digital converter) into a bit value that represents a predefined voltage range on that line.
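A rough sketch of what such an idealized ADC does, assuming an illustrative 5 V reference and 10-bit resolution (both assumptions, not the specification of any specific part):

```python
# Idealized ADC sketch: map a voltage in [0, V_REF] to an N-bit code.
# V_REF and N_BITS are illustrative assumptions.

V_REF = 5.0    # full-scale reference voltage (assumed)
N_BITS = 10    # converter resolution (assumed)

def adc_read(voltage):
    """Quantize a voltage into one of 2**N_BITS discrete binary codes."""
    voltage = min(max(voltage, 0.0), V_REF)   # clamp to the input range
    levels = (1 << N_BITS) - 1                # 1023 for a 10-bit ADC
    return round(voltage / V_REF * levels)

print(adc_read(0.0))   # 0     -> 0b0000000000
print(adc_read(2.5))   # ~512  -> roughly mid-scale
print(adc_read(5.0))   # 1023  -> 0b1111111111
```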