23

The question is basically in the title. I know that a computer's hardware is of course a physical object, whereas an interpreter is some abstract thing that does something abstract with the code it is given. Still, can we say that the way the processor is built is an implementation of an interpreter?

Machine language is a sequence of physical states (0s and 1s) in some physical memory, and if the order of these physical states follows certain rules (the syntax of the machine language), then the processor is built in a way that naturally leads to performing calculation steps (and changing some of the memory), i.e. it "runs" the program.

As this question's answer points out, compilers translate from one language to another, while interpreters for each language "run" the program directly. It would be consistent if this picture stretched down to machine language as well, and that's why I'm asking.

If one can make the analogy, when would it break down? What features of language semantics (given by the interpreter), for example expressions and values, are there that we can't find at the physical level anymore? In what way does the processor behave differently from what is expected of an interpreter?

Quantumwhisp

11 Answers

34

It's not such a bad way of looking at things.

On most modern CPUs, the instruction set architecture (ISA for short) is abstract, in the sense that it doesn't dictate which hardware techniques a compliant implementation must use. Nothing in the ISA specification says whether the implementation uses register renaming or branch prediction, whether vector instructions are executed in parallel or pipeline-streamed, or even whether the core is scalar or superscalar.

Indeed, on many modern CPUs, there is a certain amount of translation from the ISA into an internal representation to be executed more efficiently, such as Intel's micro-operations (uOps for short).
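To make that idea concrete, here is a toy sketch in Python (emphatically not Intel's actual decoding scheme; the instruction syntax and uop names are invented for illustration) of how a single ISA-level instruction might be split into simpler internal micro-operations:

    # Hypothetical decoder: splits one ISA-level instruction into
    # micro-ops. The instruction format and uop names are made up.
    def decode_to_uops(instruction):
        if instruction.startswith("add "):          # e.g. "add eax, [0x1000]"
            dst, src = instruction[4:].split(", ")
            if src.startswith("["):                 # memory operand
                return [
                    ("load", "tmp", src),           # uop 1: fetch operand from memory
                    ("add",  dst, "tmp"),           # uop 2: register-to-register add
                ]
            return [("add", dst, src)]              # already a single uop
        raise ValueError("unknown instruction")

    print(decode_to_uops("add eax, [0x1000]"))
    # [('load', 'tmp', '[0x1000]'), ('add', 'eax', 'tmp')]

The point is that the "machine language" the programmer sees is already one level removed from what the execution units consume, which is exactly the relationship between an interpreted language and its interpreter.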

Pseudonym
8

No, it would not be wrong. Indeed, we can use the Futamura projections to help us understand the situation.

The first Futamura projection says that if we specialize hardware with some native machine code, then we get a standalone executable. For example, classic video-game consoles combined an interpreter in the console with native code in each game cartridge, so that plugging a cartridge into a console produced an executable system. As other answers point out, computability theory implies that we can write (slow) software that is equivalent in effect to the hardware, and indeed emulators exist for most classic consoles.

Edit: A Futamura specializer is a specific sort of program which takes two unevaluated programs and returns their composition. It is sometimes called a Kleene compositor due to its relationship with Kleene's parameterization theorem (the $S^m_n$ theorem). For more details, see Piponi (with helpful pictures); for even more details, see "Partial Evaluation and Automatic Program Generation". In older computers, inserting a cartridge would close electronic circuits, connecting daughterboard to motherboard and allowing all of the chips in the cartridge to communicate with the chips in the host. This is Futamura specialization: a piece of hardware that does nothing on its own was added to existing hardware in order to create a system which is limited in functionality but relatively fast at execution.

The second Futamura projection says that if we specialize the specialization process itself with the interpreter as the curried argument, then we get a compiler. I'll extend the video-game example. A local bar has a large selection of classic games; if we ask the bartender to load a cartridge, then the bartender will allocate a console and deliver controllers. In essence, the bartender is a Futamura compiler.

Less facetiously, we could imagine a compiler in the traditional sense, compiling high-level programmer expressions to native code, but generated using Futamura projections. If this compiler's underlying interpreter is some piece of hardware, then the compiler will directly call that hardware in order to partially evaluate the static parts of its input programs. And, if one thinks about it for a moment, any partially-evaluating compiler is doing something morally equivalent to this; Futamura would just require that the emulation of instructions be done faithfully, using instructions equivalent to the original hardware's.
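As a sketch of how the projections fit together, here is a toy Python rendition. The specialize below is only a closure-building stand-in for a real partial evaluator (which is the hard part), and the tiny interpreted language is invented:

    # Hypothetical partial evaluator: fixes a program's first argument.
    # A real specializer would also simplify the residual program.
    def specialize(program, static_arg):
        return lambda dynamic_arg: program(static_arg, dynamic_arg)

    # Toy "hardware": an interpreter for a tiny language of unary ops.
    def interpreter(source, value):
        for op in source:
            if op == "inc":
                value += 1
            elif op == "double":
                value *= 2
        return value

    # First projection: interpreter + program = executable.
    executable = specialize(interpreter, ["inc", "double"])
    print(executable(3))                     # 8, i.e. (3 + 1) * 2

    # Second projection: specializer + interpreter = compiler.
    compiler = specialize(specialize, interpreter)
    print(compiler(["double", "inc"])(3))    # 7, i.e. 3 * 2 + 1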

Corbin
7

You could look at it that way. But traditionally, the term "interpreter" refers to software, not hardware.

There are CPU emulators, and these can definitely be considered interpreters of the processor's machine language.
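As an illustration, a minimal emulator is just a fetch-decode-execute loop in software. The two-byte instruction format below is invented rather than taken from any real ISA:

    # A toy CPU emulator: an interpreter whose input language is a
    # made-up two-byte machine code (opcode, operand).
    def run(memory):
        regs = [0, 0, 0, 0]
        pc = 0
        while True:
            opcode, operand = memory[pc], memory[pc + 1]   # fetch + decode
            pc += 2
            if opcode == 0x01:       # LOADI: r0 <- immediate
                regs[0] = operand
            elif opcode == 0x02:     # ADDI: r0 <- r0 + immediate
                regs[0] += operand
            elif opcode == 0xFF:     # HALT
                return regs

    program = [0x01, 40, 0x02, 2, 0xFF, 0]
    print(run(program))              # [42, 0, 0, 0]

A hardware CPU performs the same fetch-decode-execute cycle, just in silicon rather than in software.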

The distinction between hardware and software implementations is useful in practice because it dictates how easy it is to make changes. If there's a bug in a hardware CPU, you will likely have to replace the processor chip or the whole computer to fix it (unless there is a software workaround). But if there's a problem in a software interpreter, you can just install an update (which has become easier over the years, as digital media became more compact and then migrated to Internet distribution). This means that hardware development requires more stringent specification and testing: bugs that make it to customers are much more serious and costly to resolve.

Barmar
4

You can build any program as a circuit; check e.g. most texts on computability, which often discuss circuit complexity for solving problems in addition to the more traditional Turing machine models (closer to normal programs). For some problems it makes economic/performance sense to create a custom circuit (look e.g. at FPGAs); most of the time it is much cheaper and more flexible to just write a program for a more or less general-purpose CPU.
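As a tiny example of a "program as a circuit", here is a one-bit full adder written purely as gate functions in Python (the gate and signal names are just illustrative):

    # A one-bit full adder built only from gate-level functions,
    # i.e. pure combinational logic with no control flow.
    def xor(a, b):  return a ^ b
    def and_(a, b): return a & b
    def or_(a, b):  return a | b

    def full_adder(a, b, carry_in):
        partial = xor(a, b)
        total = xor(partial, carry_in)
        carry_out = or_(and_(a, b), and_(partial, carry_in))
        return total, carry_out

    print(full_adder(1, 1, 1))   # (1, 1), i.e. 1 + 1 + 1 = 0b11

Chaining such adders gives a ripple-carry adder: the "program" for addition, realized entirely as wiring.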

Whether your CPU is built as a fixed circuit, as a microprogram for a more basic CPU, programmed into an FPGA, or as a program running on some other CPU is really irrelevant. All of them are doing the job of interpreting the instructions.

Bergi
vonbrand
4

If one can make the analogy, when would it break down? What features of language semantics (given by the interpreter), for example expressions and values, are there that we can't find at the physical level anymore? In what way does the processor behave differently from what is expected of an interpreter?

It's a bit of a semantic argument, but "language semantics" themselves don't exist at the physical level; there are physical states and transformations to which we assign a meaning.

You can generally find values in a processor all the way down to the bit level of individual transistors. Expressions, however, get dematerialised: instead, you will find a set of enable and selector bits that route the values to a logic block which computes a function, then route the result to a destination.
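Here is a sketch of that dematerialisation, with invented signal names: the "expression" exists only as a control word steering operands through a fixed logic block.

    # A fixed logic block (an ALU): the operation performed is chosen
    # by selector bits, not by any expression structure in the data path.
    def alu(a, b, op_select):
        if op_select == 0b00:
            return a + b
        if op_select == 0b01:
            return a - b
        if op_select == 0b10:
            return a & b
        return a | b

    # The expression "6 + 7" survives only as the control word 0b00.
    print(alu(6, 7, 0b00))   # 13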

There are also processor features that don't map cleanly to language features: out-of-order execution and speculative execution, for example. Speculative execution in particular is supposed to be entirely hidden from the programmer's model, but in some circumstances it has been used as a security exploit (Spectre).

Interrupts and aborts are also outside the simple machine-code model, as they force a control-flow transfer to a different piece of machine code.

pjc50
4

If you look too much at words, they'll scurry away. In general the boundaries on meanings of words are soft.

As others have said, you are not wrong. But the word "interpreter" is usually used precisely to mean "not the CPU".

How does an interpreter effectively carry out its interpretation? What is it interpreting on? (Or into?)

As the word is customarily used, an interpreter is a bunch of instructions carrying out some other bunch of instructions (the program), while the effective execution of the interpreter's own code is done by something else (the CPU).

I do not know the details of CPUs' inner workings, but at some point (say, after microcode translation) "instructions" turn into the activation of different parts of the CPU: data turns into the state of internal registers, some circuits are activated, producing (physical) internal state transitions, some internal communication lines (buses) are enabled or disabled, states are moved from one register to another or to memory, [much more complicated real stuff omitted], and that's execution.
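The layering is easy to see with a concrete toy. The stack-machine interpreter below (its instruction set is invented) carries out program, while the interpreter's own Python statements are in turn carried out by CPython, which is itself machine code being carried out by the CPU:

    # A toy stack machine: instructions carrying out other instructions.
    def interpret(program):
        stack = []
        for op, arg in program:
            if op == "push":
                stack.append(arg)
            elif op == "add":
                stack.append(stack.pop() + stack.pop())
        return stack.pop()

    print(interpret([("push", 2), ("push", 3), ("add", None)]))   # 5

Each level hands the actual work down to the level below, until it bottoms out in circuits.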

Pablo H
3

TL;DR: No, not wrong.

The words we use are part of communication: you think, you say, I hear, I interpret. Depending on the context, the word "interpreter" may be understood quite differently. So be careful when talking and writing; we have different backgrounds and different ways of interpreting words.

In my world, an interpreter is most often a software program, so I could become confused. But let's dive a little deeper.

1. Executing the machine code: Yes, the hardware fetches bits from memory, "interprets" them as code, and then does what the code says, perhaps multiplying the values in two registers. A software interpreter would do the same. The hardware does some extra things that "pure" software cannot really do, such as consuming power from the power source and creating intended and unintended signals and electrical noise. It is not unusual for hardware designers to run into bugs caused by these unintended effects, especially at the edges of the supply-voltage or temperature range.

NOTE: both hardware and software interpreters may contain bugs, of course, but that is a different story...

2. Executing the machine code in detail: You might know of the Intel CPUs used in a lot of computers, say the 10th-generation i9. This CPU executes x86 instructions (I am simplifying here, but bear with me). The x86 instructions are actually "translated" and "reordered" inside the i9 CPU before actually being executed. This translation process, from one assembly language to the internal language of the processor, may be seen as an interpreter if you wish. The reason for the complicated multi-stage pipeline with translation and reordering is to gain performance.

3. Conclusion: No, you are not wrong to see the hardware as an interpreter. But beware that this is not the common usage, so you may end up confusing the listener or reader. I would stay away from using words that way, but that is me.

ghellquist
3

You can call anything anything: in 2021, all of our components (CPUs, machine code, compilers, interpreters) share so many aspects that you will find an argument in almost any direction. Maybe it would not be quite wrong to call a CPU an interpreter, but I feel it would be very weird and would discount what actual interpreters are about in the first place.

A general-purpose CPU stupidly executes its instructions. If we ignore advanced present-day aspects such as lookahead and heavy parallelization, i.e. think of an old, extremely simple 6502 CPU instead of a current Intel or ARM offering, then you see that the CPU looks at one specific point in RAM or ROM, fetches one instruction, then executes that extremely simple instruction. There is nothing between this execution and the electrons running around in the hardware; at this low level, the machine code is the electronic layer.

So, no, in general I would say the processor is an executor, not an interpreter.

Modern aspects could change this. For example, you could create a CPU which understands CISC commands, but does so by running a microcode preprocessor inside the CPU which translates those CISC commands into RISC commands, which are then executed on the actual machinery of the CPU. In this case you could say that the CPU interprets CISC code. I would accept this especially if the preprocessor has more state than just the next command, i.e. if it makes complex decisions based on previous and upcoming statements. This also leads to branch prediction, parallelization, power management and so on: complex "management" efforts which may reorder or optimize the stream of incoming machine commands. This "preprocessor" would then be an interpreter, shoveling commands to its internal execution engine.

An actual interpreter earns its designation by looping over some more abstract representation of a program (sometimes this can be the source code directly, for example in certain scripting languages close to a shell; sometimes bytecode, which can be generated up front (e.g. Java) or on the fly (e.g. Ruby, Perl)). A common theme is that this bytecode is often compatible between different interpreters; famously, you can run Java bytecode on anything from big servers to tiny embedded machines, which may have nothing much in common at the machine-language level.

Finally, all of this is a moving target anyway, since at each level of abstraction we are combining all kinds of techniques. Your Java interpreter might have a just-in-time compiler, and your pieces of "bytecode" might in fact be machine code that was compiled in the meantime. Your state-of-the-art CPU might have an interpretation layer for compatibility with some CPU generation from 20 years ago. Your compiler might have a compile-time interpreter able to modify compilation in some way, etc.

AnoE
0

Yes, it would be wrong.

While it is plausible to implement compilers/interpreters in hardware, via application-specific hardware accelerators, doing so would not realize a general-purpose computer system (or simply, a computer) or a general-purpose processor architecture.

A computer system includes the interpretation of symbols according to a specified set of rules (i.e., the software component), using computer hardware (typically electronic, but it can be mechanical or electromechanical). One example of a computer is the Turing machine, which can be realized as a mechanical system processing tapes of symbols.

Since processors can execute many kinds of software, from operating systems to compilers/interpreters to application software (from Web browsers to text editors), processors are not implementations of compilers/interpreters.

What you have critically missed is the notion of the instruction set architecture (ISA), which serves as the hardware/software interface [Patterson2021].

Compilers and interpreters enable software to be compiled into the machine code of a specific ISA (via the code-generation step), so that this machine code can run on any processor implementing that ISA (i.e., on any processor architecture or microarchitecture for it) [Patterson2021].

Reference:

@book{Patterson2021,
    Address = {Cambridge, {MA}},
    Author = {David A. Patterson and John L. Hennessy},
    Edition = {Second {RISC-V}},
    Publisher = {Morgan Kaufmann},
    Series = {The Morgan Kaufmann Series in Computer Architecture and Design},
    Title = {Computer Organization and Design: The Hardware/Software Interface},
    Year = {2021}}
Giovanni
0

You can feel free to call a processor an interpreter for its machine language. Except that it is a very, very fast interpreter.

Take an Intel and an ARM processor. You could call the ARM processor an interpreter for the ARM machine language. You could also use the Intel processor to build such an interpreter, except it would probably be an awful lot slower.

gnasher729
-1

If the hardware alone can interpret anything, yes. But in modern electronic digital data processing, the software is separate from the hardware. Even the BIOS or UEFI uses code.

In the mid-twentieth century, the programmer actually rearranged the memory, so the memory was the program; there was no storage per se. That was when the chairman of IBM speculated that, given time, it might be possible to build a computer weighing under 5 tons!

Given that he also predicted the world market might reach 8 computers (eight computers in the entire world, that is), he was a little off! So, if you have no code in the BIOS/CMOS memory, nothing stored on anything, just hardware and electricity, and 'that' can interpret, then the answer to your question is 'yes'.

Otherwise, it is 'no'.

Brian