
Am I right in assuming that a double fault can and must occur only in the processor's kernel mode (or ring 0 on x86), when a (synchronous) exception happens, and never anywhere else?

If the answer is yes, then in newer processors that are compatible with older ones, we can't use instructions that are only defined on the newer CPU in code that runs in kernel mode, if we want to preserve this compatibility, because of the undefined instruction exception - is that right? And one more question: if the CPU executes code that runs in kernel mode, must that code be present in memory, because of the page fault?

And one additional thought: would there be any benefit in implementing an "internal INT enable bit" in the status register that is automatically cleared when an interrupt/exception occurs and set again on return from it? When an exception happens, the hardware would read this bit: if it is set, it jumps to the exception handler address, otherwise it jumps to the double fault handler.
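
To make the proposal concrete, here is a minimal C sketch that simulates the dispatch rule described above. It is only an illustration with invented names (int_enable, raise_exception), not a description of any real CPU:

    #include <stdbool.h>
    #include <stdio.h>

    /* The proposed bit: set in normal operation, cleared while a handler runs. */
    static bool int_enable = true;

    static void exception_handler(int cause)    { printf("exception %d: normal handler\n", cause); }
    static void double_fault_handler(int cause) { printf("exception %d: double fault handler\n", cause); }

    /* What the hardware would do each time an exception is raised. */
    static void raise_exception(int cause)
    {
        if (!int_enable) {              /* bit already cleared: we are inside a handler */
            double_fault_handler(cause);
            return;
        }
        int_enable = false;             /* cleared automatically on exception entry */
        exception_handler(cause);
        int_enable = true;              /* set again by the return-from-exception */
    }

    int main(void)
    {
        raise_exception(1);             /* ordinary case: the normal handler runs */

        int_enable = false;             /* pretend a handler is already active */
        raise_exception(2);             /* nested case: the double fault handler runs */
        return 0;
    }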

If it is architecture/OS dependent, I choose Linux on MIPS.

Sorry for my English.

2 Answers


I have a lot of experience with the x86, but none with MIPS, sorry - but I believe that the following description applies to it anyway.

A Double Fault only occurs in a very particular circumstance: If, while attempting to start an exception handler, another exception occurs, then the first exception is abandoned and the Double Fault exception handler is called instead.

Here is an example of a simple fault. If code attempts to access invalid memory:

*(int *)0 = 0xdead;

then the CPU will detect the NULL pointer dereference and try to start the Memory fault handler. This could be in User (ring 3) or Supervisor (ring 0) code, and no Double Fault would occur - the CPU would simply try to start the Memory fault handler.

Imagine though that the OS had a bug and the Memory fault handler itself was in invalid memory. Thus, while trying to start the Memory fault handler, another fault happened and the first fault couldn't be handled. The Double Fault handler would then be called. (If a fault occurs while trying to start the Double Fault handler, the x86 CPU simply shuts down with a Triple Fault. The PC hardware detects this and resets the CPU.)
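
Here is a rough C sketch of that escalation, purely as an illustration - the function names are invented and this is not real x86 microcode:

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Pretend these return false when the handler itself cannot be started,
     * e.g. because its code or descriptor lives in invalid memory. */
    static bool start_memory_fault_handler(void) { return false; /* buggy OS in this example */ }
    static bool start_double_fault_handler(void) { return true;  }

    static void triple_fault(void)
    {
        puts("triple fault: CPU shuts down, hardware resets it");
        exit(0);
    }

    /* The escalation the CPU performs while trying to deliver one fault. */
    static void deliver_memory_fault(void)
    {
        if (start_memory_fault_handler())
            return;                           /* normal case: the fault is handled */

        if (start_double_fault_handler())     /* starting the first handler faulted */
            return;

        triple_fault();                       /* starting the double fault handler also faulted */
    }

    int main(void)
    {
        deliver_memory_fault();
        puts("double fault handler ran");
        return 0;
    }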

I have repeatedly emphasised "start" because once a fault handler has started successfully, no Double Fault will occur anymore. The first fault has started being handled, and the CPU can now handle any new faults that may come along. The CPU doesn't "remember" that it is inside a fault handler and cause a Double Fault if a new fault happens.

Per your example, if a fault handler attempts to use an undefined opcode, then the Undefined Opcode fault handler would simply be called on that instruction. It is not a Double Fault, it is merely a fault within a fault.

Of course, if the Memory fault handler starts, and while processing it causes a Memory fault, then the Memory fault handler will be restarted. That might cause the same Memory fault again, which will restart the Memory fault handler - each time using more and more stack until finally the stack itself overflows, which on the x86 invokes a different fault handler.
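
As a user-space analogy on Linux (signals rather than real CPU fault handlers, but the re-entry and stack-growth pattern is the same), a SIGSEGV handler installed with SA_NODEFER that faults again is simply re-entered, one stack frame deeper each time. A small sketch, with the recursion cut off before the stack actually overflows:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdlib.h>
    #include <unistd.h>

    static volatile sig_atomic_t depth;

    static void segv_handler(int sig)
    {
        (void)sig;
        if (++depth >= 5) {                       /* stop before the stack really overflows */
            static const char msg[] = "handler re-entered 5 times, giving up\n";
            write(STDERR_FILENO, msg, sizeof msg - 1);
            _exit(0);
        }
        *(volatile int *)0 = 0xdead;              /* fault again, inside the fault handler */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = segv_handler;
        sa.sa_flags = SA_NODEFER;                 /* allow SIGSEGV to interrupt its own handler */
        sigaction(SIGSEGV, &sa, NULL);

        *(volatile int *)0 = 0xdead;              /* the first fault */
        return 0;
    }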


The best answer I can give you is that yes, it is correct that on most modern CPUs only code running in kernel mode can trigger a double or triple fault. It is very rare (but not impossible) for a non-kernel-initiated instruction to trigger a hard fault, due to protected-mode operation, which abstracts away physical addressing so that it is no longer possible to branch to an invalid physical address.

As such, yes, any machine code assembled for a CPU without protected mode should be at least reassembled, if not modified, to work on a newer CPU.

From Wikipedia: https://en.wikipedia.org/wiki/Protected_mode#Virtual_8086_mode

Virtual 8086 mode

With the release of the 386, protected mode offers what the Intel manuals call virtual 8086 mode. Virtual 8086 mode is designed to allow code previously written for the 8086 to run unmodified and concurrently with other tasks, without compromising security or system stability.[29]

Virtual 8086 mode, however, is not completely backwards compatible with all programs. Programs that require segment manipulation, privileged instructions, direct hardware access, or use self-modifying code will generate an exception that must be served by the operating system.[30] In addition, applications running in virtual 8086 mode generate a trap with the use of instructions that involve input/output (I/O), which can negatively impact performance.[31]

Due to these limitations, some programs originally designed to run on the 8086 cannot be run in virtual 8086 mode. As a result, system software is forced to either compromise system security or backwards compatibility when dealing with legacy software. An example of such a compromise can be seen with the release of Windows NT, which dropped backwards compatibility for "ill-behaved" DOS applications.[32]

I hope that helps a little, and that if it is insufficient, that another can fill in any gaps in my understanding.

Frank Thomas