Because...
- many programs either don't use floating point at all, or don't use it during any given time slice; and
- saving and restoring the FPU registers and other FPU state takes time; therefore
... an OS kernel can simply disable the FPU. Presto: there is no state to save and restore, and therefore context switching is faster. (That's all the "mode" means here: it just indicates whether the FPU is turned on.)
If a program then tries to execute an FPU instruction, it traps into the kernel; the kernel turns the FPU on, restores whatever saved state may already exist, and then returns so the FPU instruction is executed again.
At context-switch time, the kernel knows whether it needs to go through the state-saving logic. (And afterwards it can disable the FPU again.)
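To make the mechanism concrete, here is a minimal C sketch of that lazy save/restore dance. Every name in it (struct task, fpu_enable, fpu_save, fpu_restore, fpu_disable, the handler and switch functions) is a hypothetical stand-in, not any real kernel's API:

```c
/* Hypothetical hardware hooks; a real kernel would provide these. */
void fpu_enable(void);
void fpu_disable(void);
void fpu_save(void *state);
void fpu_restore(const void *state);

struct task {
    int           used_fpu;        /* has this task touched the FPU since it last ran? */
    unsigned char fpu_state[512];  /* saved FPU/SIMD registers */
};

static struct task *current_task;

/* Trap handler: runs when a task executes an FP instruction while the
 * FPU is disabled (e.g. x86's #NM "device not available" exception). */
void fpu_trap_handler(void)
{
    fpu_enable();                          /* turn the FPU back on              */
    fpu_restore(current_task->fpu_state);  /* bring back this task's FP state   */
    current_task->used_fpu = 1;            /* remember to save it at switch time */
    /* on return, the faulting FP instruction is executed again */
}

/* Context switch: only pay for an FPU save if the outgoing task
 * actually used the FPU during this time slice. */
void switch_to(struct task *next)
{
    if (current_task->used_fpu) {
        fpu_save(current_task->fpu_state);
        current_task->used_fpu = 0;
    }
    fpu_disable();        /* the next task starts with the FPU off again */
    current_task = next;
    /* ... switch integer registers, stack, address space, etc. ... */
}
```

The point of the trick is that a task which never touches the FPU never pays for an FPU save or restore at all.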
By the way, I believe the book's explanation for why kernels (and not just Linux) avoid FPU operations ... is not entirely accurate.¹
The kernel is perfectly capable of trapping to itself, and it does this for many things (timers, page faults, device interrupts, etc.). The real reason is that the kernel simply doesn't need FP operations and also needs to run on architectures that have no FPU at all. So it avoids the complexity and runtime cost of managing its own FPU context by not performing operations for which software alternatives always exist.
It's interesting to note how often FPU state would have to be saved if the kernel did want to use FP: on every system call, every interrupt, and every switch between kernel threads. Even if there were a need for the occasional FP operation in the kernel,² it would most likely be faster to do it in software.
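As an illustration of the "software alternative" point: kernels typically express ratios with scaled integer (fixed-point) arithmetic rather than floating point. A small standalone sketch of the idea (hypothetical, not taken from any particular kernel):

```c
#include <stdio.h>
#include <stdint.h>

/* used/total as a percentage with two decimal places, e.g. 7342 == 73.42%.
 * No FPU needed: scale first with integer math, then divide. */
static uint64_t percent_x100(uint64_t used, uint64_t total)
{
    if (total == 0)
        return 0;
    return (used * 10000u) / total;
}

int main(void)
{
    uint64_t p = percent_x100(3671, 5000);
    printf("%llu.%02llu%%\n",
           (unsigned long long)(p / 100), (unsigned long long)(p % 100));
    return 0;   /* prints 73.42% */
}
```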
1. That is, wrong.
2. There are several cases I know of where kernel software contains a floating-point arithmetic implementation. Some architectures implement the ordinary FPU operations in hardware but leave certain complicated IEEE FP operations to software. (Think: denormal arithmetic.) When one of the odd IEEE corner cases comes up, they trap into software that contains a pedantically correct emulation of the operations that can trap.
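For the curious, a tiny user-space illustration of what a denormal (subnormal) value is; on the architectures described above, arithmetic on operands like this is the kind of case that gets punted to the software FP-assist path:

```c
#include <stdio.h>
#include <float.h>
#include <math.h>

int main(void)
{
    double tiny = DBL_MIN / 4.0;   /* smaller than DBL_MIN => subnormal */
    printf("subnormal? %d, value = %g\n",
           fpclassify(tiny) == FP_SUBNORMAL, tiny);
    return 0;
}
```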