Understanding the relationship between CONFIG_SMP, Spinlocks and CONFIG_PREEMPT in the latest (3.0.0 and later) Linux kernel

To give you full context: my question began with the observation that I am running SMP Linux (3.0.1-rt11) on an ARM Cortex-A8 based SoC, which is a single-core processor. I was wondering whether there would be any performance advantage in disabling SMP support, and if so, what impact that would have on my drivers and interrupt handlers.

While reading up on this I came across two related topics: spinlocks and kernel preemption. I did some searching and reading, but all I found were a few outdated and conflicting answers, so I thought I would try Stack Overflow.

The origin of my questions is this passage from Linux Device Drivers, 3rd edition, chapter 5:

Spinlocks are, by their nature, intended for use on multiprocessor systems, although a uniprocessor workstation running a preemptive kernel behaves like SMP, as far as concurrency is concerned. If a nonpreemptive uniprocessor system ever went into a spin on a lock, it would spin forever; no other thread would ever be able to obtain the CPU to release the lock. For this reason, spinlock operations on uniprocessor systems without preemption enabled are optimized to do nothing, with the exception of the ones that change the IRQ masking status. Because of preemption, even if you never expect your code to run on an SMP system, you still need to implement proper locking.

My questions:

a) Is the Linux kernel preemptible in kernel space by default? If so, does preemption apply only to processes, or can interrupt handlers also be preempted?

b) Does the Linux kernel (on ARM) support nested interrupts? If so, does each interrupt handler (top half) get its own stack, or do they all share the same 4k/8k kernel-mode stack?

c) If I disable SMP (CONFIG_SMP) and preemption (CONFIG_PREEMPT), do the spinlocks in my drivers and interrupt handlers still serve any purpose?

d) How does the kernel handle interrupts while a top half is executing, i.e. will they be disabled or masked?

After some googling, I found this:

For kernels compiled without CONFIG_SMP and without CONFIG_PREEMPT, spinlocks do not exist at all. This is an excellent design decision: when nobody else can run at the same time, there is no reason to have a lock.

If the kernel is compiled without CONFIG_SMP, but with CONFIG_PREEMPT set, then spinlocks simply disable preemption, which is sufficient to prevent any races. For most purposes, we can think of preemption as equivalent to SMP, and not worry about it separately.

But that source carries no kernel version or date. Can anyone confirm that this still holds for recent Linux kernels?

1 answer

a) Whether Linux is preemptive depends on whether it is configured with CONFIG_PREEMPT . By default it is not. If you run make config , you will be asked to choose.
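For reference, the preemption model appears in the generated .config as a mutually exclusive choice. The symbols below are the standard Kconfig options (an -rt kernel such as the questioner's 3.0.1-rt11 adds further PREEMPT_RT options on top of these); a fragment selecting full kernel preemption might look like:

```
# CONFIG_PREEMPT_NONE is not set
# CONFIG_PREEMPT_VOLUNTARY is not set
CONFIG_PREEMPT=y
```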

b) Interrupts nest on Linux: while one interrupt is being serviced, other interrupts may come in. This is true on ARM and on many other architectures. It all happens on a single stack. And of course, user-space stacks are not used for interrupts!

c) If you disable SMP and preemption, the spinlocks in your code reduce to no-ops if they are the regular plain locks, and the IRQ variants ( spin_lock_irqsave / spin_unlock_irqrestore ) reduce to an interrupt disable/enable. So the latter are still necessary: they prevent races between the task context running your code and the interrupts running your code.

d) "Top half" traditionally refers to the interrupt service routine. The top half of a driver is the part driven by interrupts; the bottom half is the deferred work it schedules (to read or write data, or whatever else). The details of how interrupts are handled are architecture specific.

Recently, I worked very closely with Linux interrupt handling on a specific MIPS architecture. On that particular board there were 128 interrupt lines, masked by two 64-bit words. The kernel implemented a priority scheme on top of this, so that before executing the handler for a given interrupt, all lower-priority lines were masked by updating those two 64-bit registers. I implemented a modification so that interrupt priorities could be assigned arbitrarily, independent of the hardware line position, and changed dynamically by writing values to a /proc entry. On top of that I put in a hack whereby part of the numeric IRQ priority range overlapped with the real-time task priority range. Thus RT tasks (i.e. user-space threads) assigned to a certain band of priority levels could implicitly suppress a certain band of interrupts while running. This was very useful for preventing interrupts from interfering with critical activities (for example, the interrupt service routine in the IDE driver code used for a CompactFlash card, which busy-loops because of a badly designed hardware interface, effectively becoming the highest-priority activity in the system). In any case, the IRQ masking behavior is not set in stone if you control the kernel you ship to your customers.

The quoted sentences in the question are true only of regular spinlocks (the spin_lock function/macro), not the IRQ spinlocks ( spin_lock_irqsave ). On a preemptive kernel on a uniprocessor system, spin_lock just has to disable preemption, which is enough to keep every other task out of the kernel until spin_unlock . But spin_lock_irqsave also has to disable interrupts.

