How does a spinlock work in Linux on ARM64?

In the Linux kernel, arch_spin_lock() is implemented as follows:

static inline void arch_spin_lock(arch_spinlock_t *lock)
{
    unsigned int tmp;
    arch_spinlock_t lockval, newval;

    asm volatile(
    /* Atomically increment the next ticket. */
"   prfm    pstl1strm, %3\n"
"1: ldaxr   %w0, %3\n"
"   add %w1, %w0, %w5\n"
"   stxr    %w2, %w1, %3\n"
"   cbnz    %w2, 1b\n"
    /* Did we get the lock? */
"   eor %w1, %w0, %w0, ror #16\n"
"   cbz %w1, 3f\n"
    /*
     * No: spin on the owner. Send a local event to avoid missing an
     * unlock before the exclusive load.
     */
"   sevl\n"
"2: wfe\n"
"   ldaxrh  %w2, %4\n"
"   eor %w1, %w2, %w0, lsr #16\n"
"   cbnz    %w1, 2b\n"
    /* We got the lock. Critical section starts here. */
"3:"
    : "=&r" (lockval), "=&r" (newval), "=&r" (tmp), "+Q" (*lock)
    : "Q" (lock->owner), "I" (1 << TICKET_SHIFT)
    : "memory");
}

Note that the wfe instruction puts the processor into a low-power state until the event register is set. The ARMv8 manual says that an event is generated when the global monitor for a PE is cleared (section D1.17.1), so that should happen as part of the unlock. Let's look at arch_spin_unlock():

static inline void arch_spin_unlock(arch_spinlock_t *lock)
{
    asm volatile(
"   stlrh   %w1, %0\n"
    : "=Q" (lock->owner)
    : "r" (lock->owner + 1)
    : "memory");
}

No sev!? So what wakes the wfe here?

PS: I was also looking for ARM64 build tutorials, but nothing I found worked. Any suggestions would be great. Thanks!

Answer:

"   ldaxrh  %w2, %4\n"

The key is this load-acquire exclusive in the spin loop: before each wfe, it marks lock->owner for exclusive monitoring by this PE.

"   stlrh   %w1, %0\n"

The unlocking store then writes to that monitored location and clears the global monitor, which, per the very section you quoted (D1.17.1), generates the wakeup event. So no explicit SEV is needed.


Answer:

To confirm, see page B2-5 of the ARMv8 manual: clearing the global exclusive monitor is listed among the events that wake a PE from wfe.
