One use case is testing concurrent programs: you try to provoke interleavings that expose flaws in your synchronization patterns. For example, in Java:
A useful trick for increasing the number of interleavings, and therefore more effectively exploring the state space of your programs, is to use Thread.yield to encourage more context switches during operations that access shared state. (The effectiveness of this technique is platform-specific, since the JVM is free to treat Thread.yield as a no-op [JLS 17.9]; using a short but nonzero sleep would be slower but more reliable.) - JCIP
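A minimal sketch of that trick (the class and method names here are my own illustration, not from JCIP): a deliberately unsynchronized counter with a Thread.yield inserted between the read and the write, which widens the race window so lost updates show up far more often under test.

```java
// Illustrative only: YieldRaceDemo and unsafeIncrement are made-up names.
public class YieldRaceDemo {
    static int count = 0; // shared state, intentionally unsynchronized

    static void unsafeIncrement() {
        int tmp = count;  // read shared state
        Thread.yield();   // hint to the scheduler: encourage a context switch
                          // right inside the race window (may be a no-op)
        count = tmp + 1;  // write back; concurrent updates can be lost
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) unsafeIncrement();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // A correctly synchronized counter would read 2000 here; with the
        // yield in place, a lower value (a detected bug) is far more likely.
        System.out.println("count = " + count + " (expected 2000 if atomic)");
    }
}
```

Whether the yield actually causes a context switch is up to the platform, so a run that prints 2000 does not prove the code is correct; the trick only raises the odds of catching the bug.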
Also interesting from a Java perspective is that their semantics are undefined:
The semantics of Thread.yield (and Thread.sleep(0)) are undefined [JLS 17.9]; the JVM is free to implement them as no-ops or treat them as scheduling hints. In particular, they are not required to have the semantics of sleep(0) on Unix systems (put the current thread at the end of the run queue for that priority, yielding to other threads), though some JVMs implement yield in this way. - JCIP
This, of course, makes them unreliable. That is specific to Java, but I believe the following holds in general:
Both are low-level mechanisms that can be used to influence scheduling. If they are used to implement required functionality, then that functionality depends on the whims of the OS scheduler, which seems like a pretty bad idea; such behavior should instead be driven by higher-level synchronization constructs.
For testing purposes, or for forcing a program into a particular state, they can be a convenient tool.
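To illustrate the contrast (my own sketch, not from JCIP): instead of hoping a yield or sleep lines two threads up, a higher-level construct such as java.util.concurrent.CountDownLatch makes the ordering a guarantee rather than a probability.

```java
import java.util.concurrent.CountDownLatch;

// Illustrative only: LatchDemo is a made-up name. Thread B provably runs
// its step only after thread A has reached the required state, regardless
// of what the OS scheduler does.
public class LatchDemo {
    static final StringBuilder log = new StringBuilder();

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch ready = new CountDownLatch(1);

        Thread a = new Thread(() -> {
            log.append("A:init;"); // reach the required state...
            ready.countDown();     // ...then signal it explicitly
        });
        Thread b = new Thread(() -> {
            try {
                ready.await();     // blocks until A has signalled
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            log.append("B:run;");
        });
        b.start();                 // start order does not matter
        a.start();
        a.join();
        b.join();
        System.out.println(log);   // deterministically "A:init;B:run;"
    }
}
```

The countDown/await pair also establishes a happens-before edge, so B's read of anything A wrote is safe; a yield-based version would have neither the ordering guarantee nor the visibility guarantee.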