You can't estimate it; you have to measure it. And it will vary depending on the processor in the device.

There are two fairly simple ways to measure a context switch. One involves code, the other doesn't.

First, the code route (pseudocode):
    DWORD tick;

    main()
    {
        HANDLE hThread = CreateThread(..., ThreadProc, CREATE_SUSPENDED, ...);

        tick = QueryPerformanceCounter();
        CeSetThreadPriority(hThread, 10);    // real high
        ResumeThread(hThread);
        Sleep(10);
    }

    ThreadProc()
    {
        tick = QueryPerformanceCounter() - tick;
        RETAILMSG(TRUE, (_T("ET: %i\r\n"), tick));
    }
Obviously, doing this in a loop and averaging would be better. Keep in mind that this isn't measuring just the context switch: you're also measuring the call to ResumeThread, and there's no guarantee the scheduler will immediately switch to your other thread (although priority 10 should help increase the likelihood that it will).
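For illustration, here is a minimal compilable sketch of that idea for Windows CE, looping and averaging as suggested. The CreateThread arguments, the event used for synchronization, the ITERATIONS count, and the conversion to microseconds are my own additions rather than part of the pseudocode above; note also that the real QueryPerformanceCounter fills in a LARGE_INTEGER instead of returning a value.

    #include <windows.h>
    #include <tchar.h>

    #define ITERATIONS 100    /* assumption: number of samples to average */

    static LARGE_INTEGER g_start;    /* taken just before the worker is resumed */
    static LONGLONG      g_elapsed;  /* counter delta measured by the worker    */
    static HANDLE        g_hDone;    /* signaled once the worker has its stamp  */

    static DWORD WINAPI ThreadProc(LPVOID lpParam)
    {
        LARGE_INTEGER now;
        QueryPerformanceCounter(&now);   /* timestamp as soon as we get the CPU */
        g_elapsed = now.QuadPart - g_start.QuadPart;
        SetEvent(g_hDone);
        return 0;
    }

    int main(void)
    {
        LARGE_INTEGER freq;
        LONGLONG total = 0;
        int i;

        QueryPerformanceFrequency(&freq);
        g_hDone = CreateEvent(NULL, FALSE, FALSE, NULL);

        for (i = 0; i < ITERATIONS; i++)
        {
            HANDLE hThread = CreateThread(NULL, 0, ThreadProc, NULL,
                                          CREATE_SUSPENDED, NULL);

            CeSetThreadPriority(hThread, 10);   /* "real high" on CE */
            QueryPerformanceCounter(&g_start);
            ResumeThread(hThread);              /* ResumeThread cost is included */

            WaitForSingleObject(g_hDone, INFINITE);
            WaitForSingleObject(hThread, INFINITE);
            CloseHandle(hThread);

            total += g_elapsed;
        }

        /* remember: this average still includes the ResumeThread call itself */
        RETAILMSG(TRUE, (_T("Avg: %i us\r\n"),
                  (int)((total / ITERATIONS) * 1000000 / freq.QuadPart)));

        CloseHandle(g_hDone);
        return 0;
    }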
You can get a more accurate measurement using CeLog by hooking into the scheduler events, but that is far from simple to do and not very well documented. If you really want to go that route, Sue Loh has several blog posts about it that a search engine can find.
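If you do go the CeLog route, one small piece that is at least straightforward is limiting which zones get logged, so the capture isn't drowned in unrelated events before you look at the thread-switch entries. This is only a sketch, assuming the CeLogSetZones API and the CELZONE_RESCHED / CELZONE_THREAD flags from celog.h; check the header for your CE version.

    #include <windows.h>
    #include <celog.h>   /* zone definitions, assumed present in your Platform Builder SDK */

    /* Restrict CeLog to scheduler-related zones before running the timing test,
       so the captured log is dominated by thread-switch events. */
    void EnableSchedulerLogging(void)
    {
        CeLogSetZones(0,                                  /* no user-defined zones   */
                      CELZONE_RESCHED | CELZONE_THREAD,   /* scheduler/thread events */
                      0xFFFFFFFF);                        /* all processes           */
    }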
The non-coding route is to use the Remote Kernel Tracker, which you get by installing eVC 4.0 or the eval version of Platform Builder. It gives a graphical representation of everything the kernel is doing, and you can directly measure a thread context switch with the cursor capabilities it provides. Again, I'm sure Sue Loh has a blog entry on using Kernel Tracker as well.
All that said, you'll find that context switches between threads in the same process are actually very fast. It's process switches that are expensive, because they require swapping the active process in RAM and then doing the migration.
ctacke Nov 20 '08 at 14:07