You could take a look at this.
Consider any thread. At any given point in time it is doing something, and it is doing it for some reason, and slowness can be defined as the time it spends for poor reasons: time it does not need to be spending.
Take a snapshot of the thread at a particular point in time. Maybe it is in a cache miss, in an instruction, in a statement, in a function, called from a call instruction in another function, called from another, and so on, up to call _main. Every one of those steps has a reason, which can be discovered by examining the code.
- If any one of those steps is there for a poor reason and could be avoided, that point in time does not need to be spent.
Maybe at that point in time the disk is coming around to a certain sector, so that some data streaming can be initiated, so that a buffer can be filled, so that a read statement can be satisfied, in a function, and that function is called from a site in another function, and that one from another, and so on, up to call _main, or whatever is at the top of the thread.
So the way to find bottlenecks is to find when the code is spending time for poor reasons, and the best way to find that is to take snapshots of its state. The EIP, or any other tiny piece of the state, is not going to do it, because it will not tell you why.
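Here is a minimal sketch of that idea in Python (the sampler and the slow_work workload are hypothetical, purely for illustration): a background thread periodically snapshots another thread's entire call stack on wall-clock time, so every sample carries the whole chain of reasons, not just the innermost instruction.

```python
import sys
import threading
import time
import traceback

def sample_stacks(target_ident, samples, stop, interval=0.01):
    """Snapshot the target thread's full call stack on wall-clock intervals.

    Each sample keeps every caller up to the entry point, so it records
    not just where the thread is but why it is there. Because sampling
    is driven by wall-clock time, blocked time (I/O, locks, sleeps) is
    captured just like CPU time.
    """
    while not stop.is_set():
        frame = sys._current_frames().get(target_ident)
        if frame is not None:
            samples.append(traceback.extract_stack(frame))
        time.sleep(interval)

def slow_work():
    """Hypothetical workload: a needless inner loop plus some blocking."""
    total = 0
    for i in range(300):
        for j in range(20000):   # the "poor reason" we hope to catch
            total += i * j
        time.sleep(0.001)        # blocked time, still visible to the sampler
    return total

samples, stop = [], threading.Event()
sampler = threading.Thread(
    target=sample_stacks,
    args=(threading.main_thread().ident, samples, stop),
)
sampler.start()
slow_work()
stop.set()
sampler.join()
print(f"collected {len(samples)} stack snapshots")
```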
Very few profilers "get it". The ones that do are wall-clock stack samplers that report, per line of code (not per function), the percent of time that line was active on the stack (not the amount of time, and especially not "self" or "exclusive" time). One that does is Zoom, and there are others.
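Continuing the hypothetical sampler above, a per-line report of that kind can be computed as the fraction of samples in which a given line appears anywhere on the stack:

```python
from collections import Counter

def report_by_line(samples, top=10):
    """Percent of samples in which each line appears anywhere on the stack.

    This is inclusive "active" time per line of code, not self/exclusive
    time, and not grouped by function.
    """
    hits = Counter()
    for stack in samples:
        # Count each line at most once per sample, even under recursion.
        hits.update({(f.filename, f.lineno, f.line) for f in stack})
    for (filename, lineno, text), n in hits.most_common(top):
        pct = 100.0 * n / max(len(samples), 1)
        print(f"{pct:5.1f}%  {filename}:{lineno}  {text}")

report_by_line(samples)
```

A line that shows up in, say, 60% of the samples is responsible for roughly 60% of the wall-clock time, whether the thread was computing there or blocked there.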
Looking at where the EIP hangs out is like trying to tell time on a clock with only a second hand. Measuring functions is like trying to tell time on a clock with some of the digits missing. Profiling only during CPU time, and not during blocked time, is like trying to tell time on a clock that randomly stops running for long stretches. Being concerned about measurement precision is like trying to time your lunch hour to the second.
This is not a mysterious subject.
Mike Dunlavey