I want to find the maximum count. I need to execute a loop so that it runs for x milliseconds.
First off, just don't do it. If you need to wait a certain number of milliseconds, don't busy-wait in a loop. Instead, start a timer and return. When the timer ticks, have it call a method that resumes where you left off. The Task.Delay method may be a good fit; it takes care of the timer details for you.
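A minimal sketch of that timer-based approach, using an async entry point; the message strings and the half-second delay are just for illustration:

    using System;
    using System.Threading.Tasks;

    class Program
    {
        static async Task Main()
        {
            Console.WriteLine("Starting...");
            // Yield back to the caller; no thread spins while we wait.
            await Task.Delay(TimeSpan.FromMilliseconds(500));
            // Execution resumes here when the timer fires.
            Console.WriteLine("Half a second has passed; resume work here.");
        }
    }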
If your question is actually about how to time how long some code takes, then you need much more than just a good timer. There is a lot of art and science to getting accurate timings.
First, you should always use Stopwatch and never use DateTime.Now for these timings. Stopwatch is designed to be a high-precision timer that tells you how much time has elapsed. DateTime.Now is a low-precision timer that tells you whether it is time to watch Doctor Who yet. You would not use a wall clock to time an Olympic race; you would use the most accurate stopwatch you could get your hands on. So use the one that is provided for you.
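A sketch of the Stopwatch pattern; MethodUnderTest is a placeholder name standing in for whatever code you actually want to measure:

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        static void Main()
        {
            var stopwatch = Stopwatch.StartNew();
            MethodUnderTest();                  // the code being measured
            stopwatch.Stop();
            // Stopwatch is backed by the high-resolution performance counter.
            Console.WriteLine($"Elapsed: {stopwatch.Elapsed.TotalMilliseconds} ms");
        }

        static void MethodUnderTest()
        {
            // placeholder for the code whose cost you want to know
            for (int i = 0; i < 1000000; ++i) { }
        }
    }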
Second, you need to remember that C# code is compiled just in time. The first time you go through a loop can therefore be hundreds or thousands of times more expensive than every subsequent time, because of the cost of the jitter analyzing the code that the loop calls. If you intend to measure the "warm" cost of the loop, you need to run the loop once before you start timing it. If you intend to measure the average cost including the jit time, then you need to decide how many trials make up a reasonable number, so that the average works out correctly.
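One common way to separate the one-time jit cost from the warm cost is to run the code once before starting the timer; a sketch, where RunTheLoop is an assumed name for the code under test:

    using System;
    using System.Diagnostics;

    class WarmupDemo
    {
        static void Main()
        {
            RunTheLoop();                  // warm-up run: pays the one-time jit cost

            var sw = Stopwatch.StartNew();
            RunTheLoop();                  // now time only the "warm" execution
            sw.Stop();
            Console.WriteLine($"Warm run: {sw.Elapsed.TotalMilliseconds} ms");
        }

        static void RunTheLoop()
        {
            for (int i = 0; i < 1000000; ++i) { }
        }
    }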
Third, you need to make sure you are not wearing any lead weights while you are running. Never take performance measurements while debugging. It is astonishing how many people do this. If you are in the debugger, then the runtime can chat back and forth with the debugger to make sure you get the debugging experience you want, and that chatter takes time. The jitter generates worse code than it normally would, so that your debugging experience is more consistent. The garbage collector collects less aggressively. And so on. Always take performance measurements outside the debugger, and with optimizations turned on.
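If you want a guard against accidentally measuring in the wrong configuration, one option is to check Debugger.IsAttached and whether the build is a debug build; this is only a sketch of the idea, and the warning messages are made up here:

    using System;
    using System.Diagnostics;

    static class MeasurementGuard
    {
        public static void WarnIfNotReleaseRun()
        {
            if (Debugger.IsAttached)
                Console.WriteLine("Warning: a debugger is attached; timings will not be representative.");
    #if DEBUG
            Console.WriteLine("Warning: this is a debug build; measure a release build with optimizations on.");
    #endif
        }
    }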
Fourth, remember that virtual memory systems impose costs similar to those of jitting. If you have already run a managed program, or run one recently, then the pages of the CLR that you need are probably "hot" - already in RAM - where they are fast. If not, then the pages might be cold, out on disk, and will need to be page-faulted in. That can change timings enormously.
Fifth, remember that the jitter can make optimizations that you do not expect. If you try to time:
    // Let's time addition!
    for (int i = 0; i < 1000000; ++i) { int j = i + 1; }
the jitter is entirely within its rights to remove the loop entirely. It can figure out that the loop computes no value that is used anywhere else in the program, and remove it completely, giving a measured time of zero. Will it do so? Maybe. Maybe not. That is up to the jitter. You should measure the performance of realistic code, where the values computed are actually used somewhere; the jitter will then know that it cannot optimize them away.
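One way to keep the jitter from discarding your loop is to make the loop feed a result that the program actually consumes, for example by accumulating into a value that gets printed; a sketch:

    using System;
    using System.Diagnostics;

    class DeadCodeDemo
    {
        static void Main()
        {
            var sw = Stopwatch.StartNew();
            long sum = 0;
            for (int i = 0; i < 1000000; ++i)
            {
                sum += i + 1;          // the addition result flows into sum...
            }
            sw.Stop();
            // ...and sum is used here, so the loop cannot be removed as dead code.
            Console.WriteLine($"sum = {sum}, elapsed = {sw.Elapsed.TotalMilliseconds} ms");
        }
    }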
Sixth, tests that create a lot of garbage can have their timings skewed by the garbage collector. Suppose you have two tests, one that makes a lot of garbage and one that makes only a little. The cost of collecting the garbage produced by the first test can be "charged" to the time taken to run the second test if, by luck, the first test manages to run without a collection but the second test triggers one. If your tests produce a lot of garbage, then consider (1) is my test realistic to begin with? It makes no sense to measure the performance of an unrealistic program, because you cannot draw good conclusions about how your real program will behave. And (2) should the cost of garbage collection be charged to the test that produced the garbage? If so, then make sure you force a full collection before the timing of your test completes.
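Forcing a full collection before each timed test, and again before stopping the timer so the test pays for its own garbage, might look like this; a sketch only, with the helper name chosen here for illustration:

    using System;
    using System.Diagnostics;

    static class GcFairnessDemo
    {
        static TimeSpan TimeWithFullCollection(Action test)
        {
            // Clean the slate so this test does not pay for a previous test's garbage.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            var sw = Stopwatch.StartNew();
            test();
            // Charge this test for the garbage it produced before stopping the clock.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            sw.Stop();

            return sw.Elapsed;
        }
    }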
Seventh, you are running your code in a multithreaded, multiprocessor environment where threads can be switched out at will, and where the thread quantum (the amount of time the operating system will give another thread before yours gets a chance to run again) is about 16 milliseconds. 16 milliseconds is about fifty million processor cycles. Getting accurate timings of sub-millisecond operations can be quite difficult if a thread switch happens during one of the several million processor cycles you are trying to measure. Keep that in mind.
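One common mitigation, not from the original answer but a standard practice, is to run the operation many times and report a robust statistic such as the median, so that the occasional trial inflated by a 16-millisecond context switch shows up as an outlier rather than skewing the result; a sketch, with the helper name and trial count chosen arbitrarily:

    using System;
    using System.Diagnostics;
    using System.Linq;

    static class ManyTrialsDemo
    {
        static double MedianMilliseconds(Action operation, int trials = 100)
        {
            var samples = new double[trials];
            for (int t = 0; t < trials; ++t)
            {
                var sw = Stopwatch.StartNew();
                operation();
                sw.Stop();
                samples[t] = sw.Elapsed.TotalMilliseconds;
            }
            // The median is robust against a few trials inflated by a thread switch.
            var sorted = samples.OrderBy(x => x).ToArray();
            return sorted[trials / 2];
        }
    }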