Designing benchmark tests for library performance

I am about to run a series of comparative benchmarks of various off-the-shelf products.

What do I need to do to demonstrate the validity of the tests? How do I design my benchmarks so they are credible?

I am also interested in any suggestions on the actual design of the tests: ways to collect data without affecting the thing being tested (the Heisenberg uncertainty principle), control methods, etc.

+6
performance c# testing
3 answers

It's a little difficult to answer without knowing which off-the-shelf products you're trying to evaluate. Are you looking at UI responsiveness, throughput (e.g. email messages or transactions per second), startup time, etc.? Each of these has different criteria for which metrics to monitor, and different tools for testing or evaluation. But to answer some of your general questions:

  • Credibility is important. Make sure that whatever you measure does not vary much between runs. Perform multiple runs of the same scenario, discard the outliers (i.e. your lowest and highest), and report the avg/max/min/median values. If you are doing some kind of throughput test, consider making it long-running so you collect a good sample set. For example, if you are looking at something like Microsoft Exchange and therefore using its performance counters, make sure you take frequent samples (once per second or every few seconds) and run the test for 20 minutes or so. Again, discard the first few minutes and the last few minutes to eliminate startup/shutdown noise. A minimal harness illustrating this appears after this list.

  • Heisenberg is tricky. On most modern systems, depending on what you are measuring, you can minimize this effect by being smart about what and how you measure. Sometimes (as in the Exchange example) the effect is close to zero. Use the least invasive tools possible. For example, if you are measuring startup time, consider using xperf and the event providers built into the kernel. If you use perfmon, do not flood the system with extraneous counters you do not need. If you are running an extremely long test, back off on the sampling interval to keep the measurement overhead down.
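To make the first point concrete, here is a minimal sketch of a trimmed multi-run harness in C#. It is an illustration under stated assumptions, not a full benchmarking framework; the scenario delegate is a stand-in for whatever library call you are comparing, and it assumes runs > 2 so there is something left after trimming.

    using System;
    using System.Diagnostics;
    using System.Linq;

    static class TrimmedBenchmark
    {
        // Runs the same scenario n times, drops the fastest and slowest
        // samples, and reports summary statistics over the rest.
        public static void Measure(Action runScenario, int runs = 10)
        {
            var samples = new double[runs];
            for (int i = 0; i < runs; i++)
            {
                var sw = Stopwatch.StartNew();
                runScenario();          // the operation under test
                sw.Stop();
                samples[i] = sw.Elapsed.TotalMilliseconds;
            }

            // Discard the lowest and highest samples to tame outliers.
            var trimmed = samples.OrderBy(t => t)
                                 .Skip(1)
                                 .Take(runs - 2)
                                 .ToArray();

            Console.WriteLine(
                "min={0:F2} ms max={1:F2} ms avg={2:F2} ms median={3:F2} ms",
                trimmed.Min(), trimmed.Max(), trimmed.Average(),
                trimmed[trimmed.Length / 2]);
        }
    }

Usage would be something like TrimmedBenchmark.Measure(() => product.DoWork(), 20), where DoWork is a hypothetical call into the product being evaluated.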

Also try to eliminate any sources of environmental variability or other possible sources of noise. If you are doing anything network-related, consider isolating the network. Try disabling any services or applications you do not need. Limit any other disk I/O, memory-intensive operations, etc. If disk I/O could introduce noise into something that is CPU-bound, consider using a solid state drive. A sketch of how to quiet things down from within a .NET test process follows.
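This is a minimal sketch of preparing a quieter environment from inside a C#/.NET benchmark process. It assumes Windows, sufficient privileges to raise the process priority, and a single-threaded scenario (otherwise pinning to one core is counterproductive).

    using System;
    using System.Diagnostics;

    static class QuietEnvironment
    {
        public static void Prepare()
        {
            var proc = Process.GetCurrentProcess();

            // Favor the benchmark process over background services.
            proc.PriorityClass = ProcessPriorityClass.High;

            // Pin to core 0 so the scheduler does not migrate the
            // benchmark thread between CPUs mid-run.
            proc.ProcessorAffinity = (IntPtr)1;

            // Settle the managed heap so a stray garbage collection
            // does not land inside the measured interval.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();
        }
    }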

When designing tests, consider repeatability. If you are doing any kind of micro-benchmarking (e.g. perf unit tests), have your infrastructure run the same operation n times in exactly the same way. If you are driving a UI, try not to physically move the mouse; instead use the underlying accessibility layer (MSAA, UIAutomation, etc.) to drive the controls directly.
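As a sketch of that last point, here is how a button could be invoked through UI Automation rather than synthesized mouse input. The window and button names are made up for illustration, null checks are omitted for brevity, and the code assumes references to UIAutomationClient and UIAutomationTypes.

    using System.Windows.Automation;

    static class UiDriver
    {
        public static void ClickButton(string windowName, string buttonName)
        {
            // Find the top-level window by its name.
            AutomationElement window = AutomationElement.RootElement.FindFirst(
                TreeScope.Children,
                new PropertyCondition(AutomationElement.NameProperty, windowName));

            // Find the button anywhere below it.
            AutomationElement button = window.FindFirst(
                TreeScope.Descendants,
                new PropertyCondition(AutomationElement.NameProperty, buttonName));

            // InvokePattern triggers the control directly, without
            // injecting mouse movement that could perturb timing.
            var invoke = (InvokePattern)button.GetCurrentPattern(InvokePattern.Pattern);
            invoke.Invoke();
        }
    }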

Again, this is just general advice. If you can share more specifics, I can try to give more relevant recommendations.

Have fun!

+3

Your question is very interesting, but a little vague: without knowing what you are going to test, it is not easy to give concrete tips.

You can measure performance from different points of view, so depending on the use or purpose of the library you should try one approach or another. I will try to list some of the things you may need to consider measuring:

  • Multithreading: if the library uses multithreading, or your software will use the library in a multithreaded context, you may have to test it with several different processor and multiprocessor configurations to see how it behaves.
  • Startup time: its importance depends on how intensively you use the library and on the nature of the product built with it (client, server, ...).
  • Response time: for this, do not take the first execution; make the same call many times after the first one and average the results. System.Diagnostics.Stopwatch can be very useful for this.
  • Memory consumption: analyze the growth, and beware of anything exponential ;). Go a step further and measure the number of objects created and disposed (a measurement sketch follows this list).
  • Responsiveness: you should not only measure raw performance; how fast the user perceives the product to be is very important too.
  • Network: if the library uses network resources, you may have to test it under different bandwidth and latency configurations; there is software to simulate these conditions.
  • Data: try to create many different test data sets, covering, for example, one large block of raw data, a large set made of many smaller pieces, a long iteration over small chunks of data, ...
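For the memory point, this is a minimal sketch of measuring managed-heap growth around an operation. It assumes the allocations of interest are on the managed heap (unmanaged memory is not counted), and the operation delegate is a hypothetical stand-in for a library call.

    using System;

    static class MemoryProbe
    {
        // Returns the bytes retained on the managed heap by the operation.
        public static long MeasureGrowth(Action operation)
        {
            // Force full collections so the delta reflects live objects,
            // not garbage that simply has not been collected yet.
            long before = GC.GetTotalMemory(forceFullCollection: true);
            operation();
            long after = GC.GetTotalMemory(forceFullCollection: true);
            return after - before;
        }
    }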

Tools:

  • System.Diagnostics.Stopwatch: essential for timing method calls.
  • Performance counters: when available, they are very useful for knowing what is going on inside, letting you monitor the software without affecting its performance (a sampling sketch follows this list).
  • Profilers: there are good memory and performance profilers on the market, but, as you said, they always affect the measurements. They are good for finding bottlenecks in your software, but I don't think you can use them for benchmarking.
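Here is a minimal sketch of sampling a standard Windows performance counter from C#, assuming the "Processor" counter category exists on the machine (it ships with Windows). Note that the first NextValue() reading is always 0, so the counter is primed before the loop.

    using System;
    using System.Diagnostics;
    using System.Threading;

    static class CounterSampler
    {
        public static void SampleCpu(int samples, int intervalMs = 1000)
        {
            using (var cpu = new PerformanceCounter(
                "Processor", "% Processor Time", "_Total"))
            {
                cpu.NextValue(); // prime: the first reading is always 0
                for (int i = 0; i < samples; i++)
                {
                    Thread.Sleep(intervalMs);
                    Console.WriteLine("{0:HH:mm:ss}  CPU {1:F1}%",
                        DateTime.Now, cpu.NextValue());
                }
            }
        }
    }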
+1

Why do you care about performance? In either case, the time taken to write a message, wherever you store your log, will dwarf everything else.

If you really are logging that much, you will probably need to index your log files so that you can find the log entry you want; at that point you are no longer doing standard logging.

0
