How do you check code performance between software versions?

I am developing software in C#/.NET, but I think this question applies to other programming languages as well. How do you check the performance of your software between release versions? Let me explain in more detail.

Before releasing the software, I would like to compare its performance, for the set of features that existed in the previous version, against that previous version. Suppose we are talking about a library project (without a GUI) that is released as one or more DLLs. How can this be achieved? What are some best practices? I cannot simply swap the current DLL with the previous version's DLL and run the same tests.

One way I can think of is to add the same performance tests to both the main branch (used for the current version) and the earlier release branch, and then compare the results. This is somewhat painful, but it is doable.

Another way I can think of is to start from the current release branch, stub out the new code and features that were added after the latest version, and then run the tests. I do not think this would give correct results, not to mention that this approach is even more painful than the previous one.

Any other ideas are appreciated. C#/.NET-specific answers preferred.

Edit 1: This and this are a couple of related questions.

+8
performance c#
4 answers

We have a set of performance tests. They are just NUnit tests. Each test sets up some objects, starts a timer (the Stopwatch class works well), performs an operation (for example, loading the data for a particular screen), and then writes the elapsed time to a CSV file. (NUnit logs how long each test takes, but we want to exclude the setup logic, which in some cases differs from test to test, so our own timers and logging make more sense.)
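A minimal sketch of what such a test might look like (the CustomerRepository type, its LoadCustomerScreenData method, and the CSV path are illustrative placeholders, not part of the original answer):

```csharp
using System;
using System.Diagnostics;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class PerformanceTests
{
    [Test]
    public void LoadCustomerScreen_Timing()
    {
        // Setup that should NOT be counted in the measurement.
        var repository = new CustomerRepository();

        var stopwatch = Stopwatch.StartNew();
        repository.LoadCustomerScreenData();   // the operation under test
        stopwatch.Stop();

        // Append the result to a CSV file so runs can be compared later.
        File.AppendAllText(
            "perf-results.csv",
            $"LoadCustomerScreen,{DateTime.UtcNow:O},{stopwatch.ElapsedMilliseconds}{Environment.NewLine}");
    }
}
```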

We run these tests periodically, always on the same hardware and in the same network environment. We import the results into a database. From there it is easy to produce graphs showing trends, or to flag large percentage changes.

+3

If you really want to compare performance between releases, you will need a test that exercises the same functionality in the different versions. Unit tests are often well suited for this.

Another, more proactive thing you can do is instrument your code to log based on predefined performance thresholds. For example, when you run the code in the old version, you get a baseline metric. Then add timing code to the application so that if the same function ever takes a certain amount longer, it logs a message (or raises an event that the caller can subscribe to and log). Of course, you do not want to overdo it, because the timing code itself can hurt performance.

We do this for the SQL calls in our applications. We have a threshold for the maximum time any SQL call should take, and if a call exceeds that threshold, we log it as a warning. We also track the number of SQL calls in a given HTTP request the same way. Your goal should be to lower the thresholds over time.

You can wrap these checks in #if sections so they are not compiled into production builds, but it can also be very useful to have them in production.
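A rough sketch of this kind of threshold check under a conditional-compilation symbol (the SqlCallThresholdMs value, the PERF_TRACKING symbol, and the Execute wrapper are assumptions for illustration; the real code would wrap whatever data-access layer you use):

```csharp
using System;
using System.Diagnostics;

public static class TimedSql
{
    // Hypothetical threshold; tighten it over time as performance improves.
    private const long SqlCallThresholdMs = 200;

    public static T Execute<T>(string description, Func<T> query)
    {
#if PERF_TRACKING
        var stopwatch = Stopwatch.StartNew();
        try
        {
            return query();
        }
        finally
        {
            stopwatch.Stop();
            if (stopwatch.ElapsedMilliseconds > SqlCallThresholdMs)
            {
                // Replace with your logging framework of choice.
                Console.Error.WriteLine(
                    $"WARN: SQL call '{description}' took {stopwatch.ElapsedMilliseconds} ms " +
                    $"(threshold {SqlCallThresholdMs} ms).");
            }
        }
#else
        return query();
#endif
    }
}
```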

+2

You could commit the test results to a text file under source control for each new version. That gives you an easily accessible performance history of every release.

Your idea of running the performance tests on both the branch and the trunk is essentially the same thing, but saving the results will probably spare you the time spent switching your working copy back and forth.

+1

We have a special setting that the user (or tester) can activate. When it is turned on, the application generates a CSV file that we can load into Excel to see a performance report.

It reports individual counts of certain operations and how much time they take. Excel presents this for us in a nice visual way.

All of the code is shared; the only drawback is the overhead of the performance tracking, but we measured it to be almost nothing. It is well optimized and very lightweight.

The beauty of this approach is that it also lets you get good feedback from customers when they experience performance problems you cannot reproduce.
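A minimal sketch of what such a toggleable tracker could look like (the PerfTracker name, its members, and the CSV column names are assumptions for illustration, not the answerer's actual code):

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.IO;
using System.Linq;

// Hypothetical tracker: enabled by a user/tester setting, aggregates counts and
// total time per operation, and dumps a CSV that can be opened in Excel.
public static class PerfTracker
{
    public static bool Enabled { get; set; }   // bound to the special setting

    private static readonly ConcurrentDictionary<string, (long Count, long TotalMs)> Stats = new();

    public static IDisposable Measure(string operation)
    {
        if (!Enabled) return EmptyScope.Instance;
        var sw = Stopwatch.StartNew();
        return new Scope(() =>
        {
            sw.Stop();
            Stats.AddOrUpdate(operation,
                (1, sw.ElapsedMilliseconds),
                (_, s) => (s.Count + 1, s.TotalMs + sw.ElapsedMilliseconds));
        });
    }

    public static void DumpCsv(string path) =>
        File.WriteAllLines(path,
            new[] { "Operation,Count,TotalMs" }
                .Concat(Stats.Select(kv => $"{kv.Key},{kv.Value.Count},{kv.Value.TotalMs}")));

    private sealed class Scope : IDisposable
    {
        private readonly Action _onDispose;
        public Scope(Action onDispose) => _onDispose = onDispose;
        public void Dispose() => _onDispose();
    }

    private sealed class EmptyScope : IDisposable
    {
        public static readonly EmptyScope Instance = new();
        public void Dispose() { }
    }
}
```

Usage in application code would then be something like `using (PerfTracker.Measure("LoadOrders")) { ... }`, followed by a single `PerfTracker.DumpCsv(...)` call when the setting is switched off.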

0

Source: https://habr.com/ru/post/650393/
