Measuring subroutine performance: any best practices?

I would like to collect metrics for specific routines of my code to see where I can best optimize. Take a simple example: say I have a "Class" database with several "Students", and the current code makes a separate database call for each student instead of fetching them all in one batch. I would like to see how long each trip to the database takes for each student row.

This is in C#, but I think the question applies anywhere. When I get curious about a particular routine's performance, I'll create a DateTime object before the call, run the subroutine, then create another DateTime object after the call and take the difference in milliseconds between them to see how long it took. Usually I just output this to the page trace... so it's pretty lo-fi. Any best practices for this? I thought I could put the web application into some kind of "diagnostic" mode and do detailed logging/event logging for whatever I needed, but I wanted to see whether the Stack Overflow crowd had a better idea.
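For concreteness, here's a minimal sketch of that lo-fi approach; FetchStudentRow is a hypothetical stand-in for the real per-student database call:

    using System;
    using System.Diagnostics;

    class TimingDemo
    {
        // Hypothetical stand-in for the per-student database call.
        static void FetchStudentRow(int studentId)
        {
            System.Threading.Thread.Sleep(5); // simulate one round trip
        }

        static void Main()
        {
            // Snapshot DateTime before and after the call, then log the
            // difference in milliseconds to the trace output.
            DateTime before = DateTime.Now;
            FetchStudentRow(42);
            DateTime after = DateTime.Now;
            Trace.WriteLine("FetchStudentRow took " +
                            (after - before).TotalMilliseconds + " ms");
        }
    }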

+6
Tags: performance, optimization
8 answers

The approach you're taking will give you a good view of your application's performance. One thing I can recommend is using System.Diagnostics.Stopwatch instead of DateTime: DateTime is only accurate to around 16 ms, whereas Stopwatch is accurate to the CPU tick.
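A minimal sketch of the Stopwatch version (DoWork is a placeholder for whatever routine you're measuring):

    using System;
    using System.Diagnostics;

    class StopwatchDemo
    {
        // Placeholder for the routine being measured.
        static void DoWork()
        {
            System.Threading.Thread.Sleep(10);
        }

        static void Main()
        {
            // Stopwatch uses the high-resolution performance counter when
            // one is available, so it is far more precise than sampling
            // DateTime.Now before and after the call.
            Stopwatch sw = Stopwatch.StartNew();
            DoWork();
            sw.Stop();
            Console.WriteLine("Elapsed: " + sw.Elapsed.TotalMilliseconds +
                              " ms (" + sw.ElapsedTicks + " ticks)");
        }
    }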

But I would recommend supplementing it by creating custom performance counters, and by running the application under a profiler during development.

+1

For database queries, you have two sneaky caching problems: the data cache and the statement cache.

The first time you run a query, the statement is parsed, prepared, bound, and executed, and the data is read from files into the cache.

When you run the query a second time, the caches are used, and performance is often much better.

So what is the "real" performance number, the first run or the second? Some people say the "worst case" is the real number, and that's what we have to optimize. Others say "typical case", so they run the query twice and ignore the first run. Others say "average", so they run it 30 times and average them all. Others say "typical average", so they run it 31 times and average the last 30.

I suggest that "the last 30 of 31" is the most meaningful database performance number. Don't sweat the things you can't control (parsing, preparing, binding). Sweat the things you can control: data structures, I/O volume, indexes, and so on.
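As a sketch of that "last 30 of 31" protocol (RunQuery is a hypothetical stand-in for executing the real query and draining its result set):

    using System;
    using System.Diagnostics;

    class QueryTimingDemo
    {
        // Hypothetical stand-in for executing the real query and
        // reading all of its rows.
        static void RunQuery()
        {
            System.Threading.Thread.Sleep(8);
        }

        static void Main()
        {
            const int warmup = 1;    // cold-cache runs to discard
            const int measured = 30; // runs included in the average
            double totalMs = 0;
            Stopwatch sw = new Stopwatch();

            for (int i = 0; i < warmup + measured; i++)
            {
                sw.Reset();
                sw.Start();
                RunQuery();
                sw.Stop();
                if (i >= warmup) // skip the first, cold-cache run
                    totalMs += sw.Elapsed.TotalMilliseconds;
            }

            Console.WriteLine("Average over the last " + measured + " of " +
                              (warmup + measured) + " runs: " +
                              (totalMs / measured) + " ms");
        }
    }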

+3

I sometimes use this method myself and find it fairly accurate. The problem is that in large applications with a substantial amount of debug logging, it can be painful to dig this information out of the logs. So I use external tools (I mostly program in Java and use JProbe) that let me see the average and total time for my methods, how much time is spent exclusively in a particular method (as opposed to the cumulative time spent in the method plus anything it calls), as well as memory and resource allocation.

These tools can help you measure the performance of a whole application, and if you do a significant amount of development in an area where performance matters, it's worth researching the available tools and learning how to use them.

+2

There are several profilers out there, but to be honest, I think your approach is better. The profiler approach is overkill. Perhaps profilers are worth using if you genuinely have no idea where the bottleneck is. I would rather spend some time analyzing the problem and putting in a few strategic print statements than figure out how to instrument the application for profiling and then wade through giant reports in which every executable line of code is timed.

+1

If you're working in .NET, I would recommend checking out the Stopwatch class. The times you get from it will be much more accurate than the equivalent measurements using DateTime.

I also recommend checking out ANTS Profiler for scenarios where performance is critical.

+1

It's worth considering investing in a good commercial profiler, especially if you expect to be doing this kind of thing a second time.

The one I use, JProfiler, works in the Java world and can attach to an already running application, so no special setup is required (at least with recent JVMs).

It very quickly produces a sorted list of hot spots in your code, showing which methods your code spends most of its time in. It filters sensibly by default and lets you customize the filtering further if necessary, which means you can ignore the internals of third-party libraries and single out the methods of yours that are taking all the time.

On top of that, you get lots of other useful reports on what your code is doing. It paid for the cost of its license in the time it saved me the first time I used it; I didn't have to add lots of logging statements and build a mechanism to crunch the output: the profiler's developers had already done all of that for me.

I'm not affiliated with ej-technologies in any way, other than being a very happy customer.

+1

I use this method and find it very accurate.

0

I think you have a good approach. I'd recommend writing "machine-friendly" records to the log files so they're easier to parse: something like CSV or another delimited format with a consistent record structure.
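For example, one delimited record per measurement (the file name and column layout here are just an illustration; FetchStudentRow is a placeholder for the routine under test):

    using System;
    using System.Diagnostics;
    using System.IO;

    class CsvTimingLog
    {
        // Placeholder for the routine being measured.
        static void FetchStudentRow(int studentId)
        {
            System.Threading.Thread.Sleep(5);
        }

        static void Main()
        {
            Stopwatch sw = Stopwatch.StartNew();
            FetchStudentRow(42);
            sw.Stop();

            // One comma-separated record per measurement:
            // timestamp, routine name, elapsed milliseconds.
            File.AppendAllText("timings.csv",
                DateTime.UtcNow.ToString("o") + ",FetchStudentRow," +
                sw.Elapsed.TotalMilliseconds + Environment.NewLine);
        }
    }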

0
