Continuous monitoring of the performance of .NET applications in production?

Given a fairly typical .NET 4 system in an SOA environment (Windows Server 2008 R2, RESTful web services on IIS 7, NServiceBus messaging services, SQL Server 2008 R2, etc.), what are the best practices or de facto solutions (ideally without a hefty price tag) for 24x7 performance monitoring in production?

I don't mean how much CPU/memory/disk IO the system consumes, but rather, for example: how many calls to createAccount() completed per minute, what the average time to generate a response is, and how to detect unusual spikes in the delta between, say, generateResponseStarted and generateResponseComplete (the method has been invoked, which in turn may call out to a third party, and the response is ready to be returned).

After some searching, it seems the options are either low-level profilers (for example, dotTrace) or implementing custom performance counters and consuming them with PerfMon or another product such as OpManager.

What would you recommend? Will instrumenting a live application with performance counters significantly degrade performance on a production system? If not, are there any good libraries that simplify implementing them in .NET? If so, how do people monitor the performance of their applications beyond memory/disk/CPU?
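For reference, publishing a custom counter with the built-in System.Diagnostics API looks roughly like the sketch below. The category and counter names are made up for illustration; creating the category requires admin rights and is normally done once at install time, not per request:

```csharp
using System.Diagnostics;

public static class Metrics
{
    // Hypothetical category/counter names -- choose your own.
    const string Category = "MyApp";
    const string CallsPerSec = "createAccount calls/sec";

    public static void EnsureCategory()
    {
        // Requires admin rights; run from an installer, not the app itself.
        if (PerformanceCounterCategory.Exists(Category)) return;

        var counters = new CounterCreationDataCollection
        {
            new CounterCreationData(CallsPerSec,
                "Completed createAccount calls per second",
                PerformanceCounterType.RateOfCountsPerSecond32)
        };
        PerformanceCounterCategory.Create(Category,
            "Application-level metrics",
            PerformanceCounterCategoryType.SingleInstance,
            counters);
    }

    static readonly PerformanceCounter _calls =
        new PerformanceCounter(Category, CallsPerSec, readOnly: false);

    // Call this at the end of each successful createAccount().
    public static void RecordAccountCreated() => _calls.Increment();
}
```

Increment() is essentially an interlocked update on shared memory, so its per-call overhead is negligible next to a web request; the counter can then be read from PerfMon on the production box without touching the application.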


@Ryan Hayes

Thanks. I'm looking for a way to spot unusual slowdowns or spikes on production systems. For example: everything was fine during stress testing, but then a third party we depend on has problems, or the DB slows down due to lock contention, or the SAN gives out, or some other unexpected scenario hits. Low-level profiling carries too much overhead to run continuously, and turning it on only when there is a problem is too late at that point. Plus, we would have no historical data to compare against (I need some kind of alerting when the delta goes outside an acceptable threshold). I'd like to hear from people's own experience how they monitor the performance of their production systems: what is the best monitoring approach that is not about memory/CPU/disk?

+7
performance monitoring production continuous
3 answers

You can try AlertGrid. It might be the solution to your problem.

You can send various parameters to AlertGrid from your application (new account name, execution time of some important piece of logic, etc.). The AlertGrid service can then do several things with your data. First of all, it can evaluate notification rules built on the parameters you send (for example, if "time to do something important" > X seconds → send an SMS to the person responsible).

In two weeks, AlertGrid will gain many new features. The most important one for you will probably be the ability to chart the parameters received from your system.

Please note that AlertGrid cannot detect parameters in your systems by itself - you need to send them. This may look like extra work, but we believe it is comparable to the time needed to install and configure a specialized tool. On the other hand, thanks to this approach AlertGrid avoids some limitations: it can be integrated with anything that can send HTTP requests.
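Since the integration point is just "anything that can send HTTP requests", pushing a measurement could be as small as the sketch below. The endpoint URL and field names here are invented placeholders - consult the AlertGrid documentation for the real API; only the general idea (a plain HTTP POST carrying your parameter) comes from this answer:

```csharp
using System;
using System.Collections.Specialized;
using System.Net;

static class AlertGridSender
{
    // Hypothetical endpoint and field names -- check the real API docs.
    const string Url = "https://example.alertgrid.invalid/signal";

    public static void Send(string apiKey, string paramName, string value)
    {
        using (var client = new WebClient())
        {
            var data = new NameValueCollection
            {
                { "api_key", apiKey },   // your account key
                { paramName, value }     // e.g. "generateResponse_ms", "1850"
            };
            client.UploadValues(Url, "POST", data);
        }
    }
}
```

In practice you would send this asynchronously (or from a background queue) so a slow or unreachable monitoring endpoint can never stall a production request.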

I believe it will be much easier to understand once you create an AlertGrid account and go through its interactive tutorial.

As you may have noticed, I am a developer on the AlertGrid team :)

Disclaimer: at the time of writing, we know that AlertGrid prices will drop in the near future, so don't judge by them right now; contact our support team for more information on pricing. A free account is available and should be enough to get started.

+2

The question is, what are you trying to learn from performance monitoring?

  • Do you want to make your code faster? Then I would suggest using profiling tools in a test environment to find out where you can improve your code.

  • Do you want to know the maximum load your system can take? Then I suggest performing load testing in a test environment. If you know exactly how hard you can push your system before it falls over, you won't need to introduce monitoring into production.

For production, you probably want to maximize throughput. To do that, the usual approach is to stress the test environment and gather solid metrics up front, so you don't have to install performance monitors in production at all. In production you just want to know when you hit that peak, and then degrade gracefully or fail over. Generally, good logging is the best way to monitor application performance (beyond hardware metrics) and to record exceptional performance events.
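That kind of logging can be as simple as the sketch below: wrap each interesting call in a Stopwatch-based scope and log a warning when it exceeds a threshold, which also covers the started/complete delta the question asks about. The threshold and the logging call are placeholders - route the output to log4net, NLog, or whatever you already use:

```csharp
using System;
using System.Diagnostics;

// A disposable timer: one 'using' block logs the call's duration
// whenever it crosses the acceptable threshold.
sealed class TimedScope : IDisposable
{
    readonly string _name;
    readonly TimeSpan _threshold;
    readonly Stopwatch _watch = Stopwatch.StartNew();

    public TimedScope(string name, TimeSpan threshold)
    {
        _name = name;
        _threshold = threshold;
    }

    public void Dispose()
    {
        _watch.Stop();
        if (_watch.Elapsed > _threshold)
            // Placeholder: replace with your real logger.
            Console.WriteLine("WARN {0} took {1} ms",
                _name, _watch.ElapsedMilliseconds);
    }
}

// Usage:
// using (new TimedScope("generateResponse", TimeSpan.FromSeconds(2)))
// {
//     GenerateResponse(); // the call that may hit a slow third party
// }
```

Because only over-threshold calls are logged, the steady-state overhead is one Stopwatch per call, and the log itself becomes the historical record to compare spikes against.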

Every system is different, and your mileage may vary. Take this as a suggestion, not as the way everyone does it, because there are always exceptional cases where you may have to profile in production.

0

We use Nagios for local monitoring (CPU, disk space, etc.) and AlertFox for web transaction monitoring (the end-user perspective). Of course, the latter only makes sense if your site (?) is publicly accessible.

Will instrumenting a live application with performance counters significantly degrade performance on a production system?

We have the Nagios Windows server plugins in place and see no problems with them.

0
