Quantifying the overhead of New Relic performance monitoring in a Python Django application

I am working on a large Django application (v1.5.1) that spans several application servers, MySQL servers, etc. Before enabling New Relic on all of the servers, I want to get an idea of the overhead I will incur per transaction.

Ideally, I would also like to distinguish between the overhead of application monitoring and that of server monitoring.

Does anyone know of generally accepted numbers for this? Perhaps a site that has published such an investigation, or steps we could follow to conduct the investigation ourselves.
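For example, I imagine a minimal harness along the lines of the sketch below (the URL and request count are placeholders for our own setup): run it once with the agent disabled and once with it enabled, then compare the latency distributions.

```python
import time
import urllib.request

# Hypothetical endpoint on one of our application servers.
URL = "http://app-server.example.com/some/typical/view/"
N = 500  # number of requests to sample

def sample_latencies(url, n):
    """Issue n sequential GET requests and return per-request latencies."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        latencies.append(time.perf_counter() - start)
    return latencies

latencies = sorted(sample_latencies(URL, N))
print("mean:   %.1f ms" % (1000 * sum(latencies) / len(latencies)))
print("median: %.1f ms" % (1000 * latencies[len(latencies) // 2]))
print("p95:    %.1f ms" % (1000 * latencies[int(len(latencies) * 0.95)]))
```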

+8
python django newrelic
1 answer

For the Python agent monitoring a Django web application, the overhead of each request depends on how many instrumented functions are executed within that specific request. This is because full profiling is not performed; instead, only specific functions of interest are instrumented. The overhead is therefore that of executing a wrapper around each call to an instrumented function, and not of any nested calls, unless those nested functions are themselves instrumented.
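Conceptually, the instrumentation amounts to something like the following sketch. This is a simplified stand-in, not the agent's actual implementation: a wrapper that records a timing metric around each call to a function of interest, so the per-call cost is a constant amount of bookkeeping around the original call.

```python
import functools
import time

def trace(metrics):
    """Simplified stand-in for an agent instrumentation wrapper.

    Records wall-clock time for each call to the wrapped function.
    The real agent does more (naming, transaction context, sampling),
    but the cost has this same shape: fixed bookkeeping per call.
    """
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                metrics.setdefault(func.__qualname__, []).append(
                    time.perf_counter() - start)
        return wrapper
    return decorator

metrics = {}

@trace(metrics)
def my_view_helper():
    pass  # untraced nested calls here add no instrumentation overhead
```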

The functions instrumented in Django itself are the middleware functions and the view handler, plus template rendering and the function within the template renderer that handles each block of the template. Beyond Django itself, the low-level functions of database client modules are instrumented for query execution, as are memcache calls, external web requests, and so on.
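If you want visibility into functions beyond those default hook points, the agent also exposes an API for tracing your own functions. A minimal sketch, assuming the standard `newrelic` package is installed and the agent is active for the process:

```python
import newrelic.agent

@newrelic.agent.function_trace()
def expensive_helper(queryset):
    # This call will now appear as a segment in the transaction
    # trace, at the cost of the same per-call wrapper overhead
    # described above.
    return list(queryset)
```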

This means that if the execution of a specific web request only passes through 100 instrumented functions, then only the execution of those functions incurs additional overhead. If instead the view handler performs a large number of distinct database queries, or renders a very complex template, the number of instrumented functions executed could be much larger, and thus the overhead for that web request would be greater. That said, a view handler doing more work will usually already have a longer response time than a less complex one.

In other words, the overhead per request is not fixed: it depends on how much work is being done, or more precisely on the number of instrumented functions invoked. It is therefore not possible to quantify the situation and give you a fixed per-request figure.
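As a rough way to see what that per-call cost looks like in isolation, one can micro-benchmark a trivially wrapped call against a direct call. This measures a generic Python wrapper, not the agent itself, so treat the numbers as indicative only:

```python
import time
import timeit

def work():
    return sum(range(50))  # stand-in for a function of interest

def wrapped():
    # Fixed bookkeeping around a single call, as a wrapper would add.
    start = time.perf_counter()
    try:
        return work()
    finally:
        _ = time.perf_counter() - start

n = 100_000
direct = timeit.timeit(work, number=n)
traced = timeit.timeit(wrapped, number=n)
print("extra cost per wrapped call: ~%.2f us" % ((traced - direct) / n * 1e6))
# A request passing through 100 such instrumented calls would pay
# roughly 100x that figure in added latency.
```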

All that said, there will be some overhead, and the overall target is for it to be around 5%.

What usually happens is that the insight gained from having performance metrics available means that, for most customers, there are quite simple improvements that can be found almost immediately. Making those changes can quickly bring response times below what you had before you started monitoring, so you end up better off than when you had no monitoring at all. With further digging and tuning, the improvements can be even more dramatic. Paying attention to certain aspects of the performance metrics can also help you configure your WSGI server better, perhaps utilizing it more efficiently and reducing the number of hosts required, and thus your hosting costs.

+8
