Reduce SQL Tracing Overhead with Filters

We have a SQL Server 2000 instance that runs a variety of jobs at different times of the day, or even on different days of the month. Usually we use SQL Profiler to run traces for very short periods when troubleshooting performance issues, but in this case that will not give me a good overall picture of the types of queries run against the database over the course of a day, week, or month.

How can I minimize the performance overhead of a long-running SQL trace? Here is what I already know:

  • Use server-side tracing (sp_trace_create) instead of the SQL Profiler user interface.
  • Trace to a file, not to a database table (which would add extra write load on the database server).
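For context, the server-side file trace I have in mind is set up roughly like this (a minimal sketch; the file path, the 50 MB size cap, and the event/column choices are illustrative, not prescriptive):

```sql
-- Create a server-side trace that writes to a file (size cap in MB)
DECLARE @TraceID int, @maxfilesize bigint, @on bit
SET @maxfilesize = 50
SET @on = 1

-- 0 = no options; the server appends .trc to the file name itself
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\longrunning', @maxfilesize, NULL

-- Capture SQL:BatchCompleted (event 12) with
-- TextData (column 1), Duration (column 13), and Reads (column 16)
EXEC sp_trace_setevent @TraceID, 12, 1, @on
EXEC sp_trace_setevent @TraceID, 12, 13, @on
EXEC sp_trace_setevent @TraceID, 12, 16, @on

-- Start the trace (status 1 = start)
EXEC sp_trace_setstatus @TraceID, 1
```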

My question is really about filters. If I add a filter to log only queries that run longer than a certain duration, or exceed a certain number of reads, the server still has to examine every event to decide whether to record it, right? So even with a filter, will tracing create an unacceptable level of overhead on a server that is already on the verge of unacceptable performance?

+6
sql-server sql-server-2000 sqlprofiler
4 answers

I found an article that actually measures the performance impact of a SQL Profiler session versus a server-side trace:

http://sqlblog.com/blogs/linchi_shea/archive/2007/08/01/trace-profiler-test.aspx

That was really my main question: how to make sure I don't bog down my production server while tracing. It seems that if you do it right, the overhead is minimal.

+2

Adding filters reduces the overhead of event collection and also keeps the server from writing trace records you do not need.
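As a sketch, a duration filter on a server-side trace looks like this (assuming @TraceID holds the id returned by sp_trace_create; the 5-second threshold is just an example):

```sql
-- Keep only events whose Duration (column 13) is >= 5000
-- On SQL Server 2000, Duration is reported in milliseconds
-- Arguments: trace id, column id, logical operator (0 = AND),
--            comparison operator (4 = greater than or equal), value
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, 5000
```

Note that the filter only controls what gets written to the trace file; the server still has to evaluate each candidate event against the filter, which is the residual cost the question asks about.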

As for whether the trace will create an unacceptable level of overhead, you really just have to test it, and stop it if complaints start coming in. Feeding the resulting trace file into the index tuning tools may then improve performance for everyone going forward.
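Along those lines, stopping the trace and pulling the captured file back in for analysis can be sketched like this (assuming the trace id and file path from your own setup; mine are illustrative):

```sql
-- Stop the trace (status 0), then close and delete its definition (status 2)
EXEC sp_trace_setstatus @TraceID, 0
EXEC sp_trace_setstatus @TraceID, 2

-- Load the captured file back into a result set for analysis
SELECT TextData, Duration, Reads
FROM ::fn_trace_gettable(N'C:\Traces\longrunning.trc', DEFAULT)
ORDER BY Duration DESC
```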

+2

Actually, you may not want the server to process the trace, as that can cause problems: "When the server processes the trace, no events are dropped, even if it means sacrificing server performance to capture all events. If Profiler processes the trace, it skips events if the server is getting too busy." (From the best-practice examples for exam 2000 7031.)

+2

In fact, it's possible to collect more detailed measurements than you can get from Profiler, and to do it 24x7 across the entire instance without any meaningful overhead. That avoids having to figure out in advance what you need to filter, which can be difficult.

Full disclosure: I work for one of the vendors that provide such tools, but whether you use ours or someone else's, this kind of tool can help you solve the underlying problem.

Read more about our tool here http://bit.ly/aZKerz

0
