SQL Profiler and Tuning Advisor

We have database performance problems. I have plenty of experience with the .NET profiler and routinely do the analysis on the application side, but like many developers I tend to wait until far too late (when the problem has already hit) to start analysing and gathering data on how to fix it.

This probably isn't a question with a single answer; it is more of a "HELP, I AM A DB IDIOT" post looking for any guidance, recommendations and experience in tracking down these problems.

As for the setup: we are on SQL Server 2005, and I have very limited production access, so running the Database Engine Tuning Advisor and SQL Profiler directly isn't really an option; through the portal interface I can copy and paste T-SQL, but not much more than that. One of the key things I would like to do is capture a snapshot of the production queries and calls so I can load them into the Tuning Advisor in a lower environment, where I can hammer the database as hard as I like and get its recommendations.
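For context, the sort of thing I can paste through the portal is a plain server-side trace script. The sketch below is only what I have in mind, not something we run today; the output path, events and file size are placeholders:

-- Minimal server-side trace sketch (SQL Server 2005); path and sizes are placeholders
declare @traceid int, @maxsize bigint, @on bit
set @maxsize = 100      -- MB per rollover file
set @on = 1

-- Option 2 = TRACE_FILE_ROLLOVER; SQL Server appends .trc to the file name
exec sp_trace_create @traceid output, 2, N'C:\Traces\prod_workload', @maxsize, NULL

-- Capture statement text and duration for RPC:Completed (10) and SQL:BatchCompleted (12)
exec sp_trace_setevent @traceid, 10, 1,  @on   -- TextData
exec sp_trace_setevent @traceid, 10, 13, @on   -- Duration
exec sp_trace_setevent @traceid, 12, 1,  @on
exec sp_trace_setevent @traceid, 12, 13, @on

exec sp_trace_setstatus @traceid, 1            -- start the trace
-- later: exec sp_trace_setstatus @traceid, 0  -- stop
--        exec sp_trace_setstatus @traceid, 2  -- close and delete the trace definition

The idea would be to copy the resulting .trc files down to the lower environment and feed them to the Tuning Advisor there.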

+7
performance sql
4 answers

The script below can be used to check whether you have chosen the right indexes. You want to see how often each index is used for seeks and compare that with how often it is updated: seek performance is bought at the cost of update performance. Worse, an index that is updated frequently becomes fragmented and its statistics go stale.

You should also compare range_scan_count with singleton_lookup_count. A range scan is preferable to a singleton lookup. A singleton lookup usually points to an index seek followed by a key lookup: for each row found by the index seek, SQL Server fetches the data page from the clustered index. That is fine for, say, a couple of thousand rows, but not for millions.
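As a quick illustration before the script itself: when that seek-plus-key-lookup pattern shows up for large row counts, the usual fix is a covering index. The table and column names here are invented purely for the example:

-- Hypothetical table: without the INCLUDE columns, every row found by the seek on
-- OrderDate would trigger a key lookup into the clustered index for CustomerID and Total.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID, Total);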

CREATE PROCEDURE [ADMIN].[spIndexCostBenefit]
    @dbname [nvarchar](75)
WITH EXECUTE AS CALLER
AS
--set @dbname='Chess'
declare @dbid nvarchar(5)
declare @sql nvarchar(2000)

select @dbid = convert(nvarchar(5), db_id(@dbname))

set @sql = N'select ''object'' = object_name(iu.object_id, iu.database_id)
    , i.name
    , ''user reads'' = iu.user_seeks + iu.user_scans + iu.user_lookups
    , ''system reads'' = iu.system_seeks + iu.system_scans + iu.system_lookups
    , ''user writes'' = iu.user_updates
    , ''system writes'' = iu.system_updates
from ' + @dbname + '.sys.dm_db_index_usage_stats iu
    , ' + @dbname + '.sys.indexes i
where iu.database_id = ' + @dbid + '
    and iu.index_id = i.index_id
    and iu.object_id = i.object_id
    and (iu.user_seeks + iu.user_scans + iu.user_lookups) < iu.user_updates
order by ''user reads'' desc'

exec sp_executesql @sql

set @sql = N'SELECT ''object'' = object_name(o.object_id, o.database_id),
    o.index_id,
    ''usage_reads'' = user_seeks + user_scans + user_lookups,
    ''operational_reads'' = range_scan_count + singleton_lookup_count,
    range_scan_count,
    singleton_lookup_count,
    ''usage writes'' = user_updates,
    ''operational_leaf_writes'' = leaf_insert_count + leaf_update_count + leaf_delete_count,
    leaf_insert_count,
    leaf_update_count,
    leaf_delete_count,
    ''operational_leaf_page_splits'' = leaf_allocation_count,
    ''operational_nonleaf_writes'' = nonleaf_insert_count + nonleaf_update_count + nonleaf_delete_count,
    ''operational_nonleaf_page_splits'' = nonleaf_allocation_count
FROM ' + @dbname + '.sys.dm_db_index_operational_stats(' + @dbid + ', NULL, NULL, NULL) o,
    ' + @dbname + '.sys.dm_db_index_usage_stats u
WHERE u.object_id = o.object_id
    AND u.index_id = o.index_id
ORDER BY operational_reads DESC, operational_leaf_writes, operational_nonleaf_writes'

exec sp_executesql @sql
GO
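Assuming the procedure is created in a database you have execute rights on, the call is simply (the database name is a placeholder):

EXEC [ADMIN].[spIndexCostBenefit] @dbname = N'YourDatabase';

The first result set lists indexes that are written more often than they are read; the second breaks the reads down into range scans and singleton lookups.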
+7

Here are some links that should help you get started on your performance hunt.

What resources exist for tuning database performance?

Featured Performance Tuning Tricks

What common methods can be used to optimize SQL queries?

The best way to improve performance (and enable crash recovery)

+5

If you can get Profiler to save its events to a table, then there is nothing stopping you from pointing the Database Engine Tuning Advisor (DTA) at that trace table to tune the database. Personally, though, I don't use the DTA at all: it takes a long time to run, and I want more control over what happens.
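As a side note, a trace saved to a .trc file can also be pulled into a table afterwards with fn_trace_gettable, and that table (or the file itself) can then serve as the DTA workload. The path and target table below are placeholders:

-- Load a saved trace file into a table; path and table name are placeholders
SELECT *
INTO dbo.ProdWorkload
FROM sys.fn_trace_gettable(N'C:\Traces\prod_workload.trc', DEFAULT);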

If you can convince the server owner to create a new database called "SQLToolkit" and grant you rights to execute procedures in it, then I have a few procedures that will help you choose the right indexes.

CREATE PROCEDURE [ADMIN].[spMissingIndexes]
AS
SELECT mid.statement,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns,
    migs.user_seeks,
    migs.user_scans,
    migs.last_user_seek,
    migs.avg_user_impact,
    user_scans,
    avg_total_user_cost,
    avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) AS [weight]
    --, migs.*
    --, mid.*
FROM sys.dm_db_missing_index_group_stats AS migs
INNER JOIN sys.dm_db_missing_index_groups AS mig
    ON (migs.group_handle = mig.index_group_handle)
INNER JOIN sys.dm_db_missing_index_details AS mid
    ON (mig.index_handle = mid.index_handle)
ORDER BY avg_total_user_cost * avg_user_impact * (user_seeks + user_scans) DESC;
GO
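Once the server owner has created the SQLToolkit database, the one-off setup and the call might look roughly like this; the ADMIN schema matches the procedures in this thread, and the login name is a placeholder:

-- One-off setup, run by the server owner (login name is a placeholder)
CREATE DATABASE SQLToolkit;
GO
USE SQLToolkit;
GO
CREATE SCHEMA [ADMIN];
GO
-- ... create the procedures from this thread here ...
GRANT EXECUTE ON SCHEMA::[ADMIN] TO [YourLogin];
GO

-- Then, to list candidate missing indexes, heaviest first:
EXEC [ADMIN].[spMissingIndexes];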
+5

As requested, here is another useful script for determining how often, and for how long, a particular index is blocked by SQL Server's locking mechanism:

CREATE PROCEDURE [ADMIN].[spIndexContention]
    @dbname sysname
WITH EXECUTE AS CALLER
AS
declare @dbid int
select @dbid = DB_ID(@dbname)

declare @sql nvarchar(1000)
SET @sql = N'SELECT dbname = DB_NAME(database_id),
    tablename = object_name(s.object_id, s.database_id),
    indexname = i.name,
    i.index_id,
    row_lock_count,
    row_lock_wait_count,
    [block %] = cast(100.0 * row_lock_wait_count / (1 + row_lock_count) as numeric(15,2)),
    row_lock_wait_in_ms,
    [avg row lock waits in ms] = cast(1.0 * row_lock_wait_in_ms / (1 + row_lock_wait_count) as numeric(15,2))
FROM sys.dm_db_index_operational_stats(' + convert(nvarchar(5), @dbid) + ', NULL, NULL, NULL) s
INNER JOIN ' + @dbname + N'.sys.indexes i
    ON i.object_id = s.object_id
    AND i.index_id = s.index_id
ORDER BY row_lock_wait_count desc'

print @sql
exec sp_executesql @sql
GO
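As with the procedures above, the call just takes a database name (placeholder below); the [block %] and [avg row lock waits in ms] columns are the ones to watch:

EXEC [ADMIN].[spIndexContention] @dbname = N'YourDatabase';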
+4
