What is the best way to compare two variants of a SQL query for performance?

I have a SQL Server 2005 database running in a virtual environment.

To simplify, let's say I have two SELECT queries. They both return the same results, but I'm trying to work out which one performs better.

Normally I would spin up a local database, load some data, and use timings to compare one variant against the other.

But in this case, since the database is large and this is a test box, the client put it on a host that is also serving other virtual machines.

The database is too large to pull down locally to test against (at least for now).

But my main problem is that when I run queries against the server, the timings are all over the place. I can run the *exact* same query 4 times and get timings of 7 seconds, 8 minutes, 3:45 minutes, and 15 minutes.

My first thought was to use SET STATISTICS IO ON.
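For example, something like this (dbo.Orders and the date filter are made-up stand-ins for my real queries, not the actual schema):

    SET STATISTICS IO ON;

    -- Variant 1
    SELECT CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20080101';

    -- Variant 2 would run here in the same session for comparison.

    SET STATISTICS IO OFF;

    -- The Messages tab then reports, per table, lines like:
    --   Table 'Orders'. Scan count 1, logical reads 1283, physical reads 9, ...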

But this mainly gives read and write statistics for the tables being queried, which, depending on how the query variants differ (temp tables vs. views vs. joins, etc.), can't really be compared directly, only in aggregate.

Then I looked at SET STATISTICS TIME ON and just using CPU time, but that seems to throw away all the IO, which also doesn't make a good baseline.
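For concreteness, this is the sort of output I'm looking at (the query itself is a stand-in):

    SET STATISTICS TIME ON;

    SELECT COUNT(*)
    FROM dbo.Orders              -- placeholder table
    WHERE OrderDate >= '20080101';

    SET STATISTICS TIME OFF;

    -- Messages tab output looks like:
    --   SQL Server Execution Times:
    --      CPU time = 390 ms,  elapsed time = 482000 ms.
    -- CPU time is fairly repeatable run to run; elapsed time is what swings.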

My question is: is there some other method of statistics or performance analysis that would be useful in this kind of situation?

STATISTICS IO is a reasonable start, but the more reliable comparison here is the execution plan for each query.

In Management Studio, Query → Display Estimated Execution Plan shows how SQL Server plans to execute the query without actually running it, while Query → Include Actual Execution Plan shows the plan it really used once the query completes.

SET SHOWPLAN_TEXT, SET SHOWPLAN_ALL, and SET SHOWPLAN_XML give you the same plan information as text output, which is easier to save and compare.
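A minimal sketch of the XML variant (the SELECT is a placeholder); note that the SET statement has to be alone in its own batch:

    SET SHOWPLAN_XML ON;
    GO

    -- Compiled but NOT executed: the result set is the plan XML,
    -- which can be saved as a .sqlplan file and viewed graphically in SSMS.
    SELECT CustomerID, OrderDate
    FROM dbo.Orders
    WHERE OrderDate >= '20080101';
    GO

    SET SHOWPLAN_XML OFF;
    GO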

Comparing the plans, and the estimated operator costs within them, should show which variant is cheaper regardless of how loaded the virtualized host happens to be at the moment.
