How do I correctly measure the performance of SELECT queries in Oracle?

I would like to test two queries to compare their performance, not just look at a simple execution plan. I have seen Tom Kyte do this all the time on his Ask Tom site to gather evidence for his claims.

I believe there are a lot of pitfalls in performance testing. For example, the first time I run a query in SQL Developer it may take a noticeable amount of time, but running the same query again returns almost instantly. There must be some kind of caching on the server or the client; I understand that this matters, but I am not interested in cached performance.

What are the recommended practices for performance testing? And how do I write a performance test that repeats the query? Do I just write an anonymous block with a loop? How do I collect timing information: averages, medians, standard deviations?

+4
4 answers

Oracle (and other databases) cache queries, which is the behavior you are describing. A hard parse means there is no existing plan for the query, which leaves Oracle to work out a plan from the available indexes and statistics. A soft parse is what happens when you run an identical query afterwards and get a near-instant result, because the plan already exists and Oracle reuses it. See this Ask Tom question on the subject for more details.

Also bear in mind this caveat about EXPLAIN PLAN output:

With the cost-based optimizer, execution plans can and do change as the underlying costs change. The EXPLAIN PLAN output shows how Oracle would run the SQL statement at the time the statement was explained. This can differ from the actual plan used at run time, because of differences between the execution environment and the explain plan environment.

Measuring un-cached performance gives you a worst-case baseline, but given that caching will happen in practice, un-cached tests are not representative of everyday use either.
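To address the "anonymous block and loop" part of the question: one possible sketch (the table name, query, and run count below are all illustrative, not from any particular source) runs the statement repeatedly, records each run's elapsed time with DBMS_UTILITY.GET_TIME, and lets SQL aggregate functions compute the statistics:

  -- Helper table to hold one elapsed time (in centiseconds) per run.
  create table my_timings (run_no number, elapsed_cs number);

  declare
    l_start  pls_integer;
    l_dummy  number;
  begin
    for i in 1 .. 20 loop
      l_start := dbms_utility.get_time;            -- hundredths of a second
      select count(*) into l_dummy from my_table;  -- the query under test
      insert into my_timings
        values (i, dbms_utility.get_time - l_start);
    end loop;
    commit;
  end;
  /

  -- Mean, median and standard deviation across the runs.
  select avg(elapsed_cs)    as mean_cs,
         median(elapsed_cs) as median_cs,
         stddev(elapsed_cs) as stddev_cs
  from   my_timings;

Note that after the first iteration every run will hit the caches, so this kind of loop mostly measures cached performance - which, per the above, is usually the realistic case anyway.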

+2

To build on OMG Ponies' answer, timing-based tuning is possible, but not realistic. You would have to start every run either with a fully warmed cache or with a completely empty buffer cache, and neither represents reality, especially if there is no competing load.

When I tune, it is usually against a live system with concurrent activity, and I focus on tuning logical I/O: either with extended SQL trace ( dbms_monitor.session_trace_enable / dbms_monitor.session_trace_disable ) formatted with the tkprof utility, or with SQL*Plus and set autotrace traceonly - which does all the work of the query but throws away the output, since I'm usually not interested in watching a jillion rows scroll past.
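As a sketch of the trace-based route (the sid/serial# values and trace file name below are made-up examples; look up the real ones in v$session and on the server):

  -- Find the target session (username is a placeholder).
  select sid, serial# from v$session where username = 'APP_USER';

  -- Turn on extended SQL trace for that session, including waits and binds.
  exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

  -- ... let the workload run ...

  exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

  -- Then format the raw trace file on the server with tkprof, e.g.:
  --   tkprof ORCL_ora_12345.trc report.txt sys=no sort=prsela,exeela,fchela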

The exact mechanism usually involves bind variables, using something like the following:

  variable my_bind1 number
  variable my_bind2 varchar2(30)

  begin
    :my_bind1 := 42;
    :my_bind2 := 'some meaningful string';
  end;
  /

  set timing on
  set autotrace traceonly
  [godawful query with binds]
  set autotrace off

In the results I look for the plan I would expect, a comparative figure for sorts (if any), and, most importantly, the number of consistent gets - how many Oracle blocks had to be read in consistent mode to satisfy the query. I cannot find the source of the quote, but I believe it is Cary Millsap of Method R:

"Configure logical I / O, and your physical I / O will follow."

+2

In performance tuning, if the only data point you look at is the wall clock, you will see only a small fraction of the whole picture. You need to look at least at the execution plan, as well as the I/O statistics, to work out how best to tune the query.
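One way to get past the wall clock (a sketch; my_table and the predicate are placeholders) is to pull the plan that was actually used, with per-step row and buffer counts, for the last statement run in the session:

  -- The gather_plan_statistics hint records actual rows/buffers per plan step.
  select /*+ gather_plan_statistics */ count(*)
  from   my_table
  where  some_col = 42;

  -- Show the run-time plan with actual statistics ('ALLSTATS LAST').
  select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

Comparing the estimated and actual row counts per step is often enough to spot where the optimizer went wrong.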

In addition, you need to rule out other causes of performance problems - for example, if many queries share a performance problem, the cause may not lie in any single one of them; it could be an architectural problem, significant concurrent activity in the database, or even a hardware problem.

I have had problems similar to what you describe; for example, a certain type of query that should have been very fast took 30 seconds on its first run, then dropped to a second or two. As soon as I looked at the execution plan, though, it was obvious it was doing a full table scan, because it could not use the unique index that had been created. The first run loaded most of the data into cache (in fact two levels of cache were involved - the database buffer cache and the storage-level cache on the disks), so the subsequent full table scans were very fast.

+1

What is correct? With 11g there are a few extra complications to take into account. The optimizer's bind peeking has become much smarter, and SQL plan stability has a BIG impact. Both features are there to let the database tune itself automatically, but they can also have unexpected effects during performance tests, for example because not all plan variants are known and accepted at the start of the tests. This can be the reason that a second test run, the day after the first, suddenly runs much faster without any visible change.

Since 11g, performance testing matters less than writing logically correct code. For example, a Cartesian product followed by filtering out the one wanted value is functionally correct, but in most cases it is bad code because it fetches far more data than is logically needed. If the queries fetch exactly the data they need, within the right control structure, the database's automatic tuning processes can tune the code during the maintenance windows.

In many cases the differences between the test environment and production are such that comparisons cannot safely be made. Don't get me wrong: testing is important, but mostly for the logic; compared to performance testing before 11g, there are extra steps to take. For pleasant reading, see the Oracle Database 2 Day + Performance Tuning Guide 11g Release 2 (11.2).
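To see whether SQL plan management is influencing a test run, one can check which baselines exist for the statement (a sketch; the LIKE filter is a placeholder for the query text under test):

  select sql_handle, plan_name, enabled, accepted, created
  from   dba_sql_plan_baselines
  where  sql_text like '%my_table%';

A plan that only becomes ACCEPTED between two test runs is one possible explanation for the "suddenly faster the next day" effect described above.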

+1
