Why is Oracle ignoring the “perfect” index?

I have this table:

 create table demo (
   key   number(10)   not null,
   type  varchar2(3)  not null,
   state varchar2(16) not null,
   ... lots more columns ...
 )

and this index:

 create index demo_x04 on demo(key, type, state); 

When I run this query

 select * from demo where key = 1 and type = '003' and state = 'NEW' 

EXPLAIN PLAN shows that it performs a full table scan. So I dropped the index and created it again, but EXPLAIN PLAN still shows a full table scan. How can that be?

Some background: this is historical data, so the workflow is that I look up the row in the CLEARED state and insert a new row in the NEW state (copying a few values over from the old row). Then the old row is updated to USED. The table therefore grows continuously. I noticed that the cardinality of the index was 0 (even though I have thousands of distinct values). After recreating the index the cardinality went up, but the CBO still didn't like the index any better.

The next morning, Oracle suddenly fell in love with the index (perhaps it slept on it) and began using it, but not for long. After a while, processing dropped from 50 rows/s to 3 rows/s, and I was back to seeing TABLE ACCESS FULL. What's happening?

In my case, I need to process about a million rows, which I change in batches of roughly 50. Is there some command I should run after committing to update/reorganize the index, or something like that?

I am on Oracle 10g.

[EDIT] I have 969,491 distinct keys in this table, 3 types and 3 states.

5 answers

What happens if you provide an index hint? Try the following:

 SELECT /*+ INDEX (demo demo_x04) */ * FROM demo WHERE key = 1 AND type = '003' AND state = 'NEW'; 

It sounds like what happened overnight is that the table was analyzed. Then, as your batch processing continued, the data changed enough that the table statistics went stale again, and the optimizer stopped using the index.

Add the hint and see whether EXPLAIN PLAN gives a different plan and the query performs better.

Oh, and Tony's answer about analyzing the table is good general practice, although 10g databases are quite good at self-maintenance in this regard. If your process makes a lot of changes, the statistics can go stale quickly. If running an analyze when your process starts to bog down improves things for a while, then you know this is the problem.

To update the statistics for a table, use the dbms_stats.gather_table_stats procedure.

For instance:

 exec dbms_stats.gather_table_stats('OWNER', 'DEMO');
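If the index statistics also need refreshing, `cascade => TRUE` covers them in the same call. A sketch (the schema name 'OWNER' is a placeholder):

```sql
BEGIN
  dbms_stats.gather_table_stats(
    ownname          => 'OWNER',                     -- placeholder schema
    tabname          => 'DEMO',
    cascade          => TRUE,                        -- also gather stats on DEMO's indexes
    estimate_percent => dbms_stats.auto_sample_size  -- let Oracle choose the sample size
  );
END;
/
```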


Has the table been analyzed recently? If Oracle thinks it is very small, it may not even consider using the index.

Try the following:

 select last_analyzed, num_rows from user_tables where table_name = 'DEMO'; 

NUM_ROWS tells you how many rows Oracle thinks the table contains.
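The same check works for the index itself; DISTINCT_KEYS is the cardinality figure you saw as 0:

```sql
-- Per-index statistics for the DEMO table
SELECT index_name, last_analyzed, distinct_keys, num_rows
  FROM user_indexes
 WHERE table_name = 'DEMO';
```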


"The next morning, Oracle suddenly fell in love with the index (perhaps it slept on it)" - the automatic DBMS_STATS job probably ran overnight.

Typically, I see one of three reasons for a full table scan where an index is available. First, the optimizer thinks the table is empty, or at least very small. I suspect this was the initial problem: it is faster to full-scan a table consisting of only a few blocks than to go through an index.

Second, the query may be written in a way that makes the index unusable.

 "select * from demo where key = 1 and type = '003' and state = 'NEW'" 

Are you really using hard-coded literals in the query? If not, your bind variable data types may be wrong (for example, a character value bound against key). That would require the numeric key to be converted to a character for the comparison, which would make the index almost useless.
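One way to check for an implicit conversion is to look at the Predicate Information section of the plan; a sketch (the bind name is illustrative):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM demo WHERE key = :k AND type = '003' AND state = 'NEW';

-- The Predicate Information section shows any TO_NUMBER/TO_CHAR or
-- INTERNAL_FUNCTION wrapper; a conversion applied to the column side
-- of a predicate prevents the index from being used.
SELECT * FROM TABLE(dbms_xplan.display);
```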

The third reason is that the optimizer believes the query will process most of the rows in the table. type and state look pretty low-cardinality. Perhaps many rows share the same key value?


A comment on the processing you describe: it sounds like you are doing row-by-row processing with intermittent commits, and I'd urge you to rethink that if you can. The update/insert mechanism can be converted to a MERGE statement, and the entire data set can then be processed in a single statement with one commit at the end. It will almost certainly be faster and use fewer resources than your current method.
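The exact rewrite depends on your processing, but as a rough set-based sketch of the insert-then-flip workflow from the question (the copied columns are placeholders, and it assumes no concurrent session changes state between the statements):

```sql
-- Insert a NEW row for every CLEARED row, copying values across
INSERT INTO demo (key, type, state /*, copied columns... */)
SELECT key, type, 'NEW' /*, copied columns... */
  FROM demo
 WHERE state = 'CLEARED';

-- Then mark the old rows as USED
UPDATE demo
   SET state = 'USED'
 WHERE state = 'CLEARED';

COMMIT;
```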


Is the key column always the same value? If so, I'm not sure that visiting the index helps the query, since every row would have to be examined anyway. In that case, declare the index without the key column. You could also try:

 select key, type, state from demo where key = 1 and type = '003' and state = 'NEW' 

which (if my guess is correct) still has to look at every row, but can be answered from the index alone, since all the columns in the result set are now covered.

I'm just guessing, based on your statement that the index shows a cardinality of 0.
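Both suggestions as DDL sketches (the index names are made up):

```sql
-- Index without the (possibly constant) key column
CREATE INDEX demo_x05 ON demo (type, state);

-- Or keep key as a trailing column, so the narrower SELECT above can be
-- answered from the index alone, without visiting the table
CREATE INDEX demo_x06 ON demo (type, state, key);
```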

