I am trying to debug a rather complicated stored procedure that joins many tables (10-11). I can see that for part of the plan tree, the estimated number of rows is wildly different from the actual number of rows - in the worst case, SQL Server estimates that 1 row will be returned when 55,000 rows are actually returned!
I am trying to understand why this is. All my statistics are up to date, and I have updated statistics WITH FULLSCAN on several of the tables. I am not using any user-defined functions or table variables. As far as I can see, SQL Server should be able to estimate accurately how many rows will be returned, but it keeps choosing a plan in which it performs tens of thousands of RID lookups (when it estimates it will perform only 1 or 2).
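For reference, this is roughly the kind of statement I ran against each of the suspect tables (dbo.MyTable is a placeholder for the real table names):

-- Rebuild the table's statistics by sampling every row instead of a subset
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;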
What can I do to work out why the estimated row count is so far off?
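For context, I have been comparing estimates against actuals by capturing the per-operator profile, roughly like this (EXEC dbo.MyProc stands in for the real procedure call):

-- Emit an extra result set with actual Rows and EstimateRows per plan operator
SET STATISTICS PROFILE ON;
EXEC dbo.MyProc;
SET STATISTICS PROFILE OFF;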
UPDATE: Looking through the plan, I found one node in particular that seems suspicious - it scans a table using the following predicate:
status <> 5
AND [type] = 1
OR [type] = 2
This predicate returns the entire table (630 rows - the table scan itself is NOT the source of the poor performance), yet SQL Server estimates it will return only 37 rows. SQL Server then performs several nested loops using this output to drive RID lookups, index scans, and index seeks. Could this be the source of my massive misestimate? How can I get it to estimate a more reasonable number of rows?
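For what it's worth, since AND binds more tightly than OR, I read that predicate as (status <> 5 AND [type] = 1) OR [type] = 2. To see what the optimizer thinks the data looks like, I have been inspecting the histogram behind the estimate, roughly like this (the table and statistics names are placeholders for the real objects):

-- Show the histogram the optimizer uses to estimate selectivity on [type]
DBCC SHOW_STATISTICS ('dbo.MyTable', 'IX_MyTable_Type') WITH HISTOGRAM;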