The 2.5MB covers all of the data on the 321 pages read, including the other rows on the same pages as the ones your query returned, plus the index pages traversed to locate your data. Note also that these are logical reads, not physical reads, e.g. reading a cached page makes the read "cheaper"; while tuning, you can also pull CPU cost indicators, e.g. from a profiler trace.
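For concreteness, here's a minimal sketch of how to surface those counters; the table and query are placeholders (not from the original question) and the output line is approximate:

```
-- Minimal sketch: surfacing logical reads (table/query are placeholders).
SET STATISTICS IO ON;    -- per-table page-read counters
SET STATISTICS TIME ON;  -- CPU and elapsed time, for the CPU side

SELECT OrderId, CustomerId, Total
FROM dbo.Orders
WHERE CustomerId = 42;

-- The Messages tab then shows something like:
--   Table 'Orders'. Scan count 1, logical reads 321, physical reads 0, ...
-- 321 pages * 8KB per page = 2568KB, i.e. roughly the 2.5MB above.
```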
wrt how to determine the optimal read "target":
FWIW, I compare the actual reads against an optimal value, which I think of as the minimum number of pages needed to return your query's data in an "ideal" world.
e.g. if you work out roughly 5 rows per page from table x, and your query returns 20 rows, the "ideal" number of reads is 4, plus some overhead for navigating the index B-tree (assuming, of course, that the rows are clustered "perfectly" for your query), so utopia would be around 5-10 pages.
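One way to get the rows-per-page figure without hand arithmetic is the physical-stats DMV; a sketch below, where dbo.Orders is again a placeholder (note DETAILED mode scans the whole index, so run it on a test copy for big tables):

```
-- Sketch: measured rows-per-page at the leaf of each index on the table.
SELECT index_id,
       page_count,
       record_count,
       1.0 * record_count / NULLIF(page_count, 0) AS rows_per_page
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'DETAILED')
WHERE index_level = 0;  -- leaf pages only

-- e.g. ~5 rows/page and 20 rows returned => CEILING(20 / 5.0) = 4 leaf
-- pages, plus a few B-tree navigation pages, hence the ~5-10 page "utopia".
```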
For a performance-critical query, you can compare actual vs. utopian reads to guide micro-optimization, e.g. (see the sketches after this list):
- Can more rows be fitted onto each page of the cluster (table), e.g. by replacing fixed-width char() with varchar() where the padding is wasted, using varchar() rather than nvarchar() where Unicode isn't needed, using smaller integer types, etc.?
- Can the clustered index be changed so that fewer pages need to be fetched? (e.g. if the 20 rows in the query above were scattered across different pages, the reads would be > 4.)
- Otherwise (since you get only one clustered index), can covering indexes remove the need to visit the table (cluster) data at all? A covering index matched to your query has a higher "row" density per page.
- And for the indexes themselves, density improvements such as fill factor tuning or narrower index keys can mean fewer index pages to read.
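To make the first two bullets concrete, here's a hypothetical before/after sketch; the tables and columns are invented for illustration:

```
-- Before: wide fixed-width columns waste page space.
CREATE TABLE dbo.OrdersWide (
    OrderId    BIGINT    NOT NULL,  -- 8 bytes, even if INT would do
    Status     NCHAR(20) NOT NULL,  -- always 40 bytes (Unicode, padded)
    Reference  CHAR(100) NOT NULL,  -- 100 bytes even when mostly short
    CONSTRAINT PK_OrdersWide PRIMARY KEY CLUSTERED (OrderId)
);

-- After: narrower types mean more rows per 8KB page, so fewer reads.
CREATE TABLE dbo.OrdersNarrow (
    OrderId    INT          NOT NULL,  -- 4 bytes
    CustomerId INT          NOT NULL,
    Status     VARCHAR(20)  NOT NULL,  -- only as long as the data
    Reference  VARCHAR(100) NOT NULL,
    -- Clustering on the column the hot query filters by keeps those
    -- 20 rows together on ~4 pages instead of scattered across many.
    CONSTRAINT PK_OrdersNarrow PRIMARY KEY CLUSTERED (CustomerId, OrderId)
);
```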
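And for the last two bullets, a covering-index plus fill-factor sketch (again with hypothetical names; INCLUDE columns live only at the leaf, so the index key stays narrow):

```
-- Covering index: the query below is answered entirely from the index
-- leaf pages (higher row density than the cluster => fewer pages read).
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Covering
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, Total)   -- everything the query selects
    WITH (FILLFACTOR = 100);     -- read-mostly: pack pages full

SELECT CustomerId, OrderDate, Total
FROM dbo.Orders
WHERE CustomerId = 42;           -- no key lookup into the cluster

-- For write-heavy indexes a lower fill factor (e.g. 90) trades some
-- density for fewer page splits; for read-mostly ones keep it dense.
```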
You may find this article helpful.
HTH!