In modern SQL environments this is a staged decision: at a certain point in the workflow the engine decides whether to reuse the already compiled block or to run all the steps again because a better plan exists for a particular combination of argument values.
I think of it as a trade-off between (re)compilation cost and the runtime of the compiled executable code. Depending on the complexity of the statement, recompiling with the specific argument values at hand may not be worth the effort if the execution time of the existing code is already negligible thanks to predictably minimal resource consumption (e.g., read two rows and return).
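To make the trade-off concrete, here is a minimal sketch in SQL Server's T-SQL (an assumption on my part; other vendors expose similar knobs under different names, and the table/column names are invented for illustration). The same statement can either reuse the cached plan or ask the optimizer for a fresh, value-specific plan:

    -- Without a hint the engine reuses whatever plan is already cached for this statement.
    SELECT OrderId, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;

    -- OPTION (RECOMPILE) builds a fresh plan for the actual @CustomerId value on every
    -- execution, paying the compile cost in exchange for a value-specific plan.
    SELECT OrderId, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);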
With higher query complexity and estimated resource consumption (many huge tables, decisive index choices, possible table scans), the granularity of your statistics comes into play; i.e., if you track selectivity, outliers, range selectivity, average field sizes, physical table sizes, etc., the optimizer may come to completely different conclusions for different sets of argument values.
Calculating the best plan for a 25-way join with 10+ variable arguments can take considerable time and resources. If the result is faster and more efficient than the one-size-fits-all version, it is worth the effort, especially if the given set of arguments contains game changers and the query will be re-executed frequently.
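As a hypothetical example of such a game changer (again SQL Server syntax; the schema and data distribution are assumptions): when the value distribution is heavily skewed, a plan cached for the common value can be a poor fit for a rare one, and forcing a recompile per call can pay for itself.

    -- Suppose 95% of rows have Status = 'SHIPPED' and only a fraction of a percent have
    -- Status = 'CANCELLED'. A cached plan built for 'SHIPPED' (table scan) is a poor fit
    -- for 'CANCELLED' (index seek), so a per-call plan can win despite the compile cost.
    CREATE PROCEDURE dbo.GetOrdersByStatus
        @Status varchar(20)
    WITH RECOMPILE   -- build a new plan for every execution
    AS
        SELECT OrderId, CustomerId, Total
        FROM dbo.Orders
        WHERE Status = @Status;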
Finally, your mileage may vary from vendor to vendor ;)
Gerd Nachtsheim