Is the DBMS (MySQL, SQL Server, ...) interpreted or compiled?

I mean, in terms of SQL queries: are they compiled or interpreted at a low level? How does it work internally: is an SQL statement interpreted or compiled?


It usually works as follows:

     SQL String --- [Optimizer] ---> Execution Plan --- [Execution] ---> Result

I personally like to compare the optimizer (query planner) to a compiler: it converts an SQL statement into something that can be executed more easily. However, the result is not natively executable on the chip. This "compilation" is quite expensive, much like compiling C++ code, because this is the stage where the various options are evaluated: join order, which index to use, and so on. It is good practice to avoid this step whenever possible by using bind parameters.
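As an illustration, here is a minimal sketch using Python's built-in sqlite3 module (the table and data are invented for the example): with a `?` placeholder the SQL text stays identical across calls, so the engine can reuse the already-compiled statement instead of optimizing a fresh SQL string each time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

# The SQL text is the same on every call; only the bound value changes,
# so the driver can reuse the prepared statement from its cache.
names = []
for user_id in (1, 2):
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    names.append(row[0])
print(names)  # prints ['alice', 'bob']
```

Concatenating the value into the SQL string instead would produce a new statement text each time, forcing a re-optimization (and inviting SQL injection).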

Then the execution plan is executed by the database. At that point the strategy has already been fixed; execution just carries it out. This part interprets the execution plan, not the SQL.
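To see the fixed plan that the execution phase will interpret, most engines offer an EXPLAIN command. A small sketch with SQLite (the exact plan text varies by engine and version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

# EXPLAIN QUERY PLAN returns the optimizer's chosen strategy
# instead of running the query itself.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT val FROM t WHERE id = ?", (1,)
).fetchall()
for row in plan:
    print(row[-1])  # e.g. a SEARCH step using the primary key
```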

In the end, it is somewhat similar to Java or .NET, where compilation turns the source code into a binary form that can be interpreted more easily. Ignoring the JIT for the sake of this argument, running a Java program means interpreting that bytecode.


I used this approach to explain the performance benefits of bind parameters (Oracle) in my free ebook, Use The Index, Luke.


In modern SQL environments this is a phased approach: at a certain point in the workflow you decide whether to reuse the existing compiled plan, or to run all the steps again because a better plan exists for a particular combination of arguments.

I think this is a trade-off between the (re)compilation cost and the runtime of the resulting executable plan. Depending on the complexity of the query, recompiling with the specific argument values at run time may not be worth the effort if the execution time of the existing plan is already very low thanks to predictably minimal resource consumption (for example, read two rows and return).
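A toy sketch of that reuse-vs-recompile decision (pure Python, all names hypothetical, not a real DBMS): compile once per SQL text, then hand out the cached plan for new argument sets.

```python
plans = {}          # cache: SQL text -> compiled plan
compile_count = 0   # counts expensive optimizer invocations

def get_plan(sql):
    """Return a cached plan, 'compiling' only on the first sighting."""
    global compile_count
    if sql not in plans:
        compile_count += 1            # the expensive optimizer step
        plans[sql] = ("plan", sql)    # stand-in for a real execution plan
    return plans[sql]

get_plan("SELECT name FROM users WHERE id = ?")
get_plan("SELECT name FROM users WHERE id = ?")  # cache hit: no recompile
print(compile_count)  # prints 1
```

A real engine would additionally decide, per argument set, whether the cached plan is still good enough or whether a recompilation with those specific values would pay off.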

With higher query complexity and estimated resource consumption (many huge tables, decisive index choices, possible table scans), the granularity of your statistics comes into play: if you have selectivity, outliers, range selectivity, average field sizes, physical table sizes, and so on, the optimizer may reach completely different conclusions for different sets of arguments.
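Such statistics are what commands like ANALYZE collect. A sketch with SQLite, where the gathered figures land in the sqlite_stat1 table (other engines store their statistics elsewhere):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, grp INTEGER)")
conn.execute("CREATE INDEX idx_grp ON t (grp)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(i, i % 3) for i in range(300)])

# ANALYZE gathers the row counts and index selectivity figures
# that the optimizer consults when costing alternative plans.
conn.execute("ANALYZE")
stats = conn.execute("SELECT idx, stat FROM sqlite_stat1").fetchall()
print(stats)  # e.g. a row for idx_grp with its row count and selectivity
```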

Calculating the best plan for a 25-way join statement with 10+ variable arguments can take its time and resources. If the result is faster and more efficient than the one-size-fits-all version, it is worth the effort, especially when that particular set of arguments is a game changer and the query will be re-executed frequently.

Finally, your mileage may vary from vendor to vendor ;)
