Interpreting an AST is usually much slower than running machine code that does the same thing. A typical factor is 20.
The advantage is that an AST is faster to produce, so it takes less time to get code running than with most compilers. AST interpreters are also simpler than compilers, because the entire code-generation phase can be skipped.
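To make "interpreting the AST" concrete, here is a minimal sketch of a tree-walking evaluator in C for arithmetic expressions (all type and function names are hypothetical, not from any particular system). The overhead is visible in the structure itself: every node costs a tag dispatch, a recursive call, and pointer chasing.

```c
/* Minimal tree-walking AST interpreter for arithmetic (sketch;
 * hypothetical names). Every eval() call dispatches on a tag and
 * chases child pointers, which is where the interpretive overhead
 * comes from. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { NODE_NUM, NODE_ADD, NODE_MUL } NodeKind;

typedef struct Node {
    NodeKind kind;
    double value;              /* used when kind == NODE_NUM */
    struct Node *left, *right; /* used for the binary operators */
} Node;

static Node *mk(NodeKind kind, double value, Node *left, Node *right) {
    Node *n = malloc(sizeof *n);
    n->kind = kind;
    n->value = value;
    n->left = left;
    n->right = right;
    return n;
}

/* Evaluate by walking the tree. */
static double eval(const Node *n) {
    switch (n->kind) {
    case NODE_NUM: return n->value;
    case NODE_ADD: return eval(n->left) + eval(n->right);
    case NODE_MUL: return eval(n->left) * eval(n->right);
    }
    return 0.0; /* not reached */
}

int main(void) {
    /* The AST for (1 + 2) * 3. */
    Node *expr = mk(NODE_MUL, 0,
                    mk(NODE_ADD, 0,
                       mk(NODE_NUM, 1, NULL, NULL),
                       mk(NODE_NUM, 2, NULL, NULL)),
                    mk(NODE_NUM, 3, NULL, NULL));
    printf("%g\n", eval(expr)); /* prints 9 */
    return 0;
}
```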
So if you have a program that does not do heavy computation, it will be up and running sooner with an interpreter. On the other hand, if you have code that runs frequently, or runs continuously in an environment where CPU cycles are at a premium, it is best to compile.
Some programming environments (many Lisps, for example) include both an interpreter, for developing code, since it supports fast debugging cycles, and a compiler, for producing fast code once development is done. Some of these systems let you mix interpreted and compiled code freely, which is interesting in its own right.
Compiling to bytecode is a middle ground: bytecode is faster to compile to than machine code, yet faster to execute than an AST. On top of that, modern bytecode interpreters often compile to native code "just in time" while your program runs. That is where, for example, the Sun HotSpot JVM gets its name: it compiles the "hot spots" in the Java bytecode to native code at runtime to speed the program up.
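For contrast, here is a minimal sketch of a bytecode dispatch loop, assuming a toy stack-based instruction set (the opcodes are hypothetical). The program is now a flat array rather than a tree, so execution is a single loop with a switch instead of recursive pointer chasing:

```c
/* Minimal bytecode dispatch loop (sketch; toy stack machine with
 * hypothetical opcodes). The same (1 + 2) * 3, now as a flat
 * instruction stream. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_HALT };

static double run(const int *code, const double *consts) {
    double stack[64];
    int sp = 0;                                /* stack pointer */
    for (int pc = 0; ; pc++) {                 /* program counter */
        switch (code[pc]) {
        case OP_PUSH: stack[sp++] = consts[code[++pc]]; break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void) {
    int code[] = { OP_PUSH, 0, OP_PUSH, 1, OP_ADD,
                   OP_PUSH, 2, OP_MUL, OP_HALT };
    double consts[] = { 1.0, 2.0, 3.0 };
    printf("%g\n", run(code, consts)); /* prints 9 */
    return 0;
}
```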
Responding to questions in the comments:
A question came up about the factor of 20 mentioned above. The supporting references for this number are old, because few modern language systems use pure AST interpreters. (Shells are a notable exception, but most of them were developed long ago, and speed benchmarks for them are uncommon.) Pure AST interpreters are simply too slow. My context is Lisp interpreters; I have implemented a couple. Here, for example, is one set of Scheme benchmarks. The columns corresponding to AST interpreters are fairly easy to pick out. If there is demand, I can post more of the same kind from the ACM digital library archive.
Another example: Perl uses a heavily optimized AST interpreter. Adding 10 million floats in a tight loop takes about 7 seconds on my machine. Compiled C (gcc -O1) takes about 1/20 of a second.
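For reference, the C side of that comparison is essentially a tight loop like the sketch below; this is an approximation of the benchmark, not the original source, and exact timings will vary by machine.

```c
/* Approximate C side of the comparison: add 10 million floats in a
 * tight loop. Build with: gcc -O1 add.c (timings vary by machine;
 * "add.c" is just a placeholder name). */
#include <stdio.h>

int main(void) {
    double sum = 0.0;
    for (long i = 0; i < 10000000L; i++)
        sum += 1.0;      /* one floating-point addition per iteration */
    printf("%f\n", sum); /* use the result so the loop isn't removed */
    return 0;
}
```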
As an example, a commenter brought up adding 4 variables. That analysis forgot the cost of the lookups. One clear dividing line between interpreters and compilers is precomputed addresses or frame offsets for symbols. In a "pure" interpreter there are none. So adding 4 numbers requires 4 lookups in the runtime environment, usually in a hash table: at least 100 instructions. In good compiled code, adding 4 integers on x86 takes 2 instructions, plus one more to store the result.
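Here is a sketch of what those lookups mean in practice, using a toy hash-table environment in C (Env, env_set, env_lookup, and the rest are hypothetical names, not a real library's API):

```c
/* Sketch of per-variable lookup cost in a "pure" interpreter, with a
 * toy open-addressing hash table as the runtime environment. */
#include <stdio.h>
#include <string.h>

#define SLOTS 16
typedef struct { const char *name; double val; } Slot;
typedef struct { Slot slots[SLOTS]; } Env;

static unsigned hash(const char *s) {
    unsigned h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % SLOTS;
}

static void env_set(Env *e, const char *name, double val) {
    unsigned i = hash(name);
    while (e->slots[i].name && strcmp(e->slots[i].name, name) != 0)
        i = (i + 1) % SLOTS;                   /* linear probing */
    e->slots[i].name = name;
    e->slots[i].val = val;
}

/* Assumes the name is present; hash + probe + compare every time. */
static double env_lookup(const Env *e, const char *name) {
    unsigned i = hash(name);
    while (strcmp(e->slots[i].name, name) != 0)
        i = (i + 1) % SLOTS;
    return e->slots[i].val;
}

int main(void) {
    Env env = { 0 };
    env_set(&env, "a", 1); env_set(&env, "b", 2);
    env_set(&env, "c", 3); env_set(&env, "d", 4);

    /* The interpreter's path for a + b + c + d: 4 runtime lookups
     * before any arithmetic happens. The compiled path is just the
     * additions themselves, with the addresses fixed in advance. */
    double r = env_lookup(&env, "a") + env_lookup(&env, "b")
             + env_lookup(&env, "c") + env_lookup(&env, "d");
    printf("%g\n", r); /* prints 10 */
    return 0;
}
```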
There are many shades of gray between "pure" AST interpreters and compiled machine code. Depending on the language, it may be possible to compile symbol offsets into the AST. This is sometimes called "fast links." The technique typically speeds things up by a factor of two or more. Then there are compile-to-bytecode-and-go systems such as Python, PHP, Perl, and Ruby 1.9+. Their bytecode is effectively threaded code (a single opcode can stand for something quite complex), so it sits closer to an AST than to machine code. Then there are the JITing bytecode interpreters I mentioned above.
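Here is a sketch of the "fast links" idea, assuming a hypothetical variable node that resolves its symbol to a frame offset on first use and caches it, so later evaluations index the frame directly:

```c
/* Sketch of "fast links": a variable node resolves its symbol to a
 * frame offset once, then reuses the cached offset. All structures
 * here are hypothetical. */
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *names[8]; /* symbols bound in this frame */
    double slots[8];      /* their values */
    int count;
} Frame;

typedef struct {
    const char *name;
    int offset;           /* cached frame offset; -1 = unresolved */
} VarNode;

static double var_eval(VarNode *v, const Frame *f) {
    if (v->offset < 0) {                  /* slow path, first use only */
        for (int i = 0; i < f->count; i++)
            if (strcmp(f->names[i], v->name) == 0) {
                v->offset = i;
                break;
            }
    }
    return f->slots[v->offset];           /* fast path: direct indexing */
}

int main(void) {
    Frame f = { { "a", "b" }, { 1.5, 2.5 }, 2 };
    VarNode a = { "a", -1 };
    printf("%g\n", var_eval(&a, &f)); /* resolves and caches offset 0 */
    printf("%g\n", var_eval(&a, &f)); /* reuses the cached offset */
    return 0;
}
```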
The point is that the factor of 20 for pure AST interpreters is one bookend and compiled machine code is the other. In between there are many variants, each with its own advantages and disadvantages.