Simply put, "Directed Translation Syntax" means controlling the entire compilation (translation) process using a syntax recognizer (parser).
Conceptually, the process of compiling a program (translating it from source code to machine code) starts with a parser that builds a parse tree, and then transforms that parse tree through a sequence of tree or graph transformations, each of which is largely independent, ending in a final, simplified tree or graph that is traversed to produce the machine code.
This view, while nice in theory, has the drawback that if you try to implement it directly, you need enough memory to hold at least two copies of the entire tree or graph at once. Back when the Dragon Book was written (and when much of this theory was hashed out), computer memories were measured in kilobytes and 64 KB was a lot, so compiling large programs that way was a real problem.
With Syntax-Directed Translation, you organize all of the graph transformations around the order in which the parser recognizes the parse tree. Instead of building a complete parse tree, the parser builds small pieces of it and feeds those pieces into the later passes of the compiler, ultimately producing a small piece of machine code, before continuing the parsing process to build the next piece of the parse tree. Because only small amounts of the parse tree (or of the graphs derived from it) exist at any one time, much less memory is needed. And because the syntax recognizer is the master sequencer that drives all of this (deciding the order in which everything happens), this is called syntax-directed translation.
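To make that concrete, here is a minimal sketch of my own (not from any particular compiler, all names hypothetical): a recursive-descent parser in C that emits stack-machine instructions the moment each construct is recognized, without ever building a parse tree.

    #include <ctype.h>
    #include <stdio.h>

    static const char *src;               /* current position in the input */

    static void expr(void);

    static void factor(void)
    {
        if (isdigit((unsigned char)*src)) {
            printf("PUSH %c\n", *src++);  /* action: emit code immediately */
        } else if (*src == '(') {
            src++;                        /* skip '(' */
            expr();
            src++;                        /* skip ')' */
        }
    }

    static void term(void)
    {
        factor();
        while (*src == '*') {
            src++;
            factor();
            printf("MUL\n");              /* fires once both operands are parsed */
        }
    }

    static void expr(void)
    {
        term();
        while (*src == '+') {
            src++;
            term();
            printf("ADD\n");
        }
    }

    int main(void)
    {
        src = "1+2*3";
        expr();   /* prints: PUSH 1, PUSH 2, PUSH 3, MUL, ADD */
        return 0;
    }

Each printf here stands in for whatever a later pass of the compiler would do with that fragment; the point is that the parser's control flow is what sequences those passes.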
Since this is such an effective way to keep memory usage down, people even designed languages around it to make it simpler; the ideal was a "single-pass" compiler that could actually do the whole process, from parsing to machine-code generation, in one go.
Nowadays memory isn't at such a premium, so there is less pressure to force everything into a single pass. Instead, you generally use Syntax-Directed Translation only for the front end: parsing the syntax, doing type checking and other semantic checks, and a few simple transformations, all driven by the parser and producing some internal form (three-address code, trees, or the like), and then having separate optimization and back-end passes that are independent (and so not syntax-directed). Even then, you could argue that these later passes are at least partly syntax-directed, since the compiler may be organized to work on large chunks of the input (such as entire functions or modules), pushing each chunk through all the passes before continuing with the next chunk of input.
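As an illustration of the kind of internal form such a front end might hand off, here is a hedged sketch (hypothetical names, not any real compiler's API) of emitting three-address code, where each instruction has at most one operator and results land in compiler-generated temporaries:

    #include <stdio.h>

    static int temp_count = 3;   /* pretend t1..t3 already hold a, b, c */

    /* emit one three-address instruction and return the number of the
       temporary that holds its result */
    static int emit(const char *op, int lhs, int rhs)
    {
        int t = ++temp_count;
        printf("t%d = t%d %s t%d\n", t, lhs, op, rhs);
        return t;
    }

    int main(void)
    {
        /* what a front end might produce for "a + b * c" */
        int t = emit("*", 2, 3);   /* t4 = t2 * t3 */
        emit("+", 1, t);           /* t5 = t1 + t4 */
        return 0;
    }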
Tools such as yacc are designed around the idea of Syntax-Directed Translation: the tool generates a syntax recognizer that directly runs fragments of code ("actions" in the tool's parlance) as productions (fragments of the parse tree) are recognized, without ever building an actual "tree". These actions can directly invoke what are logically later passes of the compiler, and then return to continue parsing. The main driver loop that governs all of this is the parser's token-reading state machine.
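A minimal yacc grammar in this style might look roughly like the following (a sketch of mine, not taken from the tool's documentation; the actions are ordinary C, and the file should build with yacc or bison plus a C compiler):

    /* sdt.y: every rule carries an action that runs the moment the
       parser reduces by that rule, so code is emitted as parsing
       proceeds and no parse tree is ever built. */
    %{
    #include <ctype.h>
    #include <stdio.h>
    int yylex(void);
    void yyerror(const char *msg) { fprintf(stderr, "%s\n", msg); }
    %}
    %token NUM
    %left '+'
    %left '*'
    %%
    input : expr            { printf("PRINT\n"); }
          ;
    expr  : expr '+' expr   { printf("ADD\n"); }     /* action fires on reduce */
          | expr '*' expr   { printf("MUL\n"); }
          | '(' expr ')'
          | NUM             { printf("PUSH %d\n", $1); }
          ;
    %%
    int yylex(void)                    /* a toy lexer: single digits and operators */
    {
        int c = getchar();
        while (c == ' ') c = getchar();
        if (isdigit(c)) { yylval = c - '0'; return NUM; }
        if (c == '\n' || c == EOF) return 0;
        return c;
    }
    int main(void) { return yyparse(); }

Feeding it "1+2*3" prints PUSH 1, PUSH 2, PUSH 3, MUL, ADD, PRINT, which is exactly the order in which the parser recognizes each piece of the input.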