A lexer is used to split the input into tokens, while a parser is used to build an abstract syntax tree from this sequence of tokens.
In principle you could treat each character as a token and use the parser directly, but it is often convenient to have a parser that only needs to look at one token ahead to decide what to do next. Therefore, a lexer is usually used to divide the input into tokens before the parser sees it.
A lexer is usually described using simple regex rules that are checked in order. There are tools like lex that can automatically generate lexers from such a description.
[0-9]+      Number
[A-Za-z]+   Identifier
+           Plus
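A lexer of this kind can be sketched in a few lines of Python. This is only illustrative (not what lex generates); the rule names mirror the table above, and the `tokenize` function tries each regex in order at the current position, so earlier rules win:

```python
import re

# Each rule is (compiled regex, token name), checked in order.
RULES = [
    (re.compile(r"[0-9]+"), "Number"),
    (re.compile(r"[A-Za-z]+"), "Identifier"),
    (re.compile(r"\+"), "Plus"),
]

def tokenize(text):
    tokens = []
    pos = 0
    while pos < len(text):
        if text[pos].isspace():           # skip whitespace between tokens
            pos += 1
            continue
        for pattern, name in RULES:       # first matching rule wins
            m = pattern.match(text, pos)
            if m:
                tokens.append((name, m.group()))
                pos = m.end()
                break
        else:
            raise SyntaxError(f"unexpected character {text[pos]!r} at {pos}")
    return tokens
```

For example, `tokenize("foo + 42")` yields `[("Identifier", "foo"), ("Plus", "+"), ("Number", "42")]`.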
The parser, on the other hand, is usually described with a grammar. Again, there are tools like yacc that can generate parsers from such a description.
expr ::= expr Plus expr | Number | Identifier
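A hand-written parser for this grammar might look like the sketch below (my own illustration, not yacc output). Note that the rule is left-recursive, which a naive recursive-descent parser cannot handle directly, so the sketch parses one `Number` or `Identifier` and then folds any following `Plus` operands into a left-associative tree:

```python
def parse_expr(tokens):
    # tokens is a list of (name, value) pairs, e.g. from a lexer.
    pos = 0

    def atom():
        nonlocal pos
        name, value = tokens[pos]
        if name in ("Number", "Identifier"):
            pos += 1
            return (name, value)
        raise SyntaxError(f"expected Number or Identifier, got {name}")

    tree = atom()
    while pos < len(tokens) and tokens[pos][0] == "Plus":
        pos += 1
        tree = ("Plus", tree, atom())   # build a left-associative AST node
    if pos != len(tokens):
        raise SyntaxError("unexpected trailing tokens")
    return tree
```

Given the tokens for `foo + 42`, this returns the tree `("Plus", ("Identifier", "foo"), ("Number", "42"))`.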
— hammar