The question is, "why do new languages still have statements, rather than expressions exclusively?", right?
Programming language constructs address various concerns, e.g.
- simple grammar
- simple implementation
- simple semantics
being among the more theoretical design goals, and
- execution speed of the resulting compiled code
- compilation speed
- program size
- ease of use (e.g. easy to read)
being among the more practical ones ...
These design goals do not have clear-cut definitions; e.g. a short grammar is not necessarily the one with the cleanest structure, so which one is simpler?
(Considering your example:)
For ease of use or readability, a language designer may require that you write "return" before the value (or rather, the expression) that a function results in. That's a return statement. If you can leave out the "return", it is still implied, and it can still be regarded as a return statement (it is just less obvious in the code). If it is regarded as an expression instead, that implies substitution semantics, as e.g. in Scheme, but probably not in Python. From a syntactic point of view, it makes sense to distinguish between expressions and statements where a "return" is required.
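To make the contrast concrete, here is a small Python sketch of my own (the names are made up): a `def` marks its result with an explicit `return` statement, while a `lambda` body is a single expression whose value simply *is* the result.

    # Statement style: the result must be marked explicitly with "return".
    def double_stmt(x):
        return x * 2

    # Expression style: the lambda body *is* the result; no "return" keyword.
    double_expr = lambda x: x * 2

    print(double_stmt(21))   # 42
    print(double_expr(21))   # 42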
Looking at machine code (which I haven't done a lot of, so I could be wrong), it seems to me that there are only statements, no expressions.
E.g. your example:

    ld  r1, 5
    ld  r2, 5
    add r3, r1, r2
    ret r3

(I'm making that up, obviously.)
So, for people who like to think about how the machine actually works (von Neumann architecture), or who want to simplify compilation to such a target architecture, statements are the way to go.
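As a rough sketch of what that flattening looks like (my own illustration, with made-up temporaries `t1`..`t3`), a compiler typically rewrites a nested expression into one simple statement per operation, three-address style:

    def compute(a, b, c, d):
        # expression form would be: return (a + b) * (c + d)
        t1 = a + b        # one statement per operation,
        t2 = c + d        # each intermediate result stored in a temporary
        t3 = t1 * t2
        return t3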
There is also the specifically "evil" (as in non-functional) statement: assignment. It is needed to express loops without recursion. According to Dijkstra, loops have simpler semantics than recursion (see E. W. Dijkstra, A Discipline of Programming, 1976). Loops are also faster and use less memory than recursion, if your language is not optimized for tail recursion (as Scheme, for example, is).
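A minimal Python sketch of my own showing why assignment matters here: the loop version relies on re-assigning `total`, while the recursive version needs no assignment but grows the call stack, which CPython does not optimize away (no tail calls).

    # Loop version: driven by the assignment statement "total = ...".
    def sum_to_loop(n):
        total = 0
        for i in range(1, n + 1):
            total = total + i
        return total

    # Recursive version: no assignment, but one stack frame per call.
    def sum_to_rec(n):
        return 0 if n == 0 else n + sum_to_rec(n - 1)

    print(sum_to_loop(100000))    # works fine
    # print(sum_to_rec(100000))   # RecursionError: CPython has no tail-call optimization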