Is the speed of an interpreted programming language implementation still a main (or even meaningful) criterion today?
What would be the optimal balance between speed and abstraction?
Should scripting languages ignore performance entirely and simply focus on rapid development, readability, and so on?
I ask because I am currently developing some experimental languages and interpreters, and I am unsure how much weight to give performance in their design.
Speed still matters, but you should not write a fast interpreter, let alone your own JIT, from scratch. If performance becomes important, target an existing managed platform instead; the two obvious candidates are the CLR (plus the DLR for dynamic languages) and the JVM. That way you get a JIT, a garbage collector, a large standard library, and so on essentially for free.
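To make the interpretation overhead concrete, here is a hypothetical C sketch (not from the original answer) of a naive tree-walking evaluator for arithmetic expressions; the per-node tag dispatch and pointer chasing it pays on every evaluation is exactly the cost that a JIT on the CLR or the JVM removes by compiling the code to machine code once.

```c
/* Hypothetical sketch: a naive tree-walking evaluator for arithmetic
 * expressions.  Every operation pays for a tag switch plus pointer
 * chasing; this per-node dispatch is the overhead a JIT removes by
 * compiling the expression to machine code once. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { NODE_CONST, NODE_ADD, NODE_MUL } NodeKind;

typedef struct Node {
    NodeKind kind;
    double value;              /* used when kind == NODE_CONST */
    struct Node *left, *right; /* used for NODE_ADD / NODE_MUL */
} Node;

static Node *make_const(double v) {
    Node *n = malloc(sizeof *n);
    n->kind = NODE_CONST;
    n->value = v;
    n->left = n->right = NULL;
    return n;
}

static Node *make_binary(NodeKind kind, Node *l, Node *r) {
    Node *n = malloc(sizeof *n);
    n->kind = kind;
    n->value = 0.0;
    n->left = l;
    n->right = r;
    return n;
}

/* The interpreter's hot path: recursive dispatch on the node tag. */
static double eval(const Node *n) {
    switch (n->kind) {
    case NODE_CONST: return n->value;
    case NODE_ADD:   return eval(n->left) + eval(n->right);
    case NODE_MUL:   return eval(n->left) * eval(n->right);
    }
    return 0.0; /* unreachable */
}

int main(void) {
    /* (2 + 3) * 4: several dispatches for one trivial result.
     * Node memory is deliberately leaked to keep the sketch short. */
    Node *expr = make_binary(NODE_MUL,
                             make_binary(NODE_ADD, make_const(2), make_const(3)),
                             make_const(4));
    printf("%g\n", eval(expr));
    return 0;
}
```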
Whether speed matters depends on what the language is for. A good example is Lua: it was designed from the start to be embedded (C and Lua are used together), so raw interpreter speed was never its primary goal. The parts of a program that genuinely need to be fast are written in C, while everything else stays in the scripting language, so the Lua + C combination gives you rapid development and acceptable performance at the same time.
Also keep in mind that speed is a property of a particular implementation rather than of the language itself: an interpreter that is slow today can become dramatically faster tomorrow without the language changing. The obvious example is JavaScript.
What matters at least as much for an embeddable language is its API. With Lua, a typical application is roughly 90% Lua and the rest C (the performance-critical parts), and that split works because the C API is small and convenient; there are also C++ wrappers around that API that make embedding even easier.
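As a minimal sketch of that split, assuming Lua 5.3 or 5.4 and its standard C API (the function name fast_sum is made up for illustration): the host registers one performance-critical routine in C and leaves the rest of the logic to a Lua script.

```c
/* Minimal embedding sketch, assuming Lua 5.3/5.4 and its standard C API:
 * the host registers one performance-critical routine in C (fast_sum is a
 * made-up name) and leaves the rest of the program logic to Lua. */
#include <stdio.h>
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>

/* The "hot" routine kept in C; Lua code calls it as fast_sum(n). */
static int fast_sum(lua_State *L) {
    lua_Integer n = luaL_checkinteger(L, 1);
    lua_Integer sum = 0;
    for (lua_Integer i = 1; i <= n; i++)
        sum += i;
    lua_pushinteger(L, sum);
    return 1;                          /* one result left on the Lua stack */
}

int main(void) {
    lua_State *L = luaL_newstate();    /* fresh interpreter state */
    luaL_openlibs(L);                  /* load the standard Lua libraries */

    lua_pushcfunction(L, fast_sum);    /* expose the C routine to scripts */
    lua_setglobal(L, "fast_sum");

    /* The "90%": ordinary application logic stays in Lua. */
    if (luaL_dostring(L,
            "print('sum of 1..1000000 is ' .. fast_sum(1000000))") != LUA_OK) {
        fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
    }

    lua_close(L);
    return 0;
}
```

The same pattern scales: the C side stays a handful of registered functions, while the bulk of the program remains plain Lua.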
Ultimately, “premature optimization is the root of all evil”: the language needs compelling features first, and only then does it need to be fast. Being fast is worth little if using the language feels like writing assembler and the user has to implement half an operating system on top of it.