The keyword here is bottleneck optimization. Reading in a script consists of:
- opening the file
- reading in its contents
- closing the file
- parsing the contents
Of these steps, reading is by far the fastest (I am not sure about closing; it is a syscall, but you do not have to wait for it to complete). Even if reading were 10% of the whole load step (which I think is a generous overestimate), halving it by removing all comments would give only a 5% improvement in loading. For the parser, skipping a line that starts with a # is not a measurable slowdown either. And after parsing, the comments are gone, so there can be no slowdown at runtime.
Now imagine that removing all comments could improve "reading in the script" by 5% (a really optimistic estimate, see above). How large is the share of "reading in the script" in the total runtime of the script? It depends on what the script does, of course, but since Perl scripts usually read at least one other file, that share is at most 50%, and since Perl scripts usually do quite a bit more than just read files, an honest estimate lands somewhere around 1%. So the expected efficiency gain from removing all comments is at most (very optimistically) 2.5%, and realistically closer to 0.05%. And the scripts where it actually gains more than 1% are fast already, because they do almost nothing, so once again you would be optimizing in the wrong place.
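The arithmetic above is just Amdahl-style bookkeeping: the overall gain is the step's share of total runtime multiplied by the local gain on that step. A minimal sketch (the function name and the example numbers are illustrative, taken from the estimates above):

```python
def overall_gain(share_of_runtime: float, local_gain: float) -> float:
    """Overall runtime improvement when a step that accounts for
    `share_of_runtime` of total time is sped up by `local_gain`
    (both expressed as fractions, e.g. 0.05 for 5%)."""
    return share_of_runtime * local_gain

# Very optimistic case: reading is 50% of runtime, improved by 5%.
print(f"{overall_gain(0.50, 0.05):.2%}")  # prints 2.50%

# Honest case: reading is about 1% of runtime.
print(f"{overall_gain(0.01, 0.05):.2%}")  # prints 0.05%
```

This is why the comment-stripping idea caps out at a few percent even under the rosiest assumptions: the step being optimized is simply too small a slice of the total.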
Closing keyword: bottleneck optimization.
Svante