This is either hyperbole, a generalization, or a joke, or Chandler’s notion of “perfectly reasonable performance” (using modern C++ toolchains/libraries) is unacceptable for my programs.
I find this a rather narrow class of optimizations. Penalties exist outside this area that cannot be ignored, given the actual complexity and designs found in real programs. Heap allocation was one example, in the `getline` case. A given optimization may or may not apply to the program in question, no matter how hard you try to enable it. Real-world data structures hold references to memory that may alias. You can reduce aliasing, but it is impractical to believe you can eliminate it (from the optimizer’s point of view).
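To make the aliasing point concrete, here is a minimal sketch (the function and its names are mine, not from the talk or this thread): if the optimizer cannot prove the two pointers don’t overlap, it must either emit runtime overlap checks or give up on transformations like vectorization.

```cpp
#include <cstddef>

// Sketch: if `dst` and `src` might alias, the compiler must assume the
// store through dst[i] could modify memory that a later src[i] reads,
// so it emits overlap checks or skips vectorization entirely.
void scale(float* dst, const float* src, float k, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}
```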
Of course, return by value (RBV) can be great - it just doesn’t fit all cases. Even the link you referenced showed how to avoid a ton of allocations/frees. Real programs, and the data structures found in them, are much more complex.
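To illustrate both sides, a sketch under my own assumptions (`make_names` and `fill_names` are hypothetical names, not from the linked article):

```cpp
#include <string>
#include <vector>

// Return by value: NRVO/move semantics make this cheap for a single call.
std::vector<std::string> make_names() {
    std::vector<std::string> v{"a", "b", "c"};
    return v;  // no deep copy: NRVO, or at worst a move
}

// But called in a hot loop, each call still allocates a fresh buffer.
// An out-parameter reuses existing capacity across calls, avoiding the
// allocate/free churn - in the spirit of the linked advice.
void fill_names(std::vector<std::string>& out) {
    out.clear();  // keeps the heap capacity from previous iterations
    out.insert(out.end(), {"a", "b", "c"});
}
```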
Later in the conversation, he goes on to criticize the use of member functions (re: `S::compute()`). Sure, removing them can make sense, but is it really reasonable to avoid these language features entirely just because it makes the optimizer’s job easier? No. Will it always lead to more readable programs? No. Do these code transformations always lead to significantly faster programs? No. Are the changes needed to transform your codebase always worth the investment? Sometimes. Can you take away some of his points and make more informed decisions affecting your existing or future codebases? Yes.
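For concreteness, a hedged sketch of the kind of rewrite being discussed (only the name `S::compute()` comes from the thread; the body is my invention):

```cpp
struct S {
    int x = 0;
    int compute() const { return x * 2; }  // the member-function style
};

// The "optimizer-friendly" alternative: a free function with the data
// passed explicitly. Here the two are semantically identical, and any
// decent compiler inlines both - which is why the blanket advice buys
// little on its own.
int compute_free(const S& s) { return s.x * 2; }
```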
Sometimes it helps to break down how your program will actually execute, or what it would look like in C.
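As a rough mental model of the “what it looks like in C” exercise (names illustrative, real ABI details vary): a non-virtual member call is essentially a free function taking an explicit `this` pointer.

```cpp
// C-style lowering of s.compute(): the object pointer becomes an
// explicit first parameter.
struct S_c { int x; };

int S_compute(const struct S_c* self) { return self->x * 2; }

// Call site: S_compute(&s) instead of s.compute().
```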
The optimizer will not solve all performance problems. You should not rewrite programs on the assumption that the ones you are dealing with are “completely braindead, broken designs”, nor should you assume that using RBV will always result in “perfectly reasonable performance.” You can use new language features and make the optimizer’s job simpler, and while there is something to gain there, there are often more important optimizations to invest your time in.
It’s good to consider the proposed changes; ideally, you would measure their impact on actual execution time and on the source code before accepting the suggestions.
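A minimal sketch of what “measure first” can look like (`work` is a stand-in for the code path under test; a real benchmark also needs warm-up runs and something like Google Benchmark’s `DoNotOptimize` to stop dead-code elimination):

```cpp
#include <chrono>
#include <cstdint>

// Runs `work` for `iters` iterations, returns average nanoseconds per call.
template <class F>
std::int64_t avg_ns(F&& work, int iters) {
    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) work();
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0)
               .count() / iters;
}
```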
On your example: even copying and assigning large structures around can have significant costs. Beyond the cost of running constructors and destructors (along with creating/cleaning up the resources they acquire and own, as noted in the link you cite), even simple things like eliminating unnecessary struct copies can save a lot of CPU by using references (where appropriate). A struct copy can be as cheap as a `memcpy` and still add up. These are not far-fetched problems; they show up in real programs, and they compound as your program’s complexity grows. Is cutting down on some memory allocations, plus the other optimizations, worth the cost, and does it lead to “perfectly reasonable performance”? Not always.
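To ground the copy-cost point, a small example (the 4 KB size and the names are mine):

```cpp
#include <array>
#include <cstdint>

struct Big {
    std::array<std::uint8_t, 4096> bytes{};  // illustrative large, trivially copyable payload
};

// By value: every call copies ~4 KB (compilers typically emit a memcpy).
std::uint8_t first_by_value(Big b) { return b.bytes[0]; }

// By const reference: a pointer-sized argument, no copy at all.
std::uint8_t first_by_ref(const Big& b) { return b.bytes[0]; }
```

Called once, the copy is noise; called millions of times per second on structs this size, it dominates the profile.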