Question about optimization in C++

I read that the C++ standard allows optimizations to the point where they can actually hamper the expected functionality. Specifically, I am talking about return value optimization, where you might have some logic in the copy constructor, but the compiler elides the call.

I think this is somewhat bad: someone who does not know about this behavior could spend quite some time tracking down a bug that results from it.

I want to know whether there are other situations where aggressive compiler optimization can change functionality.

For example, something like:

int x = 1; x = 1; x = 1; x = 1; 

can be optimized to a single x = 1;

Suppose I have:

 class A { /* ... */ }; A b; A a = b; a = b; a = b; 

Could this also be optimized? This is probably not the best example, but I hope you see what I mean...

+7
6 answers

Copy elision is the only case where the compiler is allowed to optimize in a way that observably changes side effects. Do not rely on copy constructors being called; the compiler may elide those calls.

For everything else, the “as-if” rule applies: the compiler may optimize as it sees fit, as long as the observable behavior is the same as if it had not optimized at all.

(“Observable behavior” includes, for example, data written to the console or the file system, but not running time or CPU fan speed.)
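
A minimal sketch of what this means in practice (the Tracer class here is made up for illustration): with return value optimization, the side effect in the copy constructor may simply never happen, and since C++17 the elision in this exact pattern is even mandatory.

 #include <iostream>

 struct Tracer {
     Tracer() { std::cout << "default ctor\n"; }
     Tracer(const Tracer&) { std::cout << "copy ctor\n"; } // observable side effect
 };

 Tracer make() {
     return Tracer(); // the copy/move may be elided (mandatory since C++17)
 }

 int main() {
     Tracer t = make(); // "copy ctor" is typically never printed
 }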

+12

It can be optimized, yes. But you still have some control over the process. For example, consider this code:

 int x = 1;
 x = 1;
 x = 1;
 x = 1;
 volatile int y = 1;
 y = 1;
 y = 1;
 y = 1;

Provided that neither x nor y is used below this fragment, VS 2010 generates this code:

     int x = 1;
     x = 1;
     x = 1;
     x = 1;
     volatile int y = 1;
 010B1004 xor eax, eax
 010B1006 inc eax
 010B1007 mov dword ptr [y], eax
     y = 1;
 010B100A mov dword ptr [y], eax
     y = 1;
 010B100D mov dword ptr [y], eax
     y = 1;
 010B1010 mov dword ptr [y], eax

That is, the optimizer removes all of the assignments to x and keeps all four assignments to y. That is simply how volatile works, but the point is that you still have some control over what the compiler does for you.

Whether it is a class or a primitive type, it all depends on the compiler and on how sophisticated its optimizer is.

Another piece of code to study:

 class A {
 private:
     int c;
 public:
     A(int b) { *this = b; }
     A& operator=(int b) { c = b; return *this; }
 };

 int _tmain(int argc, _TCHAR* argv[]) {
     int b = 0;
     A a = b;
     a = b;
     a = b;
     return 0;
 }

Visual Studio 2010's optimizer reduces all of this code to nothing: in a release build with “full optimization”, _tmain does nothing and immediately returns zero.

+3

This will depend on how class A is implemented, on whether the compiler can see the implementation, and on whether it is smart enough. For example, if operator=() in class A has observable side effects, removing the calls would change the behavior of the program, so the optimization is not allowed.
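
A sketch of that case (the class and its output are made up for illustration): because the console output is observable, the compiler must keep every call.

 #include <iostream>

 struct A {
     int c = 0;
     A& operator=(const A& other) {
         std::cout << "assigned\n"; // observable side effect
         c = other.c;
         return *this;
     }
 };

 int main() {
     A a, b;
     a = b; a = b; a = b; // must print "assigned" three times
 }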

+1

Optimization, done properly, does not “delete calls to copying or assignment”. It transforms one state machine into another state machine with the same external behavior.

Now, if you write

 a=b; a=b; a=b; 

what the compiler does depends on what operator= actually is. If the compiler can prove that the call has no chance of changing the state of the program (where “state of the program” means everything that lives longer than the current scope, or that code outside the scope can access), it will eliminate the call. If this cannot be proven, the call stays in place.
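
A sketch of the “cannot be proven” case (assuming the definition lives in a separate, hypothetical translation unit; the snippet compiles but needs that second file to link): without seeing the body, and without link-time optimization, the compiler must assume side effects and keep all three calls.

 struct A {
     A& operator=(const A&); // defined in another .cpp file, body not visible here
 };

 int main() {
     A a, b;
     a = b; a = b; a = b; // all three calls are kept
 }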

Whatever the compiler does, do not worry too much: the compiler cannot (by contract) change the external behavior of a program or of any part of it.

0

I don't know much about C++, but I'm currently reading Compilers: Principles, Techniques, and Tools.

Here is a snippet from their section on code optimization:

The machine-independent code-optimization phase attempts to improve the intermediate code so that better target code results. Usually “better” means faster, but other objectives may be desired, such as shorter code or target code that consumes less power. For example, a straightforward algorithm generates the intermediate code (1.3), using an instruction for each operator in the tree representation that comes from the semantic analyzer. A simple intermediate code generation algorithm followed by code optimization is a reasonable way to generate good target code. The optimizer can deduce that the conversion of 60 from integer to floating point can be done once and for all at compile time, so the inttofloat operation can be eliminated by replacing the integer 60 with the floating-point number 60.0. Moreover, t3 is used only once, to transmit its value to id1, so the optimizer can transform (1.3) into the shorter sequence (1.4).

 (1.3)   t1 = inttofloat(60)
         t2 = id3 * t1
         t3 = id2 + t2
         id1 = t3

 (1.4)   t1 = id3 * 60.0
         id1 = id2 + t1

All I want to say is that code optimization happens at a much deeper level, and that optimizing code as simple as this does not affect what your code does.
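
A C++ analogue of the same idea, as a sketch (the function and parameter names simply mirror the book's example): the integer-to-float conversion of the constant can be folded once and for all at compile time.

 float f(float id2, float id3) {
     // naive code would convert the integer 60 to floating point at run time;
     // constant folding turns this into the equivalent of id2 + id3 * 60.0f
     return id2 + id3 * 60;
 }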

-1

I had problems with constant variables and const_cast. The compiler produced unexpected results when the variable was used to compute something else. The constant variable was optimized away: its original value was folded in as a compile-time constant. Truly “unexpected behavior”. OK, maybe not ;) (modifying a const object through const_cast is undefined behavior).

Example:

 #include <iostream>
 using namespace std;

 int main() {
     const int x = 2;
     const_cast<int&>(x) = 3; // undefined behavior: modifying a const object
     int y = x * 2;           // the compiler may fold x to 2 here
     cout << y << endl;       // typically prints 4, not 6
 }
-1
