(Note: this answer was moved here from "Memory corruption causes undefined behavior?" - you'll probably need to read that question to get the proper background for this answer.)
It seems to me that this part of the Standard expressly permits:

- having your own memory pool in which you placement-new objects, then releasing/reusing the whole lot without wasting time on destructor calls, as long as you don't depend on side effects of the object destructors.

- libraries that allocate a little memory and never release it, presumably because their functions/objects could be used from destructors of static objects and registered exit handlers, and it's not worth buying into whole-of-program ordered destruction or transient "phoenix"-like rebirth every time those accesses occur.
What I can't understand is why the Standard chooses to leave the behaviour undefined when there are dependencies on side effects, rather than simply saying those side effects won't have happened and letting the program have defined or undefined behaviour as you'd normally expect given that premise.
We can still consider what the Standard says is undefined behaviour, though. The crucial part is:

"...any program that depends on the side effects produced by the destructor has undefined behavior."
The Standard's §1.9/12 explicitly defines side effects as follows (the italics below are the Standard's, indicating the introduction of a formal definition):

"Accessing an object designated by a volatile glvalue (3.10), modifying an object, calling a library I/O function, or calling a function that does any of those operations are all *side effects*, which are changes in the state of the execution environment."
If there's no such dependency in your program, the behaviour is defined.
One example of a dependency fitting the scenario in 3.8 p4, where the need for or sense of undefined behaviour isn't obvious, is:
```cpp
#include <iostream>

struct X
{
    ~X() { std::cout << "bye!\n"; }
};

int main()
{
    new X();
}
```
The debatable issue here is whether the X object above is ever "released" for the purposes of 3.8 p4, given that its memory is presumably only reclaimed by the OS after the program terminates - it's not clear from my reading of the Standard whether that stage of the process is within the scope of the Standard's behavioural requirements (my quick search of the Standard didn't resolve this). Personally, I'd hazard that 3.8 p4 does apply here, partly because as long as it's ambiguous, a compiler writer may feel entitled to allow undefined behaviour in this scenario; but even if the code above doesn't constitute a release, the scenario is easily amended à la...
```cpp
int main()
{
    X* p = new X();
    *(char*)p = 'x';    // the storage is reused, still without calling ~X()
}
```
Either way, though, the destructor above has a side effect - per "calling a library I/O function"; further, the program's observable behaviour can arguably be said to "depend" on it, in the sense that the stream buffers the destructor would have affected get flushed during normal termination. But is "depends on the side effects" only meant to allude to situations where the program would quite obviously have undefined behaviour if the destructor didn't run? I'd err on the side of the former interpretation, especially as the latter case wouldn't need a dedicated paragraph in the Standard to stipulate that the behaviour is undefined. Here's an example with obviously-undefined behaviour:
```cpp
#include <new>

int* p_;

struct X
{
    ~X() { if (b_) p_ = 0; else delete p_; }
    bool b_;
};

X x{true};

int main()
{
    p_ = new int();
    new (&x) X{false};   // reuse x's storage without calling x.~X() first
    delete p_;
}
// at termination, ~X() runs with b_ == false: delete p_ again
```
When the X destructor is implicitly called during termination, b_ will be false, so ~X() will delete p_ - a pointer that's already been freed - producing undefined behaviour. If x.~X() had been called before the reuse, p_ would have been set to 0, and the subsequent deletes would be safe. In that sense the program's correct behaviour can be said to depend on the destructor, and the behaviour is clearly undefined, but haven't we just created a program that conforms to the behaviour described by 3.8 p4 of its own accord, rather than having that behaviour as a consequence of 3.8 p4...?
More elaborate problem scenarios - too long to provide code for - might include, say, a weird C++ library with reference counters inside file stream objects that had to hit 0 to trigger some processing, such as flushing I/O or joining background threads - where failure to perform those actions risked not only failing to produce the output explicitly requested via the destructor, but also failing to flush other buffered output from the stream, or, on some OS with a transactional filesystem, even a rollback of earlier I/O operations - such issues could change observable program behaviour or even leave the program hung.
Note: it's not necessary to prove that any real code behaves strangely on any existing compiler/system; the Standard explicitly reserves the right for compilers to exhibit undefined behaviour... and that's all that matters. This isn't something you can reason your way around and then choose to ignore the Standard over - it may be that C++14 or some other revision changes this stipulation, but as long as it's there, then if there's even arguably some "dependency" on the destructor's side effects there's the potential for undefined behaviour (which, of course, is itself allowed to be defined by a particular compiler/implementation, so it doesn't automatically mean that every compiler is obliged to do something bizarre).