Why exactly does calling the destructor a second time cause undefined behavior in C++?

As mentioned in this answer, merely calling the destructor a second time is already undefined behavior per 12.4/14 (3.8).

For example:

 class Class {
 public:
     ~Class() {}
 };

 // somewhere in code:
 {
     Class* object = new Class();
     object->~Class();
     delete object; // UB: at this point the destructor is invoked a second time
 }

In this example, the class is designed so that the destructor can safely be called several times — nothing like a double deletion can happen. The memory is still allocated at the point where delete is called — the first destructor call does not call ::operator delete() to free it.

For example, in Visual C++ 9 the above code appears to work. Even though the C++ standard labels this UB, it does not directly require anything to go wrong. So for the above code to actually break, some implementation- and/or platform-specific conditions must be met.

Why is this code broken and under what conditions?

+15
c++ memory-management undefined-behavior memory destructor
May 05 '10 at 8:26
16 answers

Destructors are not regular functions. Calling one does not call one function, it calls many functions. That is the magic of destructors. While you provided a trivial destructor with the sole purpose of making it hard to show how it might break, you did not demonstrate what the other functions that get called do. And neither does the standard. It is in those functions that things can potentially fall apart.

As a trivial example, suppose the compiler inserts code to track object lifetimes for debugging purposes. The constructor (which is also a magic function that does all kinds of things you did not ask for) stores some data somewhere that says "Here I am." Before the destructor is called, it changes that data to say "There I go." After the destructor is called, it gets rid of the information it used to find that data. So the next time you call the destructor, you get an access violation.

You could also construct examples involving virtual tables, but your sample code did not include any virtual functions to play with.

+3
May 05 '10 at 9:00

I think your question is really asking for the rationale behind the standard. Look at it the other way around:

  • Defining the behavior of calling the destructor twice creates work, possibly a lot of work.
  • Your example merely shows that in some trivial cases calling the destructor twice would not be a problem. That is true, but not very interesting.
  • You have not given a convincing use case (and I doubt you can) where calling the destructor twice is a good idea / simplifies the code / makes the language more powerful / cleans up the semantics / anything else.

So why, once again, should this not be undefined behavior?

+13
May 05 '10 at 8:44 a.m.

The most likely reason for the wording in the standard is that anything else would be much more complicated: the standard would have to define when a double destruction is even possible (and when it is not) — i.e. either with a trivial destructor, or with a destructor whose side effects can be discarded.

On the other hand, there is no benefit to defining this behavior. In practice you cannot profit from it, because you generally do not know whether a class destructor meets the above criteria or not. No general-purpose code can rely on it. It would be very easy to introduce bugs this way. And finally, what would it buy you? It would merely let you write sloppy code that does not track the lifetime of its objects — careless code, in other words. Why should the standard support that?




Will existing compilers/runtimes break your particular code? Probably not — unless they have special runtime checks to prevent illegal accesses (to catch malicious code, or simply to detect leaks).

+8
May 05 '10 at 9:03

The object no longer exists after calling the destructor.

So if you call it again, you are calling a method on an object that does not exist.

Why would this behavior ever be defined? The compiler may zero out the memory of the destroyed object for debugging/security/whatever reason, or reuse its memory for another object as an optimization, or anything else. The implementation is free to choose. Calling the destructor again is essentially calling a method on arbitrary raw memory — a Bad Idea (tm).

+8
May 05 '10 at 9:44 a.m.

When you use C++ facilities to create and destroy your objects, you agree to play by its object model, however it is implemented.

Some implementations may be more sensitive than others. For example, an interactive interpreted environment or a debugger may try harder to be introspective. That might even include a specific diagnostic for double destruction.

Some objects are more complicated than others. For example, virtual destructors with virtual base classes can be a bit hairy. The dynamic type of an object changes over the execution of a sequence of virtual destructors, if I remember correctly. That could easily lead to invalid state at the end.

It is easy enough to declare properly named functions to use instead of abusing the constructor and destructor. Object-oriented straight C is still possible in C++, and may be the right tool for some jobs... in any case, the destructor is not the right construct for every destruction task.

+4
May 05 '10 at 9:02

The following class fails on Windows on my machine if you call the destructor twice:

 class Class {
 public:
     Class() { x = new int; }
     ~Class() {
         delete x;
         x = (int*)0xbaadf00d;
     }
     int* x;
 };

I can imagine an implementation where your original code with a trivial destructor would fail too. For example, such an implementation could unmap destroyed objects from physical memory, so that any access to them triggers a hardware fault. Apparently Visual C++ is not one of those implementations, but who knows.

+3
May 05 '10 at 8:34

The standard, 12.4/14:

Once a destructor is invoked for an object, the object no longer exists; the behavior is undefined if the destructor is invoked for an object whose lifetime has ended (3.8).

I think this section is aimed at calling the destructor via delete. In other words, the gist of the paragraph is: "deleting an object twice is undefined behavior." That would be why your sample code appears to work fine.

However, the question is fairly academic. Destructors are meant to be invoked via delete (apart from objects allocated with placement new, as others have correctly observed). If you want to share code between the destructor and a second function, simply extract that code into a separate function and call it from your destructor.

+2
May 05 '10 at 8:36 a.m.

It is undefined because if it were not, every implementation would have to track, via some metadata, whether an object is still alive or not. You would have to pay that cost for every single object, which goes against basic C++ design principles.

+1
May 05 '10 at 9:50

Since what you are really asking for is a plausible implementation under which your code fails, suppose your implementation provides a handy debugging mode in which it tracks all memory allocations and all calls to constructors and destructors. After the explicit destructor call, it sets a flag saying the object has been destroyed. delete checks that flag and halts the program when it detects evidence of a bug in your code.

To make your code work the way you intended, this debugging implementation would have to special-case do-nothing destructors and skip setting the flag. That is, it would have to assume you are deliberately destroying twice because (you believe) the destructor does nothing, rather than assume you have accidentally destroyed twice and failed to notice the bug only because the destructor does nothing. Either you are careless or you are a rebel, and there is more mileage in debugging implementations that help the careless than in ones that indulge rebels ;-)

+1
May 05 '10 at 11:21 a.m.

One important implementation example that might break:

A conforming C++ implementation may support garbage collection. That has been a long-standing design goal. A GC may assume an object can be collected immediately once its dtor has run, so every dtor call would update the GC's internal bookkeeping. The second time the dtor is called for the same pointer, the GC data structures could become thoroughly corrupted.

+1
May 6 '10 at 9:39 a.m.

By definition, the destructor "destroys" an object, and destroying an object twice makes no sense.

Your example happens to work, but that hardly means it works in general.

0
May 05 '10 at 8:29 a.m.

I assume it was left undefined because most double destructions are dangerous, and the standardization committee did not want to add an exception to the standard for the relatively small number of cases where they need not be.

As for where your code might break: you may well find that it breaks in debug builds on some compilers; many compilers treat UB as "do whatever costs no performance for well-defined behavior" in release mode, and "insert checks to detect bad behavior" in debug builds.

0
May 05 '10 at 8:40

Basically, as already pointed out, calling the destructor a second time will fail for any class whose destructor does real work.

0
May 05 '10 at 8:42 a.m.

The reason is that your class might, for example, be reference-counted by a smart pointer, with the destructor decrementing the reference count. Once that count reaches 0, the actual object gets cleaned up.

But if you call the destructor twice, the count will be corrupted.

Same idea in other situations. Maybe the destructor writes 0s over a chunk of memory and then frees it (so you do not accidentally leave a user's password in memory). If you try to write to that memory again — after it has been freed — you will get an access violation.

It just makes sense for objects to be built once and destroyed once.

0
May 05 '10 at 8:45

The behavior is undefined because the standard made clear what a destructor is for, and did not decide what should happen if you misuse it. Undefined behavior does not necessarily mean "crashy smashy"; it just means the standard did not define it, so it is left to the implementation.

While I am not very fluent in C++, my gut tells me that an implementation is free either to treat the destructor as just another member function, or to actually destroy the object when the destructor is called. So it may break on some implementations but not on others. Who knows, it is undefined (watch out for demons flying out of your nose if you try).

0
May 05 '10 at 8:46 a.m.

The reason is that without this rule, your programs would become less strict. Being more strict — even when it is not enforced at compile time — is good, because in return you get more predictable program behavior. This is especially important when the source code of the classes is not under your control.

Many concepts — RAII, smart pointers, and plain memory allocation/deallocation in general — rely on this rule. The number of times the destructor is called (exactly once) is essential to them. So the documentation for such things can simply promise: "Use our classes according to the rules of the C++ language, and they will work correctly!"

If there were no such rule, it would have to read: "Use our classes according to the rules of the C++ language — and, by the way, do not call the destructor twice — and then they will work correctly." Lots of specifications would have to read that way. The concept is too important for the language to leave out of the standard document.

That's why. It has nothing to do with binary internals (which are described in Potatoswatter's answer).

0
May 05 '10 at 9:41 a.m.


