What exactly happens on delete of an object (gcc)? (When does a double delete crash?)

Please note that my question is not about solving a problem — I already have a theory about what happened, and it made me curious:

What exactly happens when you delete an object, with gcc as the compiler?

Last week I investigated a crash where a race condition caused an object to be deleted twice.

The crash occurred when the object's virtual destructor was called, because the pointer to the virtual function table (vtable) had already been overwritten.

Is the vtable pointer overwritten by the first delete?

If not, is the second delete safe (in the sense that it will not crash), as long as no new allocation has reused the memory in between?

I am wondering why the problem I had was not noticed earlier, and my only explanations are that either the vtable pointer is overwritten immediately by the first delete, or the second delete does not always crash.

(The first would mean the crash always occurs at the same place whenever the race occurs; the second would mean it usually does not crash when the race occurs, and a problem only arises if a third thread overwrites the deleted object in the meantime.)


Edit / Update:

I ran a test; the following code crashes with a segfault (gcc 4.4, on both i686 and amd64):

```cpp
class M {
private:
    int* ptr;
public:
    M() { ptr = new int[1]; }
    virtual ~M() { delete[] ptr; }  // delete[] to match new[]
};

int main(int argc, char** argv) {
    M* m = new M();
    delete m;
    delete m;  // double delete: undefined behavior, crashes here
}
```

If I remove the virtual from the destructor, the program is instead aborted by glibc, because it detects the double free. With virtual, the crash happens in the indirect call made to invoke the destructor, because the vtable pointer is invalid.

On both amd64 and i686, the vtable pointer still points into valid (heap) memory, but the value stored there is invalid (a counter? It is very small, e.g. 0x11 or 0x21), so the call (or jmp, when the compiler applied tail-call optimization) lands in an invalid area.

```
Program received signal SIGSEGV, Segmentation fault.
0x0000000000000021 in ?? ()
(gdb) bt
#0  0x0000000000000021 in ?? ()
#1  0x000000000040083e in main ()
```

Thus, under the above conditions, the vtable pointer is ALWAYS overwritten by the first delete, so the next delete jumps into nirvana if the class has a virtual destructor.

Tags: c++, gcc, probability, delete-operator, postmortem-debugging
3 answers

It depends very much on the implementation of the memory allocator itself, not to mention any application-specific corruption, such as the clobbered vtable pointer of your object. There are many memory allocation schemes, differing in capabilities and in resistance to double free(), but they all share one property: your application will crash some time after the second free().

The reason for the crash is usually that the memory allocator stores a small amount of implementation-specific metadata immediately before (header) and after (footer) each allocated chunk. The header usually records the size of the chunk and the address of the next chunk; the footer is usually a pointer back to the chunk's header. Freeing a chunk typically at least involves checking whether the neighboring chunks are free. Thus, your program will crash if:

1) the pointer to the next chunk has been overwritten, and the second free() segfaults while trying to access that next chunk, or

2) the footer of the previous chunk has been modified, and accessing the previous chunk's header causes a segfault.

If the application survives, it means that free() has either corrupted memory somewhere else or inserted a free chunk that overlaps an already-free chunk, which will lead to data corruption later. Eventually your program will segfault in one of the subsequent free() or malloc() calls that touch the damaged memory areas.


Deleting something twice is undefined behavior — there is nothing more to explain, and it is generally pointless to reason further about what happens. It may crash the program, it may not, but it is always wrong, and the program is in an unknown state after you have done it.


By executing delete twice (or free() twice), the memory may already have been reallocated, and the second delete can then corrupt memory again. The size of an allocated memory block is often stored immediately before the block itself.

A related pitfall: if you delete a derived-class object through a base-class pointer and the destructor is not declared virtual, only the ~BaseClass() destructor runs, leaving any memory allocated by DerivedClass intact and leaked. This assumes DerivedClass allocates additional resources beyond those of BaseClass that need to be freed.

i.e.

```cpp
BaseClass* obj_ptr = new DerivedClass;  // allowed due to polymorphism
// ...
delete obj_ptr;  // without a virtual destructor this calls ~BaseClass() and NOT ~DerivedClass()
```
