Does delete (non-array form) know the total amount of memory allocated by new or new[]?

This question was originally asked as part of "Does delete[] free the memory at once after calling the destructors?" but has been moved out as a separate question.

It seems (correct me if this is wrong) that the only difference between delete and delete[] is that delete[] retrieves information about the size of the array and calls the destructor on every element, while delete destroys only the first one. In particular, delete also has access to the information about how much total memory was allocated by new[].

If you do not care about destroying the dynamically allocated array elements individually and only want the memory allocated by either new or new[] to be freed, delete seems to be able to do the same job.
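The destructor side of that difference is easy to observe with a type that logs its destructor calls (a minimal sketch; Noisy is just an illustrative name):

    #include <iostream>

    struct Noisy {
        ~Noisy() { std::cout << "~Noisy\n"; }
    };

    int main() {
        Noisy *p = new Noisy[3];
        delete[] p;  // prints "~Noisy" three times: every element is destroyed
        // "delete p;" here instead would be undefined behavior; on many
        // implementations it would run only the first destructor before
        // releasing the block.
    }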

In How does delete[] "know" the size of the operand array?, the accepted answer has a comment from @AnT, which I quote:

Also note that the array element counter is only needed for types with a non-trivial destructor. For types with a trivial destructor the counter is not stored by new[] and, of course, is not retrieved by delete[].

This comment suggests that the delete expression as a whole knows the amount of allocated memory, and therefore knows how much memory to free in one shot at the end, even if the memory holds an array of elements. Therefore, if you write

    auto pi = new int[10];
    ...
    delete pi;

Although the standard considers this to be UB, on most implementations it should not leak memory (though this is not portable), right?

3 answers

That is right. The difference between delete and delete[] is that the latter knows the number of elements allocated in the array and calls the destructor for each of them. To be 100% correct, both actually "know" it: the number of elements allocated in the array equals the size of the allocated memory (which both know) divided by the size of an object.

One may ask, then, why we need both delete[] and delete - why can't delete do the same calculation? The answer is polymorphism. The size of the allocated memory will not be equal to the size of the object's static type if the deletion is performed through a pointer to a base class.

delete[], on the other hand, does not take the possibility of a polymorphic object into account, and this is why dynamic arrays should never be treated as polymorphic objects (that is, allocated and stored through a pointer to the base class).
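To make both points concrete, here is a small sketch (the type names are mine): deleting a single object through a base-class pointer works because the virtual destructor supplies the dynamic type, while delete[] through a base-class pointer would stride through the array with the wrong element size:

    #include <iostream>

    struct Base {
        virtual ~Base() {}
        int a;
    };

    struct Derived : Base {
        int payload[8];  // the dynamic type is larger than the static type
    };

    int main() {
        // Single object: fine. Through the virtual destructor, delete finds
        // the dynamic type (and hence the real block size), even though
        // sizeof(*p) is only sizeof(Base).
        Base *p = new Derived;
        std::cout << sizeof(*p) << " vs " << sizeof(Derived) << '\n';
        delete p;

        // Array: undefined behavior. delete[] would assume sizeof(Base)
        // strides and run destructors at the wrong addresses.
        // Base *arr = new Derived[3];
        // delete[] arr;
    }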

As for memory leaks: in practice, delete will not leak memory when used on arrays of POD types.
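This can be watched in action by routing the global allocation functions through malloc/free and logging the raw pointers (a sketch; the delete below is still undefined behavior per the standard, and some compilers warn about the mismatch):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    void *operator new(std::size_t n) {
        void *p = std::malloc(n);
        if (!p) throw std::bad_alloc();
        return p;
    }
    void *operator new[](std::size_t n) {
        void *p = operator new(n);
        std::printf("new[]  -> %p\n", p);
        return p;
    }
    void operator delete(void *p) noexcept {
        std::printf("delete -> %p\n", p);
        std::free(p);
    }
    void operator delete[](void *p) noexcept { std::free(p); }

    int main() {
        int *pi = new int[10];  // int is trivially destructible: no cookie
        delete pi;              // UB per the standard, but the logged addresses
                                // match: the exact malloc'd block goes to free
    }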


In the C++ standard, calling delete on something allocated with new[] is simply undefined behavior, as is calling delete[] on something allocated with new.

In practice, new[] allocates memory through something like malloc, just as new does. delete destroys the pointed-to object and then sends the memory to something like free. delete[] destroys all the objects in the array and then sends the memory to something like free. Extra memory may or may not be allocated by new[] and passed along to delete[], to give delete[] the number of elements to be destroyed.

If actual malloc/free is used, some implementations will tolerate a pointer anywhere into the malloc'd block; others will not, and the exact value received from malloc must be passed to free for the behavior to be defined. This raises a problem: if new[] allocated some extra room for the array size/element count and stuck it at the front of the block, and delete is then passed a pointer-to-first-element, delete will pass free a different pointer than the one new[] obtained from malloc. (I believe there are architectures where something like this happens.)
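Both effects - the optional element-count "cookie" and the resulting pointer offset - can be observed by replacing the global array allocation functions (a sketch; the behavior described in the comments is typical of the Itanium C++ ABI and is not guaranteed):

    #include <cstdio>
    #include <cstdlib>
    #include <new>

    void *operator new[](std::size_t n) {
        void *p = std::malloc(n);
        if (!p) throw std::bad_alloc();
        std::printf("new[] asked for %zu bytes, block starts at %p\n", n, p);
        return p;
    }

    void operator delete[](void *p) noexcept {
        std::printf("delete[] hands %p back to free\n", p);
        std::free(p);
    }

    struct Trivial    { int x; };
    struct NonTrivial { int x; ~NonTrivial() {} };

    int main() {
        Trivial *t = new Trivial[10];   // typically exactly 10 * sizeof(Trivial)
        std::printf("array starts at %p\n", static_cast<void*>(t));
        delete[] t;

        NonTrivial *nt = new NonTrivial[10];  // typically extra cookie bytes
        std::printf("array starts at %p\n", static_cast<void*>(nt));  // offset!
        delete[] nt;
    }

For NonTrivial, the array typically starts a few bytes past the malloc'd block, which is exactly why handing that array pointer to the single-object delete (and on to free) would be wrong.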

Like most undefined behavior, this means you can no longer rely on auditing the code you write; instead you must audit both the generated assembly and the standard C/C++ libraries you interact with before you can determine whether the behavior you want is what actually happens. In practice that is a burden that will not be met, so your code ends up with negative value, even if you verify that everything works as you expect at the moment you check. How do you ensure that an identical audit (of the resulting binary and its behavior) happens every time the compiler version, standard library version, OS version, or system libraries change?


The concrete reason to avoid any construct that provokes undefined behavior, even when you cannot see how it could possibly go wrong, is that the compiler is entitled to assume that undefined behavior never happens. For example, given this program ...

    #include <iostream>
    #include <cstring>

    int main(int argc, char **argv)
    {
        if (argc > 0) {
            size_t *x = new size_t[argc];
            for (int i = 0; i < argc; i++)
                x[i] = std::strlen(argv[i]);
            std::cout << x[0] << '\n';
            delete x;
        }
        return 0;
    }

... the compiler can emit the same machine code as for ...

    int main(void)
    {
        return 0;
    }

... because the undefined behavior on the argc > 0 control path means that the compiler can assume that path never executes.

