In the C++ standard, calling `delete` on something allocated with `new[]` is simply undefined behavior, just as calling `delete[]` on something allocated with `new` is.
In practice, `new[]` gets its memory from something like `malloc`, just as `new` does. `delete` destroys the pointed-to object and then hands the memory back to something like `free`. `delete[]` destroys every object in the array and then hands the memory back to something like `free`. `new[]` may or may not allocate extra memory alongside the array so that `delete[]` can find out how many elements it has to destroy.
If an actual `malloc`/`free` pair is used underneath, note that some implementations will let a pointer anywhere inside the malloc'd block be passed to `free`, while others will not: for well-defined behavior, the exact value you got from `malloc` must be the value you pass to `free`. This creates a problem if `new[]` allocated some extra room for the array's element count and stored it in front of the block: when the pointer-to-first-element is passed to `delete`, `delete` will hand `free` a different pointer than the one `new[]` obtained from `malloc`. (I believe there are implementations where exactly this happens.)
Like most undefined behavior, you can no longer rely on auditing just the code you write; instead, you would have to audit both the generated assembly and the standard C/C++ libraries you link against before you could determine whether the behavior you are counting on actually occurs. In practice that burden will never be met, so your code ends up wrong, even if everything appeared to work when you actually checked. And how would you ensure that an identical check (of the resulting binary and its behavior) happens every time the compiler version, standard-library version, OS version, system libraries, or the compiler itself changes?
— Yakk