However, I am starting to think it may be better to let the application crash at the point where the NULL pointer is first dereferenced, inside the same function: the crash dump will then show exactly what happened, and thorough testing will find the callers that pass bad arguments.
If you ever consider this, use assert() instead. That way the application is guaranteed to crash cleanly and produce a core dump.
Throwing an exception that is never caught unwinds the stack, which can make diagnosing the internal problem afterwards more complicated, unless, of course, every function along the way catches the exception and enriches the error message so the problem can be traced later. But if you are already handling exceptions that thoroughly, there is no reason to treat the error as fatal.
In such cases, I prefer assert() backed by exhaustive unit tests; conventional high-level tests rarely cover the code well enough. For release builds I disable assert() (with -DNDEBUG), because in production, for such a rare case, customers usually prefer a module that limps along with a flaw over one that crashes. (The sad reality of commercial software.)
NB: If the code is performance-sensitive, assert() may be the better choice anyway: you can compile assertions out of release builds, but you cannot remove the cost of exception handling.
Summarizing: if NULL really cannot happen, use assert(). If you already have exception handling in place, treat NULL as a regular internal error.
PS: Another approach to invalid arguments (rarely implemented, but I have seen real examples of it) is to restructure the code so that an invalid argument cannot occur in the first place. Life-support / safety-critical / carrier-grade software uses such design tricks to reduce error-handling overhead.
Dummy00001