Check function arguments for NULL and throw, or just let the application blow up?

In previous large-scale applications that required high reliability and robustness, I always checked pointer function arguments that were documented as "must never be NULL". If the argument actually was NULL, I threw a std::invalid_argument exception (or similar) in C++ and returned an error code in C.
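A minimal sketch of that check-and-throw pattern; the function and type names here are made up for illustration and are not from the original code:

    #include <stdexcept>

    struct Config { int retries; };

    // Documented as: cfg must never be NULL.
    void ApplyConfig(const Config* cfg) {
        if (cfg == nullptr) {
            // C++ style: fail loudly at the boundary.
            // The C equivalent would return an error code such as EINVAL instead.
            throw std::invalid_argument("ApplyConfig: cfg must not be NULL");
        }
        // ... use cfg->retries ...
    }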

However, I'm starting to think that it may be better to just let the application blow up the first time the NULL pointer is dereferenced in that same function - the crash dump will show what happened - and rely on thorough testing to find the bad callers.

One problem with not checking for NULL and letting the application blow up is that if the pointer is not actually dereferenced in this function, but rather stored for later use, the eventual crash happens out of context and is much harder to diagnose.
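A small sketch of that stored-pointer case; the class is invented purely for illustration:

    struct Callback { void (*fn)(int); };

    class Dispatcher {
    public:
        // No check here: a NULL callback is silently accepted...
        void SetCallback(const Callback* cb) { cb_ = cb; }

        // ...and the crash only happens much later, far from the bad call site.
        void Fire(int event) { cb_->fn(event); }

    private:
        const Callback* cb_ = nullptr;
    };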

Any thoughts or recommendations on this?

Edit 1: I forgot to mention that most of our code is a library used by third-party developers who may or may not know about our internal error-handling rules. But the functions are still documented correctly!

+6
c++ c validation exception-handling error-handling
11 answers

This is an easy call. Getting a NULL pointer when one is not expected is a clear sign of either a program bug or a badly compromised program state. Either one requires the programmer to do something about it. Throwing an exception only ever works out well if it is not caught. If it is caught, you lose very valuable diagnostic information: it wipes out the all-important call stack that tells you how the program got into this state. If it is not caught, then there is no discernible difference between the C++ exception and the hardware exception.

The risk that the exception might be caught somewhere argues for not throwing at all and simply letting the program die.

This trade-off is very different in a runtime environment that can produce a call stack after an exception is thrown - normal in managed environments, not available in C/C++.

+1

My own preference is to document that the function cannot accept a NULL pointer and leave it at that. Note that there is no guarantee that dereferencing a NULL pointer will crash - if you insist on a diagnostic, throwing an exception is the only portable solution.

Having said that, I have found that unwanted NULL pointers are very, very rare in my own code. If you have lots of them, it indicates problems elsewhere, probably in the basic design of the code.

+10

Definitely throw a C++ exception - it can be logged and diagnosed much more easily than your program blowing up later.

Consider the following situation. Your program runs for ten days and then hits the null pointer. If it blows up on the eleventh day, you have to work out after the fact that the problem was a null pointer passed the day before. But if you throw an exception and its text is logged, you can simply look in the log and see immediately where to start - you know for sure that the problem was a null pointer. Just think how different that is when the program is at a customer's site rather than in a convenient debugging environment.
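A small sketch of that log-then-diagnose workflow; the function, type and message are invented for illustration:

    #include <iostream>
    #include <stdexcept>

    struct Record { int id; };

    void Store(const Record* rec) {
        if (rec == nullptr) {
            // The exception text names the failed precondition explicitly.
            throw std::invalid_argument("Store(): rec is a null pointer");
        }
        // ... persist *rec ...
    }

    int main() {
        try {
            Store(nullptr);
        } catch (const std::exception& e) {
            // In a long-running service this line would go to the log with a timestamp.
            std::cerr << "ERROR: " << e.what() << '\n';
        }
    }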

+5

My preference is not to check for NULL pointers. A check against NULL tests for only one of billions of possible invalid argument values; the rest are undetectable and potentially much more dangerous. Unless your function's documentation states that NULL is a valid argument with a special meaning, anyone who calls it with NULL should assume that all hell will break loose.

If your function is very short and used in tight loops, checking for NULL can be a significant performance penalty. One very real-world example is the standard C mbrtowc function, used to decode multibyte characters. If your application needs to do character-at-a-time decoding, a dedicated UTF-8 decoder can be 20% faster than an optimal UTF-8 implementation of mbrtowc, simply because the latter is required to do a bunch of useless checks at the beginning - mostly NULL pointer checks - and it gets called many times on very small pieces of data.

Even if most of your functions operate on large chunks of data, it sets a bad precedent to tell callers that it is okay to pass NULL pointers. What if later on you need functions that operate on small fragments?

Finally, unless you are coding for embedded systems, NULL is the least dangerous invalid pointer to dereference, since it immediately leads to an operating-system-level exception (a signal, etc.). If you don't want to rely on this, why not just use assert(ptr); so that you can easily enable/disable the checks for debugging?
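A minimal sketch of that assert-based approach; the function name and its documentation line are made up for illustration:

    #include <cassert>
    #include <cstddef>

    // Documented as: buf must never be NULL.
    // Compile with -DNDEBUG to strip the check from release builds.
    std::size_t CountNonZero(const unsigned char* buf, std::size_t len) {
        assert(buf != NULL && "CountNonZero: buf must not be NULL");
        std::size_t count = 0;
        for (std::size_t i = 0; i < len; ++i) {
            if (buf[i] != 0) ++count;
        }
        return count;
    }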

+3

If a function parameter is a pointer that must not be NULL, shouldn't you change the parameter to a reference? This probably won't change the reporting when someone does sneak a NULL pointer through, but it gives static type checking a chance to catch the error at compile time.

+2

If the first thing you do is check that the parameter is not NULL, then you should probably be using a reference instead.

    void Foo(Bar* const pBar)
    {
        if (pBar == NULL) {
            throw error();
        }
        // Now do something with pBar
    }

Change it to this:

    void Foo(Bar& bar)
    {
        // Now use bar, safe in the knowledge that it is not NULL.
    }

Remember that you can still end up with a NULL reference:

    Bar* const pBar = NULL;
    Foo(*pBar);  // Will crash *inside* Foo, rather than at the point of the dereference.

So you may still end up needing to check the parameter anyway.

+2

However, I'm starting to think that it may be better to just let the application blow up the first time the NULL pointer is dereferenced in that same function - the crash dump will show what happened - and rely on thorough testing to find the bad callers.

If you are going to go that way, you should use assert() instead. That way the application is guaranteed to crash properly and produce a core dump.

Throwing an exception that is not going to be caught will unwind the stack, and diagnosing the problem post-mortem becomes harder - unless, of course, every function along the way catches the exception, enriches the error message appropriately, and rethrows it so that the problem can be traced later. But if you are already handling exceptions to that extent, then it makes little sense to treat the error as fatal.

In such cases I prefer to use assert(), backed by exhaustive unit testing; conventional high-level tests rarely cover the code well enough. For release builds I disable assert() (using -DNDEBUG), because in production, in such a rare case, customers usually prefer a module that limps along with the flaw over one that crashes. (The sad reality of commercial software.)

N.B. If the code is performance-sensitive, then assert() may well be the better choice: you can disable assertions in release builds, but you cannot compile away the exception handling.

To summarize: if NULL really cannot happen, use assert(). If exception handling is already in place, treat an unexpected NULL as a regular internal error.
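One possible way to combine the two approaches, sketched with a made-up helper macro; the macro name and error type are assumptions, not anything from the answer:

    #include <cassert>
    #include <cstddef>
    #include <stdexcept>
    #include <string>

    // Hypothetical helper: abort with a core dump in debug builds,
    // raise an internal error through the exception path in release builds.
    #define REQUIRE_NOT_NULL(ptr)                                            \
        do {                                                                 \
            assert((ptr) != nullptr);                                        \
            if ((ptr) == nullptr) {                                          \
                throw std::logic_error(std::string("internal error: ")       \
                                       + #ptr + " is NULL in " + __func__);  \
            }                                                                \
        } while (0)

    void Frobnicate(const int* values, std::size_t count) {
        REQUIRE_NOT_NULL(values);  // documented precondition: values != NULL
        // ... use values[0..count-1] ...
        (void)count;
    }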

P.S. Another approach to invalid argument handling (rarely implemented, but I have seen real examples of it in the past) is to restructure the code so that an invalid argument cannot be passed in the first place. Life-support / mission-critical / carrier-grade software uses such design tricks to reduce overhead.

+2

So the real question is: when writing a C++/C library, what issues should be considered in setting an exception/error-handling strategy, in particular when choosing between throwing an exception (C++ library) or returning an error code (C library) versus simply letting everything blow up when function arguments violate the documented constraints?

Design by contract, anyone? I think this is what Neil Butterworth is describing. The specification of a function says that if the argument values are within the stated constraints (preconditions), then the output will be within the stated constraints (postconditions) and no stated invariants will be violated. If the user of your library function violates the documented preconditions, then it is open season for the undefined behaviour so beloved of C++ people - and this, too, should be documented. You can instead specify that the function will catch and handle "invalid" argument values (via C++ exceptions / error codes), but then the question arises whether they are really invalid arguments any more, since the function's response to that condition is now part of its specified postconditions.
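A minimal sketch of what such a documented contract might look like in a library header; the function and wording are illustrative only:

    #include <cassert>
    #include <cstddef>

    // Precondition:  dst != NULL, src != NULL, len > 0.
    // Postcondition: returns the number of bytes copied (== len).
    // Passing NULL violates the contract: behaviour is undefined.
    // In debug builds the violation is caught by the assert below.
    std::size_t CopyBytes(unsigned char* dst, const unsigned char* src, std::size_t len) {
        assert(dst != NULL && src != NULL && len > 0);
        for (std::size_t i = 0; i < len; ++i) {
            dst[i] = src[i];
        }
        return len;
    }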

All of this applies to the publicly exposed code / binaries.

For debugging purposes you can use assertions and the like to catch invalid arguments, but this is really a check that the client code meets the preconditions. That client code may be the library user's code, or possibly library code itself, or both. As the library writer, you will have to weigh the potential issues for the various scenarios / use cases when making specification, design and implementation decisions.

+2

If your application does logging, you can simply write something like functionX(): got NULL pointer to the log file, with timestamps and any other relevant context. As for deciding whether to let it blow up or to check and throw an error... well, it really depends on what your application does, what your priorities are, and so on. I myself would recommend a combination of checking/throwing exceptions and logging. Analyzing crash dumps seems like a waste of time.
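A small sketch of that combination; the function, logger and log-file names are made up for illustration:

    #include <ctime>
    #include <fstream>
    #include <stdexcept>

    // Hypothetical logging helper: appends a timestamped line to a log file.
    void LogError(const char* message) {
        std::ofstream log("app.log", std::ios::app);
        std::time_t now = std::time(nullptr);
        log << now << " ERROR " << message << '\n';
    }

    void FunctionX(const char* name) {
        if (name == nullptr) {
            LogError("FunctionX(): got NULL pointer");  // leave a trace in the log
            throw std::invalid_argument("FunctionX(): name must not be NULL");
        }
        // ... use name ...
    }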

+1

I too have come to the conclusion that it is better to let the application crash as soon as a rotten pointer is dereferenced than to check the parameters of every function. At the public interface - the API level, where applications call into the library - there should be checks and proper error handling (with diagnostics in the end), but from there inwards there is no need for check after check. Look at the standard C library: it follows this very philosophy. If you call strlen, memset or qsort with NULL, you get a well-deserved crash. This makes sense, because in 99.9% of cases you will be passing good pointers, and the checks inside the function would be redundant.
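A rough sketch of that layering, with invented names; validation lives only at the public API boundary:

    #include <cstddef>

    // Internal helper: documented non-NULL, no check - a bad pointer crashes here,
    // just as strlen/memset/qsort would.
    static double SumInternal(const double* values, std::size_t count) {
        double total = 0.0;
        for (std::size_t i = 0; i < count; ++i) total += values[i];
        return total;
    }

    // Public API entry point: this is where validation and diagnostics belong.
    int LibSum(const double* values, std::size_t count, double* result) {
        if (values == nullptr || result == nullptr) return -1;  // error code for callers
        *result = SumInternal(values, count);
        return 0;
    }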

I don't even check allocations any more. In most cases, if an allocation fails it is too late to do anything useful about it, and an application crash is better than silently (or even noisily) producing crap results. The people who monitor production notice a failure immediately and can diagnose it or restart the job without undue fuss; one more error message in the error logs is easily overlooked and can corrupt the results for days.

In the project I share with a colleague who leans more towards the opposite approach, there are fewer bugs in my code, because the functional code is not drowned in a sea of unnecessary checks. Of the last 10 bugs we had in production, 9 were in his part of the code, and the crash in my code was fixed in 10 minutes: the stack trace showed directly in which function I had made a false assumption about the parameters.

+1

From your edit I see that you are describing a published API. Because of that, you have to think about your users.

Which is better for them when they are debugging their code:

  • A null-pointer crash with a stack trace three or more levels deep inside your library code, or
  • An exception thrown one level into the library that says: "you did it wrong"?

I think the answer is pretty simple, especially when you consider the bug reports you would have to field in the first case, which would be a waste of your time.

+1
