Defining a NULL Macro in C++
Leon is right that if there are several overloads of the same function, '\0' will prefer the one that takes a parameter of type char. However, it is important to note that with a typical definition of NULL, the compiler will prefer the overload that takes an argument of type int rather than type void*!
What probably causes this confusion is that the C language allows you to define NULL as (void*)0. The C++ standard explicitly states (draft N3936, p. 444):
Possible definitions [of the NULL macro] include 0 and 0L, but not (void*)0.
This restriction is necessary because, for example, char *p = (void*)0 is valid C but invalid C++, whereas char *p = 0 is valid in both languages.
In C++11 and later, you should use nullptr if you need a null constant that behaves like a pointer.
How Leon's proposal works in practice
This code defines several overloads of a single function. Each overload prints its parameter type:
#include <cstddef>  // NULL
#include <iostream>

void f(int)   { std::cout << "int" << std::endl; }
void f(long)  { std::cout << "long" << std::endl; }
void f(char)  { std::cout << "char" << std::endl; }
void f(void*) { std::cout << "void*" << std::endl; }

int main() {
    f(0);       // integer literal
    f(NULL);    // null pointer constant, typically 0 or 0L
    f('\0');    // character literal
    f(nullptr); // std::nullptr_t
}
On Ideone, this outputs:
int
int
char
void*
Therefore, I would say that the problem with overloads does not arise in real applications; it is a pathological case. The NULL constant is the one that behaves surprisingly, and it should be replaced with nullptr in C++11.
What if NULL is not zero?
Another pathological case was brought up by Andrew Keaton under another question:
Note that what counts as a null pointer in C does not depend on the underlying architecture. If the underlying architecture defines the null pointer value as the address 0xDEADBEEF, it is up to the compiler to sort this mess out.
Thus, even on such an outlandish architecture, the following are still correct ways to check for a null pointer:
if (!pointer)
if (pointer == NULL)
if (pointer == 0)
The following are WRONG ways to check for a null pointer:
#define MYNULL (void*)0xDEADBEEF
if (pointer == MYNULL)
if (pointer == 0xDEADBEEF)
since the compiler treats them as ordinary comparisons with an arbitrary address, not as null pointer checks.
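To make the distinction concrete, here is a minimal sketch (my own illustration; the 0xDEADBEEF platform is hypothetical and the function names are invented). Comparing a pointer with the literal 0 is a genuine null check that the compiler translates to the platform's null representation, while comparing the pointer's raw bits with a fixed address is ordinary integer arithmetic:

#include <cstdint>
#include <iostream>

// Correct: 0 in a pointer context is a null pointer constant; the
// compiler converts it to the platform's actual null representation,
// whatever bit pattern that happens to be.
bool is_null(const int* p) {
    return p == 0;
}

// Wrong: this compares the pointer's raw bits with a fixed address.
// Even on a platform whose null representation were 0xDEADBEEF, the
// compiler would not treat this as a null pointer check.
bool looks_like_deadbeef(const int* p) {
    return reinterpret_cast<std::uintptr_t>(p) == 0xDEADBEEF;
}

int main() {
    int x = 42;
    const int* p = &x;
    std::cout << std::boolalpha << is_null(p) << '\n'; // false
    p = 0;
    std::cout << is_null(p) << '\n';                   // true
}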
Summary
In general, I would say that the differences are mostly stylistic. If you have a function that accepts int and an overload that accepts char, and they behave differently, you will notice the difference when you call them with the constants '\0' and NULL. But as soon as you store these constants in variables, the difference disappears, because the overload to call is deduced from the type of the variable.
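For illustration, a minimal self-contained sketch of that point (overloads mirroring the example above):

#include <iostream>

void f(int)  { std::cout << "int" << std::endl; }
void f(char) { std::cout << "char" << std::endl; }

int main() {
    f(0);    // constant: calls f(int)
    f('\0'); // constant: calls f(char)

    int  i = '\0'; // the character constant is widened to int
    char c = 0;    // the integer constant is stored as char

    f(i); // variable: calls f(int); the constant's spelling is gone
    f(c); // variable: calls f(char)
}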
Using the right constant makes the code more maintainable and conveys the intent better. You should use 0 when you mean a number, '\0' when you mean a character, and nullptr when you mean a pointer. Matthieu M. points out in the comments that GCC once had a bug in which a char* was compared with '\0', while the intent was to dereference the pointer and compare a char with '\0'. Such bugs are easier to spot if the correct style is used consistently throughout the code base.
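A sketch of that bug pattern (my own reconstruction, not the actual GCC code; note that in C and in pre-C++11 C++, '\0' is a null pointer constant, so the buggy line compiles cleanly there, e.g. with -std=c++03):

#include <iostream>

// Buggy: compares the pointer itself with '\0', which acts as a null
// pointer constant here, so this only checks whether s is null.
bool is_empty_buggy(const char* s) {
    return s == '\0';
}

// Intended: dereference the pointer and compare the first character
// with the NUL terminator.
bool is_empty(const char* s) {
    return *s == '\0';
}

int main() {
    std::cout << std::boolalpha
              << is_empty_buggy("") << '\n'  // false: "" is not a null pointer
              << is_empty("") << '\n';       // true: "" starts with NUL
}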
To answer your question: there is really no practical use case that would prevent you from using '\0' and NULL interchangeably, just stylistic reasons and a few corner cases.