On a platform where NULL is represented as 0, has any compiler ever generated unexpected code for NULL <= p ?

In C99, equality == is never undefined. At worst it can produce 1 by accident if you apply it to the wrong addresses (for example, &x + 1 == &y may be true by accident). It does not, by itself, cause undefined behavior. Many, but not all, invalid addresses are undefined to compute or use according to the standard, so in p == &x with p a dangling pointer, or in &x + 2 == &y with its invalid address, it is the address that causes the undefined behavior, not == .

On the other hand, >= and the other relational comparisons are undefined when applied to pointers that do not point inside the same object. This includes the test q >= NULL , where q is a valid pointer. That test is the subject of my question.

I am working on a static analyzer for low-level embedded code. It is normal for such code to do things outside of what the standard allows. As an example, such code may initialize an array of pointers with memset(...,0,...) , although the standard does not say that NULL and 0 must have the same representation. To be useful, the analyzer must accept such idioms and interpret them the way the programmer expects: a warning here would be perceived by the programmer as a false positive.

So, the analyzer already assumes that NULL and 0 have the same representation (the user is expected to check that their compiler agrees with this assumption before trusting the analyzer). I noticed that some programs compare valid pointers to NULL with >= ( this library is an example). This works as intended as long as NULL is represented as 0 and pointer comparison is compiled as an unsigned integer comparison. I only want the analyzer to warn about this if, perhaps because of some aggressive optimization, it could be compiled into something different from what the programmer had in mind on ordinary platforms. So my question is: is there any example of a compiler that does not evaluate q >= NULL as 1 on a platform where NULL is represented as 0 ?

NOTE. This question is not about using 0 in a pointer context to obtain a null pointer. The assumption about the representation of NULL is a real assumption, because in the memset() example there is no conversion involved.

1 answer

There are certain pointers which, when you reinterpret them as a signed integer of pointer width, have a negative sign.

In particular, all kernel-memory addresses on Win32 do, and if you use a "large address aware" configuration, so does a whole gigabyte of user-space addresses, since user space then grows to 3 GB.

I do not know the details of C pointer arithmetic, but I suspect that some compilers could compare such pointers as signed values, in which case they would compare as < 0.
