Any traps using char * instead of void * when writing cross-platform code?

Are there any pitfalls in using char * to write cross-platform code that works with memory directly?

UPDATE: For example, before casting a char * to a pointer to a specific type (for example, int *) and dereferencing it, should you check that the address is suitably aligned for that type? Will some architectures give strange results on such access?
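Concretely, the kind of check I have in mind looks something like this (a sketch only, assuming C11; the helper names are made up for illustration):

    #include <stdint.h>
    #include <string.h>

    /* Sketch: check alignment before reinterpreting bytes as an int.
       (Helper names are just for illustration.) */
    static int aligned_for_int(const char *p)
    {
        return ((uintptr_t)(const void *)p % _Alignof(int)) == 0;
    }

    static int read_int_at(const char *p)
    {
        if (aligned_for_int(p))
            return *(const int *)p;   /* only safe when p really is aligned
                                         (and the bytes actually hold an int) */
        int v;
        memcpy(&v, p, sizeof v);      /* portable fallback for unaligned data */
        return v;
    }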

I am working on a replay memory allocator to better understand how to debug memory problems. I came to the conclusion that char * is preferable because it supports pointer arithmetic and can be cast to void *; is that true? Are the following assumptions acceptable across common platforms?

    sizeof(char) == 1
    sizeof(char *) == sizeof(void *)
    sizeof(char *) == sizeof(size_t)
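For context, this is roughly how I intend to use these assumptions, plus compile-time checks to guard them (a sketch; offset_by is just an illustrative name):

    #include <stddef.h>   /* size_t */

    /* Compile-time checks: fail the build on a platform where an assumption breaks. */
    _Static_assert(sizeof(char) == 1, "sizeof(char) must be 1");
    _Static_assert(sizeof(char *) == sizeof(void *), "char * and void * differ in size");
    _Static_assert(sizeof(char *) == sizeof(size_t), "size_t is not pointer-sized here");

    /* Byte arithmetic: standard C allows it on char *, not on void *
       (arithmetic on void * is a GNU extension). */
    static void *offset_by(void *base, size_t nbytes)
    {
        return (char *)base + nbytes;
    }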
1 answer

sizeof(char) == 1 is definitely always true: the standard defines sizeof(char) to be 1.

sizeof(char *) == sizeof(void *) is probably always true. Standard C requires that they have the same representation, which at least implies the same size.
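For what it's worth, that guarantee also means a conversion to void * and back is lossless; a minimal illustration:

    #include <assert.h>

    void roundtrip(char *buf)
    {
        void *v = buf;       /* object pointers convert to void * implicitly in C */
        char *back = v;      /* and back again, no cast needed */
        assert(back == buf); /* guaranteed to compare equal */
    }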

sizeof(char *) == sizeof(size_t) is definitely not to be relied upon: I know of implementations where it does not hold (and although they probably don't fully conform to the standard, this is not one of their conformance problems).
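If what you actually need is an integer wide enough to hold a pointer value, uintptr_t from <stdint.h> (optional in the standard, but widely available) is the intended type rather than size_t; a quick sketch:

    #include <stdint.h>

    void pointer_bits(char *p)
    {
        /* The round trip through void * is what the standard guarantees for uintptr_t. */
        uintptr_t bits = (uintptr_t)(void *)p;
        char *back = (void *)bits;   /* back compares equal to p */
        (void)back;
    }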
