While reading Hacking: The Art of Exploitation (a wonderful book!), I came across this function:
void binary_print(unsigned int value) {
    unsigned int mask = 0xff000000;
    unsigned int shift = 256*256*256;
    unsigned int byte, byte_iterator, bit_iterator;
    for(byte_iterator=0; byte_iterator < 4; byte_iterator++) {
        byte = (value & mask) / shift;
        printf(" ");
        for(bit_iterator=0; bit_iterator < 8; bit_iterator++) {
            if(byte & 0x80)
                printf("1");
            else
                printf("0");
            byte *= 2;
        }
        mask /= 256;
        shift /= 256;
    }
}
Here's the I/O table for the function:
==============================================
 INPUT :               OUTPUT
==============================================
     0 : 00000000 00000000 00000000 00000000
     2 : 00000000 00000000 00000000 00000010
     1 : 00000000 00000000 00000000 00000001
  1024 : 00000000 00000000 00000100 00000000
   512 : 00000000 00000000 00000010 00000000
    64 : 00000000 00000000 00000000 01000000
==============================================
So binary_print() prints the 32-bit binary representation of its argument, byte by byte.
But I don't understand how exactly the function produces that output. In particular:
- What is the mask for? How did the author arrive at the value 0xff000000? (0xff000000 seems to be close to 2^32, near the maximum unsigned int value on my system.)
- What is the shift for? Why initialize it to 256^3? (I suspect this has something to do with place values, as in hexadecimal notation.)
- What actually happens on these lines:
  - `byte = (value & mask) / shift`
  - `byte & 0x80`
In short, I would like to understand the conversion method binary_print() uses.