I ran into some code that uses the bit masks 0xff and 0xff00, which in 16-bit binary form are 00000000 11111111 and 11111111 00000000.
public static boolean isStringCompressed(String inString)
{
    try
    {
        byte[] bytes = inString.getBytes("ISO-8859-1");
        int gzipHeader = ((int) bytes[0] & 0xff)
                | ((bytes[1] << 8) & 0xff00);
        return GZIPInputStream.GZIP_MAGIC == gzipHeader;
    } catch (Exception e)
    {
        return false;
    }
}
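To make sure I am reading the method correctly, I put it in a small self-contained class and exercised it with both gzip-compressed and plain input (the class name and the main method are my own scaffolding, not part of the original code; ISO-8859-1 is used for the round trip because it maps every byte value one-to-one to a char):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCheckDemo {
    public static boolean isStringCompressed(String inString) {
        try {
            byte[] bytes = inString.getBytes("ISO-8859-1");
            int gzipHeader = ((int) bytes[0] & 0xff)
                    | ((bytes[1] << 8) & 0xff00);
            return GZIPInputStream.GZIP_MAGIC == gzipHeader;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        // Gzip some text, then carry the raw bytes in a String via
        // ISO-8859-1, which preserves every byte 0x00-0xff exactly.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(buf)) {
            gz.write("hello".getBytes("ISO-8859-1"));
        }
        String compressed = new String(buf.toByteArray(), "ISO-8859-1");

        System.out.println(isStringCompressed(compressed)); // true
        System.out.println(isStringCompressed("hello"));    // false
    }
}
```

The gzip stream starts with the bytes 0x1f 0x8b, so the check succeeds for the compressed string and fails for the plain one.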
I am trying to figure out the purpose of these bit masks in this context (applied to an array of bytes). What difference do they make? I don't understand.
For context: this method appears to check for the GZip magic number, which is 35615 in decimal, 8B1F in hex, and 10001011 00011111 in binary.
Do I understand correctly that this swaps the bytes? For example, with my input string \u001f\u008b:
bytes[0] & 0xff
bytes[0] = 1f = 00011111
& ff = 11111111
--------
= 00011111
bytes[1] << 8
bytes[1] = 8b = 10001011
<< 8 = 10001011 00000000
((bytes[1] << 8) & 0xff00)
  10001011 00000000
& 11111111 00000000
-------------------
= 10001011 00000000
So,
00000000 00011111
10001011 00000000 |
-----------------
10001011 00011111 = 8B1F
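The arithmetic above can be reproduced directly in Java and the intermediate values inspected (a small sketch of my own; note that Java's byte type is signed, so bytes[1] holds -117 rather than 139):

```java
public class MaskWalkthrough {
    public static void main(String[] args) {
        byte[] bytes = "\u001f\u008b"
                .getBytes(java.nio.charset.StandardCharsets.ISO_8859_1);

        int lo = (int) bytes[0] & 0xff;     // low byte of the header
        int hi = (bytes[1] << 8) & 0xff00;  // high byte, shifted into place
        int header = lo | hi;               // combined 16-bit value

        System.out.printf("lo     = %04x%n", lo);       // 001f
        System.out.printf("hi     = %04x%n", hi);       // 8b00
        System.out.printf("header = %04x (%d)%n", header, header); // 8b1f (35615)
    }
}
```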
It seems to me that the & does nothing to the original byte in both cases, bytes[0] & 0xff and (bytes[1] << 8) & 0xff00. What am I missing?