Take a look at the following C# code (a function extracted from the BuildProtectedURLWithValidity function at http://wmsauth.org/examples):
byte[] StringToBytesToBeHashed(string to_be_hashed)
{
    byte[] to_be_hashed_byte_array = new byte[to_be_hashed.Length];
    int i = 0;
    foreach (char cur_char in to_be_hashed)
    {
        to_be_hashed_byte_array[i++] = (byte)cur_char;
    }
    return to_be_hashed_byte_array;
}
My question is: in terms of encoding, what is the difference between a byte and a char?
I think this code really does nothing in terms of encoding. But does that mean Encoding.Default is effectively being used, so the bytes returned will depend on how the runtime encodes the string on a particular operating system?
And besides, isn't a char actually larger than a byte (I guess 2 bytes), so that the cast simply discards the high byte?
I was thinking of replacing all of this with:
Encoding.UTF8.GetBytes(stringToBeHashed)
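To illustrate why the two approaches differ, here is a small sketch (the program structure and variable names are my own): a .NET char is a 16-bit UTF-16 code unit, so casting it to byte keeps only the low byte, while Encoding.UTF8.GetBytes may produce several bytes per character:

```csharp
using System;
using System.Text;

class CharVsByteDemo
{
    static void Main()
    {
        string s = "A€"; // '€' is U+20AC, which does not fit in a single byte

        // Per-char cast, as in StringToBytesToBeHashed: keeps only the low byte.
        byte[] cast = new byte[s.Length];
        for (int i = 0; i < s.Length; i++)
            cast[i] = (byte)s[i];
        Console.WriteLine(BitConverter.ToString(cast));  // 41-AC ('€' truncated to 0xAC)

        // UTF-8 encoding: '€' becomes the three bytes E2-82-AC.
        byte[] utf8 = Encoding.UTF8.GetBytes(s);
        Console.WriteLine(BitConverter.ToString(utf8));  // 41-E2-82-AC
    }
}
```

So for pure ASCII input both methods agree, but as soon as the string contains a character above U+00FF the cast silently loses data, and the resulting hash would differ from one computed over the UTF-8 bytes.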
What do you think?
Mariano desanze