Let's break it down, one step at a time.
    void CryptoBuffer(unsigned char *Buffer, unsigned short length)
    {
        unsigned short i;
        for (i = 0; i < length; i++)
        {
            *Buffer ^= 0xAA;
            *Buffer++ += 0xC9;
        }
    }
Regardless of some of the other comments, this is how you usually do it in C/C++. There is nothing unusual in this code, and it is not too difficult, but I think it's good to break it down to show you what is happening.
Notes:
- unsigned char basically matches byte in C#
- unsigned short length has a value in the range 0-65535; a C# int covers that easily.
- Buffer is post-incremented (Buffer++), so it advances through the array as the loop runs.
- The byte addition (+= 0xC9) can overflow; when it does, the result is simply truncated to 8 bits (see the short sketch right after this list).
- The buffer is passed by pointer, so the pointer in the calling method remains unchanged.
- This is just basic C code, not C++. It is perfectly safe to assume that people do not use operator overloading here.
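To make the overflow note concrete, here is a minimal C# sketch (my own illustration, not part of the original code) showing that casting the result of the addition back to byte truncates it to 8 bits:

    using System;

    class OverflowDemo
    {
        static void Main()
        {
            byte b = 0x80;
            int sum = b + 0xC9;            // 0x149: needs more than 8 bits
            byte truncated = (byte)sum;    // the cast keeps only the low 8 bits: 0x49
            Console.WriteLine("{0:X2}", truncated);  // prints 49
        }
    }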
The only "hard" thing here is Buffer ++. Details can be found in Sutter's Exceptional C ++ book, but a small example also explains this. And, fortunately, we have a great example at our disposal. literal translation of the above code:
    void CryptoBuffer(unsigned char *Buffer, unsigned short length)
    {
        unsigned short i;
        for (i = 0; i < length; i++)
        {
            *Buffer ^= 0xAA;
            unsigned char *tmp = Buffer;
            *tmp += 0xC9;
            Buffer = tmp + 1;
        }
    }
In this case the temporary variable can be trivially eliminated, which leads to:
    void CryptoBuffer(unsigned char *Buffer, unsigned short length)
    {
        unsigned short i;
        for (i = 0; i < length; i++)
        {
            *Buffer ^= 0xAA;
            *Buffer += 0xC9;
            ++Buffer;
        }
    }
Porting this code to C# is pretty straightforward now:
    private void CryptoBuffer(byte[] Buffer, int length)
    {
        for (int i = 0; i < length; ++i)
        {
            Buffer[i] = (byte)((Buffer[i] ^ 0xAA) + 0xC9);
        }
    }
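As a quick sanity check (my own addition, not in the original), you could call the ported method like this; it assumes the call is made from an instance method of the same class that defines CryptoBuffer, and the sample bytes are borrowed from the brute-force test below:

    byte[] data = { 0x03, 0x18, 0x01 };   // sample input bytes
    CryptoBuffer(data, data.Length);      // the array is modified in place
    Console.WriteLine(BitConverter.ToString(data));
    // 0x03 ^ 0xAA = 0xA9, + 0xC9 = 0x172 -> truncated to 0x72, and so on;
    // prints 72-7B-74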
This is basically the same as your ported code, which means that somewhere along the way something went wrong... So, let me try to hack the CryptoBuffer, right? :-)
If we assume that the first byte is not used (as you said) and that 0xAA and/or 0xC9 are wrong, we can simply try all the combinations:
    static void Main(string[] args)
    {
        byte[] orig = new byte[] { 0x03, 0x18, 0x01 };
        byte[] target = new byte[] { 0x6F, 0x93, 0x8b };
        for (int i = 0; i < 256; ++i)          // candidate XOR constant
        {
            for (int j = 0; j < 256; ++j)      // candidate additive constant
            {
                bool okay = true;
                for (int k = 0; okay && k < 3; ++k)
                {
                    byte tmp = (byte)((orig[k] ^ i) + j);
                    if (tmp != target[k])
                    {
                        okay = false;
                        break;
                    }
                }
                if (okay)
                {
                    Console.WriteLine("Solution for i={0} and j={1}", i, j);
                }
            }
        }
        Console.ReadLine();
    }
There we go: oops, no solutions. This means that CryptoBuffer is not doing what you think it is doing, or that part of the C code is missing here. For example, do they really pass Buffer itself to the CryptoBuffer method, or do they change the pointer earlier?
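To illustrate what "changing the pointer earlier" would mean on the C# side, here is a hypothetical variant of the port with an extra startIndex parameter; the offset overload is my own sketch, not something the original code contains, and in C it would correspond to a call like CryptoBuffer(Buffer + 1, length - 1):

    // Hypothetical variant: start transforming at startIndex instead of 0,
    // mimicking a C caller that advances the pointer before the call.
    private void CryptoBuffer(byte[] Buffer, int startIndex, int length)
    {
        for (int i = 0; i < length; ++i)
        {
            Buffer[startIndex + i] = (byte)((Buffer[startIndex + i] ^ 0xAA) + 0xC9);
        }
    }

    // Equivalent of the C call CryptoBuffer(Buffer + 1, length - 1):
    // CryptoBuffer(data, 1, data.Length - 1);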
In conclusion, I believe the only good answer here is that critical information needed to solve this problem is missing.