In C, how do I calculate the signed difference between two 48-bit unsigned integers?

I have two readings from an unsigned 48-bit nanosecond counter that can wrap around.

I need the difference between the two times, in nanoseconds.

I think I can assume the readings were taken at about the same time, so of the two possible answers I believe I can safely take the one with the smaller magnitude.

Both are stored as uint64_t , since I don't think I can have 48-bit types.

I would like to calculate the difference between the two as a signed integer (presumably int64_t ), taking the wrapping into account.

i.e. if I start with

 x=5 y=3 

then the result of x - y is 2, and it should stay 2 if I increment both x and y, even when they wrap past the maximum value 0xffffffffffff

Similarly, if x = 3, y = 5, then x - y is -2, and it should stay -2 if x and y are incremented together.

If I could declare x and y as uint48_t , and the difference as int48_t , then I think

 int48_t diff = x - y; 

will work.

How do I simulate this behaviour with the 64-bit arithmetic I've got?

(I think I can assume that any machine this is likely to run on will use two's complement arithmetic.)

P.S. I can probably hack something together, but I wonder if there is a nice standard way of doing this that the next person to read my code will understand.

P.P.S. Also, this code is going to end up in the innermost of tight loops, so something that compiles efficiently would be nice, although readability trumps if there's a choice.

+8
c math unsigned signed
3 answers

You can simulate an unsigned 48-bit integer type by simply masking off the top 16 bits of a uint64_t after any arithmetic operation. So, for example, to subtract those two times you could do:

 uint64_t diff = (after - before) & 0xffffffffffff; 

This gives you the correct value even if the counter wrapped during the interval. If it didn't wrap, the masking isn't needed, but it doesn't hurt either.

Now, if you want this difference to be interpreted by your caller as a signed quantity, you have to sign-extend the 48th bit. That means that if the 48th bit is set, the number is negative, and you want bits 49 through 64 of your 64-bit number to be set as well. I think a simple way to do that is:

 int64_t diff_signed = (int64_t)(diff << 16) >> 16; 

Caveat: You should probably test this to make sure it works, and also be aware that there is implementation-defined behaviour where I cast the uint64_t to int64_t , and I think there is also implementation-defined behaviour when right-shifting a negative signed number. I'm sure a C language lawyer could come up with something more robust.
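For what it's worth, one fully portable way to do the sign extension, avoiding both the out-of-range cast and the right shift of a negative value, is to handle the negative case explicitly. A sketch, not exhaustively tested; the helper name is just for illustration:

 #include <stdint.h>

 /* Interpret the low 48 bits of v as a two's-complement value and return it
    as an int64_t, without implementation-defined casts or shifts. */
 static int64_t sign_extend_48(uint64_t v)
 {
     v &= 0xFFFFFFFFFFFFULL;                        /* keep only the low 48 bits     */
     if (v & 0x800000000000ULL)                     /* bit 47 set: value is negative */
         return -(int64_t)(0x1000000000000ULL - v); /* i.e. v - 2^48                 */
     return (int64_t)v;
 }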

Update: The OP points out that if you combine the wrap-aware difference and the sign extension into a single operation, there is no need to mask at all. That would look like this:

 int64_t diff = (int64_t)(x - y) << 16 >> 16; 
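Putting it all together, a small self-contained test might look like this (the counter readings here are made up; the same two's-complement and shift caveats as above apply):

 #include <stdint.h>
 #include <stdio.h>

 #define MASK48 0xFFFFFFFFFFFFULL

 /* Wrap-aware signed difference of two 48-bit counter readings. */
 static int64_t diff48(uint64_t x, uint64_t y)
 {
     return (int64_t)((x - y) << 16) >> 16;
 }

 int main(void)
 {
     uint64_t y = MASK48 - 1;            /* reading just below the wrap point */
     uint64_t x = (y + 5) & MASK48;      /* reading taken 5 ns later, wrapped */

     printf("%lld\n", (long long)diff48(x, y));   /* prints  5 */
     printf("%lld\n", (long long)diff48(y, x));   /* prints -5 */
     return 0;
 }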
+5
 struct Nanosecond48 {
     unsigned long long u48 : 48;
     // int res : 12;   // just for clarity, don't really need this one
 };

Here we simply use an explicit bit-field width of 48 bits, and with this (admittedly somewhat awkward) type you let your compiler handle the wrapping correctly across different architectures/platforms/whatnot.

As below:

 struct Nanosecond48 u1, u2, overflow;
 overflow.u48 = -1L;   /* all 48 bits set */
 u1.u48 = 3;
 u2.u48 = 5;
 const unsigned long long diff = (u2.u48 + (overflow.u48 + 1) - u1.u48) & 0x0000FFFFFFFFFFFF;

Of course, in the last statement you could instead just take the remainder with % (overflow.u48 + 1) if you prefer.
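For example, if you store the subtraction back into the 48-bit field, the wrapping happens for free because the assignment truncates modulo 2^48. A sketch, assuming the compiler accepts unsigned long long bit-fields (GCC and Clang do):

 #include <stdio.h>

 struct Nanosecond48 { unsigned long long u48 : 48; };

 int main(void)
 {
     struct Nanosecond48 before = { 0xFFFFFFFFFFFEULL };  /* just below the wrap point */
     struct Nanosecond48 after  = { 3 };                  /* counter has wrapped       */
     struct Nanosecond48 diff;

     diff.u48 = after.u48 - before.u48;   /* stored back into 48 bits: wraps mod 2^48 */

     printf("%llu\n", (unsigned long long)diff.u48);      /* prints 5 */
     return 0;
 }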

+3

Do you know which reading was earlier and which was later? If so:

 diff = (earlier <= later) ? later - earlier : WRAPVAL - earlier + later; 

where WRAPVAL is (1 << 48), which is pretty easy to read.
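A sketch of how that might look (note the ULL suffix so the shift is done in 64-bit arithmetic rather than overflowing an int; the names are just for illustration):

 #include <stdint.h>

 #define WRAPVAL (1ULL << 48)   /* one past the largest 48-bit value */

 /* Unsigned elapsed time from 'earlier' to 'later', allowing for one wrap. */
 static uint64_t elapsed48(uint64_t earlier, uint64_t later)
 {
     return (earlier <= later) ? later - earlier
                               : WRAPVAL - earlier + later;
 }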

+2
