Integer overflow with UDL (user-defined literal) for __int128 when negating the minimum value

For clarity and brevity, I will abbreviate the following numbers:

  • −170,141,183,460,469,231,731,687,303,715,884,105,728 as -170…728
  • 170,141,183,460,469,231,731,687,303,715,884,105,727 as 170…727

These numbers are the minimum and maximum values of a 128-bit signed integer (__int128 in gcc).

I implemented user-defined literals (raw literals) for this data type, since gcc does not offer a way to write constants of this type: _u128 for unsigned __int128 and _i128 for __int128.

The minus sign is not part of the UDL; it is the unary minus operator applied to the result of the UDL.

So, for -ddddd_i128 (where each d is a digit), the UDL computes the __int128 with the positive value ddddd, and then the compiler applies the unary minus operator to it. So far so good.
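For illustration, the decomposition is roughly this (assuming the declarations shown further below):

    __int128 a = -12345_i128;                 // what is written in the source
    __int128 b = -operator"" _i128("12345");  // what the compiler effectively does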

The problem is -170…728_i128 (which should be a valid __int128 value):
the UDL computes the positive __int128 number 170…728, which is outside the range of __int128, resulting in undefined behavior (signed integer overflow).

Is there any way to represent this constant with the UDL?


My UDLs are declared as follows (just a non-constexpr, loop-based version at the moment; they are raw literals):

    unsigned __int128 operator"" _u128(char const *str);
    __int128 operator"" _i128(char const *str);
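A minimal sketch of what such a loop-based raw-literal parse might look like (the actual bodies are not shown in the question; this assumes plain decimal digits only, no digit separators or error handling):

    unsigned __int128 operator"" _u128(char const *str)
    {
        unsigned __int128 value = 0;
        for (; *str != '\0'; ++str)                // decimal digits only
            value = value * 10 + static_cast<unsigned __int128>(*str - '0');
        return value;
    }

    __int128 operator"" _i128(char const *str)
    {
        __int128 value = 0;
        // Always accumulates a non-negative value; a leading '-' in the source
        // is not part of the literal, so the compiler negates the result later.
        // For 170...728 this accumulation itself overflows the signed range.
        for (; *str != '\0'; ++str)
            value = value * 10 + (*str - '0');
        return value;
    }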

Some usage examples:

    1000000000000000000000000000000000_i128
    -1000000000000000000000000000000000_i128
    -170141183460469231731687303715884105728_i128   // <-- this has UB
    170141183460469231731687303715884105727_u128
    340282366920938463463374607431768211455_u128

I know there are various other ways to produce the constant -170…728, such as bit shifts or arithmetic operations, but I want to be able to write all constants in a consistent way. I do not want this situation: you can create any constant with this UDL, except -170…728_i128, for which you need additional tricks.

+7
c++ c++11 integer-overflow user-defined-literals int128
1 answer

This is essentially the same problem that developers faced when implementing <limits.h>: INT_MIN cannot be defined (on a typical 32-bit system) as -2147483648. It can be (and usually is) defined as (-2147483647 - 1). You will need to do something similar. There may be no way to represent the most negative number using a single negation operator and a literal, but that is fine: it doesn't need to be.
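With the _i128 literal from the question, that could look like this (the variable name is just for illustration):

    // Same idea as INT_MIN: negate the maximum and subtract 1,
    // so the literal itself (170...727) stays within the signed range.
    __int128 const i128_min =
        -170141183460469231731687303715884105727_i128 - 1;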

+6
