In principle, “saturation” means that values above a certain “max” value are clamped to “max”, and values below the “min” value are clamped to “min”. Typically, “min” and “max” are the limits of some data type.
So, for example, in arithmetic on unsigned bytes, "128 + 128" should be "256" (hexadecimal 0x100), which does not fit in a byte. Regular integer arithmetic overflows and discards the part that does not fit, so "128 + 128 → 0". With saturating arithmetic, "256 > 255", so the result is 255.
Another option would be scaling, which “compresses” the whole range of values into a smaller one. Saturation simply clips them.
You can also use this when narrowing larger types into smaller ones, for example packing 16-bit values into 8-bit values. Your example most likely does just that, although you probably know better than I do which types you are dealing with.
"UnsignedSaturation" most likely means a "min" of "0" and a "max" equal to the maximum value of the result type. Thus, negative inputs become "0".