It likely offers more accuracy. As mentioned in another answer, there are also advantages to not converting back and forth, which in turn matters when storing intermediate results. Imagine you have a pixel that is 1.0 in float (255 in bytes) and you multiply it by 0.65. That gives 165.75, but converting back to an integer truncates it to 165, so we lose 0.75. If we do more math on top of that, the error can grow. And of course, if some intermediate step pushes our value above 255 (1.0) or below 0 (0.0), a float can simply hold the out-of-range value for a while and we can clamp it later, only when necessary.
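To make that accumulation concrete, here is a minimal sketch in plain Python (the scale factors 0.65, 1.3 and 0.9 are made up for illustration): the same chain of multiplications is applied twice, once truncating to an 8-bit integer after every step and once staying in float until the very end.

```python
pixel_int = 255    # 8-bit pixel, i.e. 1.0
pixel_flt = 1.0    # the same pixel kept as a float

for factor in (0.65, 1.3, 0.9):
    pixel_int = int(pixel_int * factor)   # truncate back to an integer each step
    pixel_flt = pixel_flt * factor        # stay in float, no intermediate rounding

print(pixel_int)                  # 192 -- truncation error piled up at each step
print(round(pixel_flt * 255))     # 194 -- a single rounding at the very end
```

Three steps in, the integer path is already two levels darker than rounding once at the end, and longer pipelines drift further.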
Of course, you cannot store more data in less space (usually), so a typical float is four times the size of a byte. Some GPUs support 16-bit floats, which have a 10-bit mantissa (11 significand bits with the implicit leading bit) and a 5-bit exponent; that makes them useful for most kinds of simple pixel math. But that still doubles the size without doubling the "accuracy": a colleague of mine spent a long time trying to get an algorithm using 16-bit floats to match the results of 32-bit floats and never got there.
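The precision gap is easy to see numerically; a small sketch, assuming NumPy is available (its `np.float16` is IEEE 754 half precision):

```python
import numpy as np

# Machine epsilon: the gap between 1.0 and the next representable value.
print(np.finfo(np.float16).eps)   # 0.000977        (~3 decimal digits of precision)
print(np.finfo(np.float32).eps)   # 1.1920929e-07   (~7 decimal digits)

# A value that a float32 holds comfortably is already distorted in float16:
v = np.float32(0.123456)
print(np.float16(v))              # ~0.1235 -- the low mantissa bits are gone
```

With only about three significant decimal digits, each half-precision operation rounds coarsely enough that long chains of pixel math visibly diverge from the 32-bit result, which matches the experience above.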