The best way to answer your question is to run tests (on both random samples and values spread across the range of interest) and see whether the resulting numbers match in their binary representation.
Note that one problem you will run into is that your functions will not work for values > MAX_INT/2, because of the way you compute the average:
    avg = (x1+x2)/2        # clobbers numbers > MAX_INT/2
    avg = 0.5*x1 + 0.5*x2  # no clobbering
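To make the clobbering concrete, here is a small sketch of my own (Python's native integers never overflow, so I use numpy's fixed-width int32 as a stand-in for a C-style int):

    import numpy as np

    # Illustration only: numpy's int32 plays the role of a fixed-width C int,
    # since Python's own integers are arbitrary-precision and never overflow.
    x1 = np.int32(2_000_000_000)
    x2 = np.int32(2_000_000_000)

    bad  = (x1 + x2) // 2       # x1 + x2 wraps around int32 (numpy warns), result is garbage
    good = 0.5 * x1 + 0.5 * x2  # scales first, stays in range (as a float)

    print(bad, good)            # a negative number vs. 2000000000.0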
This is almost certainly not a problem unless you are writing a library at the language level, and if most of your numbers are small it may not matter at all. In fact, it is probably not worth worrying about here, since the variance will exceed MAX_INT anyway, being a quadratic (sum-of-squares) quantity; I would say you could work with the standard deviation instead, but nobody does that.
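A rough back-of-the-envelope check (my own numbers, just to illustrate the scale):

    # Values around 1e5 already give squared deviations around 1e10,
    # well past a signed 32-bit MAX_INT of 2**31 - 1.
    values = [50_000, 150_000, 100_000]
    mean = sum(values) / len(values)                 # 100000.0
    sum_sq = sum((v - mean) ** 2 for v in values)    # 5000000000.0
    print(sum_sq > 2**31 - 1)                        # True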
Here are some experiments in Python (which, I believe, follows IEEE 754 regardless of the fact that it probably delegates the math to C libraries...):
    >>> from itertools import product
    >>> def compare(numer, denom):
    ...     assert ((numer/denom)*2).hex() == ((2*numer)/denom).hex()
    ...
    >>> [compare(a, b) for a, b in product(range(1, 100), range(1, 100))]
No problem there, I think, because multiplying and dividing by 2 is exact in binary floating point. However, try multiplying and dividing by 3:
    >>> def compare(numer, denom):
    ...     assert ((numer/denom)*3).hex() == ((3*numer)/denom).hex(), '...'
    ...
    >>> [compare(a, b) for a, b in product(range(1, 100), range(1, 100))]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<stdin>", line 1, in <listcomp>
      File "<stdin>", line 2, in compare
    AssertionError: 0x1.3333333333334p-1!=0x1.3333333333333p-1
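The two values in the failing assertion are the two roundings of 3/5 (e.g. numer=1, denom=5). A quick sketch of my own, using math.frexp, of why 2 is harmless while 3 is not: multiplying by 2 only shifts the exponent of a binary float, so nothing is rounded, whereas multiplying by 3 changes the mantissa and can round differently depending on where the division happens.

    >>> import math
    >>> math.frexp(1/5)            # (mantissa, exponent) of 0.2, already rounded once
    (0.8, -2)
    >>> math.frexp((1/5) * 2)      # same mantissa, exponent bumped: exact
    (0.8, -1)
    >>> ((1/5) * 3).hex()          # rounded value of 1/5, then rounded again
    '0x1.3333333333334p-1'
    >>> ((3 * 1) / 5).hex()        # a single rounding of the exact value 3/5
    '0x1.3333333333333p-1'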
Does this matter much? Perhaps, if you work with very small numbers (in which case you may want to use log arithmetic). But if you work with large numbers (rare when dealing with probabilities) and you postpone the division, then, as I mentioned, you run the risk of overflow, and, even worse, the risk of errors caused by hard-to-read code.
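For completeness, a minimal sketch of my own of what "log arithmetic" means here: multiply tiny probabilities by adding their logarithms, so the product never underflows to zero.

    import math

    # Multiplying tiny probabilities directly underflows to 0.0;
    # summing their logs keeps the result finite and usable.
    probs = [1e-320, 1e-310, 1e-300]

    direct = 1.0
    for p in probs:
        direct *= p                                  # underflows to 0.0

    log_product = sum(math.log(p) for p in probs)    # about -2141

    print(direct)        # 0.0
    print(log_product)   # log of the true product, still meaningful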