Python's behavior stems from the way floating-point numbers are stored on the computer and from the standardized rounding rules of IEEE 754, the standard that defines the number formats and arithmetic operations used on almost all modern computers.
The need to store numbers efficiently in binary has led computers to use floating-point numbers. These are easy for processors to work with, but have the disadvantage that many decimal numbers cannot be represented exactly. As a result, the stored values are sometimes slightly different from what we think they should be.
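One way to see this directly is to ask Python for the exact value a float literal actually stores; converting a float to `decimal.Decimal` shows its exact binary value with no rounding. A small sketch using only the standard library, with the two numbers discussed below:

```python
from decimal import Decimal

# Decimal(float) exposes the exact binary value the float stores.
print(Decimal(-67.6640625))   # exactly representable: prints -67.6640625
print(Decimal(-67.6000625))   # not exact: slightly larger in magnitude than written
```

The first number happens to be a sum of negative powers of two, so it round-trips exactly; the second does not, which is exactly the discrepancy visible in the `%.20f` output below.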
The situation becomes a little clearer if we print the values in Python with more digits, rather than truncating them:
>>> print('%.20f' % -67.6640625)
-67.66406250000000000000
>>> print('%.20f' % -67.6000625)
-67.60006250000000704858
So, as you can see, -67.6640625 is a number that can be represented exactly, but -67.6000625 cannot; the stored value is actually slightly larger in magnitude. The default rounding mode defined by the IEEE 754 standard says that anything above a half should be rounded up and anything below should be rounded down. In the case of -67.6000625, the part being dropped is a half plus a small amount, so it is rounded up. In the case of -67.6640625, however, it is exactly a half, so the tie-break rule comes into effect. The default tie-break rule is to round to the nearest even digit. Since the last kept digit, 2, is already even, the value is rounded down and the digit stays 2.
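Python's built-in round() follows the same round-half-to-even rule ("banker's rounding"), which makes the tie-break easy to demonstrate alongside the %-formatting behavior discussed here:

```python
# Round-half-to-even: exact ties go to the even neighbour.
print(round(0.5))   # 0
print(round(1.5))   # 2
print(round(2.5))   # 2

# The same rule applied by %f formatting to the two values above:
print('%f' % -67.6640625)   # -67.664062  (exact tie, last digit 2 is even)
print('%f' % -67.6000625)   # -67.600063  (just above the tie, rounds up)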
So Python is following the approach recommended by the floating-point standard. The question, then, is why your version of MATLAB does not. I tried this on my machine with 64-bit MATLAB R2016a and got the same result as in Python:
>> fprintf(1,'%f', -67.6640625)
-67.664062>>
So it seems that MATLAB at some point used a different rounding approach (perhaps a non-standard one, perhaps one of the alternatives permitted by the standard), and has since switched to the same rule as everyone else.