The difference in the endpoints comes from the fact that NumPy computes the length up front instead of ad hoc, because it needs to pre-allocate the array. You can see this in the _calc_length helper. Instead of stopping when it reaches the stop argument, it stops when it reaches the pre-computed length.
Computing the length up front doesn't save you from the problems of a non-integer step, though, and you'll often get the "wrong" endpoint anyway, for example with numpy.arange(0.0, 2.1, 0.3):
In [46]: numpy.arange(0.0, 2.1, 0.3)
Out[46]: array([ 0. ,  0.3,  0.6,  0.9,  1.2,  1.5,  1.8,  2.1])
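A rough Python model of this length computation makes the extra element visible (a sketch of the idea only; the calc_length below is a hypothetical stand-in, not NumPy's actual _calc_length code):

import math

def calc_length(start, stop, step):
    # Idea behind the up-front length computation: decide in advance
    # how many elements fit in [start, stop), then generate exactly
    # that many.
    return max(0, math.ceil((stop - start) / step))

calc_length(0.0, 2.1, 0.3)  # (2.1 - 0.0) / 0.3 is 7.000000000000001
                            # in floats, so ceil gives 8 instead of 7,
                            # which is why 2.1 shows up at the end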
It is much safer to use numpy.linspace, where instead of the step size you specify how many elements you want and whether to include the right endpoint.
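For instance, a minimal sketch (num and endpoint are linspace's actual parameters; the outputs in the comments are from a typical NumPy build):

import numpy

# 8 evenly spaced values from 0.0 to 2.1, right endpoint included
numpy.linspace(0.0, 2.1, num=8)
# array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8, 2.1])

# endpoint=False excludes the right endpoint, like arange
numpy.linspace(0.0, 2.1, num=7, endpoint=False)
# array([0. , 0.3, 0.6, 0.9, 1.2, 1.5, 1.8])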
It might seem that NumPy computed the elements without any rounding errors, but that is just an artifact of different display logic: NumPy truncates the displayed precision more aggressively than float.__repr__ does. If you use tolist to get an ordinary list of ordinary Python floats (and therefore the usual display logic for float), you can see that NumPy ran into rounding error too:
In [47]: numpy.arange(0, 1, 0.1).tolist()
Out[47]: [0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6000000000000001, 0.7000000000000001, 0.8, 0.9]
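Another way to see that this is purely a display difference is to raise NumPy's print precision with set_printoptions (illustrative session; array printing rounds to 8 significant digits by default):

import numpy

a = numpy.arange(0, 1, 0.1)
a                                     # rounded display hides the errors
numpy.set_printoptions(precision=17)
a                                     # now 0.30000000000000004 etc. appear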
It ended up with slightly different rounding errors, for example in .6 and .7 instead of .8 and .9, because it also uses a different way of computing the elements, implemented in the fill function for the relevant dtype.
The fill implementation has the advantage that it uses start + i*step instead of repeatedly adding the step, which avoids accumulating error with each addition. But it has the disadvantage that (for no compelling reason I can see) it recomputes the step from the first two elements rather than taking the step as an argument, so it can lose a bit of precision up front.
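The difference between the two strategies is easy to demonstrate in pure Python (an illustrative sketch with hypothetical helpers, not NumPy's actual fill code):

def by_accumulation(start, step, n):
    out, value = [], start
    for _ in range(n):
        out.append(value)
        value += step         # rounding error can accumulate per addition
    return out

def by_multiplication(start, step, n):
    # start + i*step: each element involves only one multiply and one add
    return [start + i * step for i in range(n)]

by_accumulation(0.0, 0.1, 10)[8]      # 0.7999999999999999
by_multiplication(0.0, 0.1, 10)[8]    # 0.8
by_multiplication(0.0, 0.1, 10)[6]    # 0.6000000000000001, matching the
                                      # tolist output above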