As part of some unit testing code I'm writing, I wrote the following function. Its purpose is to determine whether it is possible to round a to b, regardless of how many digits of precision a or b carries.
def couldRoundTo(a, b):
    """Can you round a to some number of digits, such that it equals b?"""
    roundEnd = len(str(b))
    if a == b:
        return True
    for x in range(0, roundEnd):
        if round(a, x) == b:
            return True
    return False
Here is some function output:
>>> couldRoundTo(3.934567892987, 3.9)
True
>>> couldRoundTo(3.934567892987, 3.3)
False
>>> couldRoundTo(3.934567892987, 3.93)
True
>>> couldRoundTo(3.934567892987, 3.94)
False
As far as I can tell, this works. However, I'm hesitant to rely on it, since I don't have a thorough understanding of floating-point precision issues. Can someone tell me whether this is suitable for the purpose? If not, how can I improve it?
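For reference, this is the kind of behaviour I'm worried about (assuming I understand it correctly): some decimal values, such as 2.675, have no exact binary representation, so round() can return something other than the "textbook" answer:

>>> # 2.675 is actually stored as roughly 2.67499999999999982,
>>> # so rounding to 2 places gives 2.67 rather than 2.68
>>> round(2.675, 2)
2.67
>>> couldRoundTo(2.675, 2.67)
True
>>> couldRoundTo(2.675, 2.68)
False

I'm not sure whether cases like this could make my function give a wrong answer for the inputs I care about.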
python floating-point rounding
Wilduck