High-precision arithmetic in Python and/or C/C++?

Summary: which Python package or C library is the best option for high-precision arithmetic?

I have some functions that convert fractional days (0.0-0.99999..) to a human-readable format (hours, minutes, seconds, and, more importantly, milliseconds, microseconds, nanoseconds).

The conversion is carried out by the following functions (note that the time-zone correction is not applied yet):

    d = lambda x: decimal.Decimal(str(x))

    cdef object fractional2hms(double fractional, double timezone):
        cdef object total, hms, ms_mult
        cdef int i
        hms = [0, 0, 0, 0, 0, 0]
        ms_mult = (d(3600000000000), d(60000000000), d(1000000000),
                   d(1000000), d(1000), d(1))
        total = d(fractional) * d(86400000000000)
        for i in range(len(ms_mult)):
            hms[i] = (total - (total % ms_mult[i])) / ms_mult[i]
            total = d(total % ms_mult[i])
        return [int(x) for x in hms]

And for fractional:

    def to_fractional(self):
        output = (self.hour / d(24.0)) + (self.minute / d(1440.0))
        output += (self.second / d(86400.0)) + (self.millisecond / d(86400000.0))
        output += self.microsecond / d(86400000000.0)
        output += self.nanosecond * (d(8.64) * d(10)**d(-9))
        return output

However, the round-trip conversion is inaccurate:

    >>> jdatetime.DayTime.fromfractional(d(0.567784356873)).to_fractional()
    Decimal('0.56779150214342592592592592592592592592592592592592592592592592592592592592592592592592592592592592592592592592592')
    # Difference in-out: Decimal('0.000007145270')

When I modify d() to return a regular Python float:

    # Difference in-out: 7.1452704258900823e-06 (same)

So my question is: which Python package or C library can do this more accurately?

+1
2 answers

The difference is due to a bug in your code, not to any precision error. The line

 output += self.nanosecond * (d(8.64) * d(10)**d(-9)) 

should be something like

 output += self.nanosecond / d(86400000000000) 
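As a sanity check, here is a standalone sketch of the corrected round trip. These are plain-Python stand-ins for the question's Cython function and `DayTime` method (the `timezone` argument is dropped and the test value is taken from the question):

```python
from decimal import Decimal as d

def fractional2hms(fractional):
    # Same algorithm as the question's Cython function: split a
    # fractional day into [hours, minutes, seconds, ms, us, ns].
    total = d(fractional) * d(86400000000000)  # nanoseconds per day
    ms_mult = (d(3600000000000), d(60000000000), d(1000000000),
               d(1000000), d(1000), d(1))
    hms = []
    for mult in ms_mult:
        hms.append(int(total // mult))
        total %= mult
    return hms

def to_fractional(hour, minute, second, millisecond, microsecond, nanosecond):
    # The question's method with the corrected nanosecond term.
    output = d(hour) / d(24) + d(minute) / d(1440)
    output += d(second) / d(86400) + d(millisecond) / d(86400000)
    output += d(microsecond) / d(86400000000)
    output += d(nanosecond) / d(86400000000000)
    return output

frac = d('0.567784356873')
parts = fractional2hms(frac)   # [13, 37, 36, 568, 433, 827]
print(abs(to_fractional(*parts) - frac))
```

The remaining difference is only the truncated sub-nanosecond part (on the order of 1e-15), instead of the 7.1e-6 reported in the question.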

In addition, it is a bad idea to use floating-point literals in your code and convert them to Decimal . The literal is first rounded to binary floating-point precision, and the later conversion to Decimal cannot restore the lost accuracy. Try

 d = decimal.Decimal 

and use only integer literals (just remove the .0 part).
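The rounding loss is easy to see in the standard decimal module:

```python
from decimal import Decimal

# A float literal is rounded to binary floating point *before*
# Decimal ever sees it:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Integer (or string) literals stay exact:
print(Decimal(1) / Decimal(10))  # 0.1
```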

+4

See the "Libraries" section of the Wikipedia article Arbitrary-precision_arithmetic (Ctrl-F "Libraries" there).

EDIT: listing only the C++ and Python libraries from that reference (and dropping some that handle only integers, not floating-point numbers):

Python

1) mpmath


C++

1) apfloat

2) base class number

3) bigfloat

4) lidia

5) mapm

6) MIRACL

7) NTL

8) ttmath
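Of the Python entries above, mpmath is the easiest to try; a minimal sketch using the question's fraction (the precision setting is chosen arbitrarily):

```python
from mpmath import mp, mpf

mp.dps = 50  # work with 50 significant decimal digits
frac = mpf('0.567784356873')
total_ns = frac * mpf(86400000000000)  # nanoseconds per day
print(total_ns)  # ~49056568433827.2, carried to 50 digits
```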

+2
