100-digit floating point in Python

I have an irrational number, such as the square root of 2, and I want to look at its first 100 decimal digits. However, a float does not support this precision: printing one to 100 places just pads out the ~17 significant digits a double actually stores: 1.4142135623730951454746218587388284504413604736328125000000000000000000000000000000000000000000000000

What is the best way to do this? Preferably without importing anything.


If your accuracy requirement is 100 decimal digits, I think you need to use decimal.Decimal.

A Python float is not designed for this kind of high-precision calculation.

Using decimal.Decimal is almost as simple as using float: you can apply +, -, *, / and other common operations directly between Decimal values without any special handling.

In addition, decimal.Decimal lets you set the required precision directly:

 >>> from decimal import getcontext, Decimal
 >>> getcontext().prec = 6
 >>> Decimal(1) / Decimal(7)
 Decimal('0.142857')
 >>> getcontext().prec = 28
 >>> Decimal(1) / Decimal(7)
 Decimal('0.1428571428571428571428571429')
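Applied to the original question, the same approach gives the first 100 decimal digits of the square root of 2. Decimal provides a sqrt() method, and setting the precision to 101 significant digits leaves one digit before the decimal point and 100 after it:

```python
from decimal import Decimal, getcontext

# 101 significant digits = 1 integer digit + 100 fractional digits
getcontext().prec = 101
root2 = Decimal(2).sqrt()
print(root2)  # 1.4142135623730950488016887242...
```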

You can find more information in the Python 2 or Python 3 documentation for the decimal module.
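Since the question prefers not to import anything, here is a minimal sketch using only built-in integers: scale 2 by 10**200, take the integer square root with Newton's method, and read off the digits. Note this truncates rather than rounds at the 100th place, so the very last digit can differ from the correctly rounded value:

```python
def isqrt(n):
    """Integer square root of n by Newton's method (no imports needed)."""
    x = n
    y = (x + 1) // 2
    while y < x:          # converges monotonically down to floor(sqrt(n))
        x = y
        y = (x + n // x) // 2
    return x

# sqrt(2) to 100 decimal places: isqrt(2 * 10**200) has 101 digits
digits = str(isqrt(2 * 10**200))
print(digits[0] + '.' + digits[1:])
```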
