Holding large numbers in a numpy array

I have a dataset to which I am applying some arithmetic. The thing is, this produces relatively large numbers, and when I compute them with numpy, they come out as 0.

It's strange: when I calculate the numbers separately, they have int values; they only become zeros when I calculate them with numpy.

x = np.array([18,30,31,31,15])
10*150**x[0]/x[0]
Out[1]: 36298069767006890

vector = 10*150**x/x
vector
Out[2]: array([0, 0, 0, 0, 0])

I checked their types, and they match:

type(10*150**x[0]/x[0]) == type(vector[0])
Out[3]: True

How can I calculate these large numbers with numpy without them coming out as zeros?

Note that if we remove the factor of 10 at the start, the result changes slightly (but I think the cause is the same):

x = np.array([18,30,31,31,15])
150**x[0]/x[0]
Out[4]: 311075541538526549

vector = 150**x/x
vector
Out[5]: array([-329406144173384851, -230584300921369396, 224960293581823801,
   -224960293581823801, -368934881474191033])

The negative numbers indicate that the values overflowed the largest int64 integer, right?


By default, numpy stores numbers in native, fixed-size C types, unlike Python, whose int has unlimited precision. You can, however, tell numpy to store Python objects instead of native ctypes; the arithmetic is then delegated back to Python and no longer overflows:

In [14]: x = np.array([18,30,31,31,15], dtype=object)

In [15]: 150**x
Out[15]: 
array([1477891880035400390625000000000000000000L,
       191751059232884086668491363525390625000000000000000000000000000000L,
       28762658884932613000273704528808593750000000000000000000000000000000L,
       28762658884932613000273704528808593750000000000000000000000000000000L,
       437893890380859375000000000000000L], dtype=object)

With dtype=object, the actual arithmetic is performed by Python's arbitrary-precision int, not by numpy. The results are exact, but the computation is considerably slower, since numpy can no longer use its fast compiled loops and every element goes through Python-level operations.
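The same idea applied to the asker's full expression might look like the sketch below (an assumption on my part: floor division `//` is used so the results stay exact integers instead of floats):

```python
import numpy as np

# dtype=object makes numpy hold Python ints, which have
# arbitrary precision, so the large powers do not overflow.
# Floor division (//) keeps the exact integer results.
x = np.array([18, 30, 31, 31, 15], dtype=object)
vector = 10 * 150**x // x
print(vector[0])  # exact: no wrap-around, no zeros
```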


If exact integer results are not required, you can also switch to float, which trades precision for range:

In [19]: x = np.array([18,30,31,31,15], dtype=np.float64)

In [20]: 150**x
Out[20]: 
array([  1.47789188e+39,   1.91751059e+65,   2.87626589e+67,
         2.87626589e+67,   4.37893890e+32])
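To see why float works here, note that float64 ranges up to roughly 1.8e308 (reported by np.finfo), so even 150**31, about 2.9e67, fits comfortably; the trade-off is that only about 15-16 significant decimal digits are kept. A minimal sketch:

```python
import numpy as np

# float64 has a huge range (~1.8e308) but limited precision;
# the large powers fit, at the cost of rounding the low digits.
x = np.array([18, 30, 31, 31, 15], dtype=np.float64)
result = 10 * 150.0**x / x
print(np.finfo(np.float64).max)  # largest representable float64
print(result)                    # finite values, no overflow
```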

150 ** 28 already does not fit in int64 (it is on the order of 8e60, while even unsigned int64 tops out near 18e18).

Python integers have arbitrary precision; NumPy integers are fixed-size machine types.

That is why the numpy computation silently wraps around, while the plain Python int computation does not.
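You can check those limits yourself with np.iinfo, which reports the range of a fixed-size integer dtype; anything beyond it wraps around silently in numpy integer math:

```python
import numpy as np

# np.iinfo gives the min/max of a fixed-size integer dtype.
print(np.iinfo(np.int64).max)            # 9223372036854775807, ~9.2e18
# 150**28 is a plain Python int (~8.5e60), far beyond int64:
print(150**28 > np.iinfo(np.int64).max)  # True
```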

