Math error in log(det(AA^T) + 1) in Python

I am trying to estimate the average value of log(det(AA^T) + 1) in Python. My simple code works fine until I reach 17 × 17 matrices, at which point it gives me a math error. Here is the code:

import math
import numpy as np

iter = 10000
for n in xrange(1,20):
    h = n
    dets = []
    for _ in xrange(iter):
        # random matrix with entries -1 or +1
        A = (np.random.randint(2, size=(h,n)))*2-1
        detA_Atranspose = np.linalg.det(np.dot(A, A.transpose()))
        try:
            logdetA_Atranspose = math.log(detA_Atranspose+1,2)
        except ValueError:
            print "Ooops!", n, detA_Atranspose
        dets.append(logdetA_Atranspose)
    print np.mean(dets)

A is assumed to be a matrix with elements that are either -1 or 1.

What am I doing wrong and how can I fix this? What is special about 17?

1 answer

For the formula in the header (which formerly read log(det(AA^T))):

det(AA^T) for some random As may simply be 0. In that case the function fails, because log(0) is undefined.

In addition, the computed det(AA^T) can come out slightly negative due to floating-point rounding, even though AA^T is positive semidefinite and so, mathematically, det >= 0.
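
As a rough illustration of both points (my own sketch, not part of the original answer): the loop below draws random ±1 matrices, keeps the ones that are exactly singular, and looks at what np.linalg.det actually returns for det(AA^T). The exact numbers depend on the NumPy/LAPACK build, but they are typically not 0.0 and are often negative, sometimes by more than 1, which is what makes math.log(det + 1, 2) raise a ValueError:

import numpy as np

np.random.seed(0)  # arbitrary seed; results vary with the BLAS/LAPACK build
n = 17
bad_dets = []
for _ in xrange(20000):
    A = np.random.randint(2, size=(n, n)) * 2 - 1
    if np.linalg.matrix_rank(A) < n:
        # A is singular, so det(A A^T) is exactly 0 in exact arithmetic,
        # but the floating-point result is usually rounding noise instead
        bad_dets.append(np.linalg.det(np.dot(A, A.transpose())))

if bad_dets:
    print len(bad_dets), min(bad_dets), max(bad_dets)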

For the formula in the body of the question (logdet(1 + AA^T)):

You should probably be using numpy.linalg.slogdet(), e.g. slogdet(1 + A.dot(A.T)).

From the documentation of numpy.linalg.slogdet():

"Compute the sign and (natural) logarithm of the determinant of an array.

If an array has a very small or very large determinant, then a call to det may overflow or underflow. This routine is more robust against such issues, because it computes the logarithm of the determinant rather than the determinant itself."
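
A sketch of how the original loop could be rewritten along these lines (my own adaptation, not code from the answer). It follows the snippet above literally, i.e. it computes logdet(1 + A.dot(A.T)) with the 1 added entrywise, which is not exactly the same quantity as log(det(AA^T) + 1) from the question; slogdet also returns the natural logarithm, so a division by log(2) is needed for a base-2 result:

import numpy as np

iters = 10000
for n in xrange(1, 20):
    vals = []
    for _ in xrange(iters):
        A = np.random.randint(2, size=(n, n)) * 2 - 1
        # sign and natural log of |det(1 + A A^T)|, computed without forming
        # the (possibly huge) determinant itself
        sign, logdet = np.linalg.slogdet(1 + A.dot(A.T))
        if sign <= 0:
            continue  # singular (or numerically non-positive) case: skip it
        vals.append(logdet / np.log(2))  # convert natural log to base 2
    print n, np.mean(vals)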
