Left eigenvectors that do not give the correct (Markov) stationary probability in scipy

Given the following Markov matrix:

 import numpy, scipy.linalg
 A = numpy.array([[0.9, 0.1], [0.15, 0.85]])

A stationary probability exists and is equal to [0.6, 0.4] . This is easy to check by taking a large matrix power:

 B = A.copy()
 for _ in range(10):
     B = numpy.dot(B, B)

Here B[0] = [0.6, 0.4] . So far, so good. According to wikipedia :

The stationary probability vector is defined as a vector that does not change when the transition matrix is applied to it; that is, it is defined as the left eigenvector of the probability matrix associated with eigenvalue 1:
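This definition can be checked numerically for the matrix above (a minimal sketch; the candidate vector [0.6, 0.4] is taken from the matrix-power experiment):

```python
import numpy

A = numpy.array([[0.9, 0.1], [0.15, 0.85]])
pi = numpy.array([0.6, 0.4])

# A stationary vector is a left eigenvector with eigenvalue 1,
# i.e. pi @ A == pi up to floating-point error.
print(numpy.allclose(pi @ A, pi))  # True
```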

Therefore, I should be able to compute the left eigenvector of A with eigenvalue 1, which should also give me the stationary probability. The scipy.linalg.eig implementation has a left keyword:

 scipy.linalg.eig(A, left=True, right=False)

gives:

 (array([ 1.00+0.j,  0.75+0.j]),
  array([[ 0.83205029, -0.70710678],
         [ 0.5547002 ,  0.70710678]]))

This says the dominant left eigenvector is [0.83205029, 0.5547002] . Am I reading this wrong? How do I get [0.6, 0.4] from the eigendecomposition?

1 answer

[0.83205029, 0.5547002] is just [0.6, 0.4] multiplied by about 1.39.

Although from a “physical” point of view you want an eigenvector whose components sum to 1, scaling an eigenvector by any factor does not change its defining property:

If $\vec{v} A = \lambda \vec{v}$, then obviously $(\alpha \vec{v}) A = \lambda (\alpha \vec{v})$.
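This scaling invariance, and the normalization itself, can be verified numerically (a sketch; the eigenvector values are the ones printed by eig above):

```python
import numpy

A = numpy.array([[0.9, 0.1], [0.15, 0.85]])
v = numpy.array([0.83205029, 0.5547002])  # dominant left eigenvector from eig

# Scaling v by any alpha leaves the eigenvector equation intact:
# (alpha * v) @ A == 1.0 * (alpha * v)
alpha = 2.5
print(numpy.allclose((alpha * v) @ A, alpha * v))  # True

# Normalizing so the components sum to 1 recovers the stationary distribution:
print(v / v.sum())  # approximately [0.6, 0.4]
```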

So, to get [0.6, 0.4] , you have to do:

 >>> v = scipy.linalg.eig(A, left=True, right=False)[1][:, 0]
 >>> v
 array([ 0.83205029,  0.5547002 ])
 >>> v / sum(v)
 array([ 0.6,  0.4])

Source: https://habr.com/ru/post/1411363/
