As you can see in the source, in the simplest case (no masks, N variables with M observations each) it returns the (N, N) covariance matrix, calculated as:
(X - m) * (X - m).T.conj() / (M - 1)
where * is the matrix product and m is the per-variable mean of X [1].
It is implemented approximately like this:
    X -= X.mean(axis=1, keepdims=True)
    N = X.shape[1]
    fact = float(N - 1)
    return dot(X, X.T.conj()) / fact
(note that N here is the number of observations, X.shape[1], not the number of variables as above)
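To see that this simplified computation really matches `np.cov`, here is a minimal sketch (the array shape and random data are just illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 100))  # 3 variables, 100 observations per variable

# Reproduce the simplified computation from the answer
Xc = X - X.mean(axis=1, keepdims=True)  # subtract each variable's mean
fact = X.shape[1] - 1                   # unbiased divisor: observations - 1
manual = Xc @ Xc.T.conj() / fact

# Should agree with numpy's own result (default rowvar=True, bias=False)
assert np.allclose(manual, np.cov(X))
```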
If you want to view the source, look here rather than at the link from Mr E's answer, assuming you are not interested in masked arrays. As you mentioned, the documentation is sparse.
[1] which in this case is effectively (but not exactly) an outer product, because (X - m) has M column vectors of length N, and (X - m).T has those same vectors as rows. The end result is the sum of the outer products of corresponding columns. The same * gives an inner (scalar) product if the order is reversed. Technically, though, these are just standard matrix multiplications; a true outer product is specifically the product of a column vector by a row vector.
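The footnote's two claims can be checked directly: the product equals a sum of per-column outer products, and reversing the order yields inner products instead (shapes and data below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
Xc = rng.standard_normal((3, 5))  # stand-in for (X - m): 3 variables, 5 demeaned samples

# Xc @ Xc.T is the sum of the outer products of the 5 columns
outer_sum = sum(np.outer(Xc[:, k], Xc[:, k]) for k in range(Xc.shape[1]))
assert np.allclose(Xc @ Xc.T, outer_sum)

# Reversing the order gives inner (scalar) products of the columns instead
gram = Xc.T @ Xc
assert np.isclose(gram[0, 1], np.dot(Xc[:, 0], Xc[:, 1]))
```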