I am trying to write an implementation of the Wilson spectral density factorization algorithm [1] in Python. The algorithm iteratively factorizes a QxQ matrix-valued spectral density into its square root (it is essentially an extension of the Newton-Raphson square-root iteration to spectral density matrices).
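In outline, the core of my implementation looks like this (a minimal sketch rather than my exact code; the FFT-based plus operator, the frequency-grid layout, and the Cholesky-of-the-mean initialization are my own reading of the paper):

    import numpy as np

    def plus_operator(g):
        # g has shape (F, Q, Q): a matrix value at each of F evenly spaced
        # frequencies. Take the Fourier series along the frequency axis,
        # halve the zero-lag coefficient, drop the negative-lag ones, and
        # transform back (the [.]+ operator in Wilson's notation).
        F = g.shape[0]
        gamma = np.fft.ifft(g, axis=0)
        gamma[0] *= 0.5
        gamma[F // 2 + 1:] = 0.0
        return np.fft.fft(gamma, axis=0)

    def wilson_factorize(S, n_iter=100, tol=1e-12):
        # Iterate psi <- psi @ [psi^-1 S psi^-H + I]+ so that, at
        # convergence, S(f) = psi(f) psi(f)^H at every frequency.
        F, Q, _ = S.shape
        psi = np.tile(np.linalg.cholesky(S.mean(axis=0)), (F, 1, 1)).astype(complex)
        err = np.inf
        for _ in range(n_iter):
            psi_inv = np.linalg.inv(psi)
            g = psi_inv @ S @ np.conj(psi_inv).transpose(0, 2, 1) + np.eye(Q)
            psi = psi @ plus_operator(g)
            err = np.sum(np.abs(psi @ np.conj(psi).transpose(0, 2, 1) - S) ** 2)
            if err < tol:
                break
        return psi, err

In the scalar case this reduces to the familiar Newton-Raphson update psi <- (psi + S/psi) / 2, which is the analogy I had in mind above.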
The problem is that my implementation only converges for matrices of size 45x45 and smaller: for those, after 20 iterations the total squared difference between the matrices is about 2.45e-13. With a 46x46 input, however, it does not converge until around the 100th iteration, and for 47x47 or larger it never converges: the error hovers between 100 and 1000 for about 100 iterations and then starts to grow very quickly.
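This is roughly how I am driving it when I see the failure (the random test spectrum here is just the simplest Hermitian positive definite input I could construct for illustration, not my real data):

    rng = np.random.default_rng(0)
    F, Q = 256, 47  # 47x47 is where it stops converging for me
    X = rng.standard_normal((F, Q, Q)) + 1j * rng.standard_normal((F, Q, Q))
    # X @ X^H is Hermitian PSD at each frequency; adding Q*I keeps it
    # comfortably positive definite
    S = X @ np.conj(X).transpose(0, 2, 1) + Q * np.eye(Q)
    psi, err = wilson_factorize(S, n_iter=100)
    print(f"final squared error: {err:.3e}")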
How would you go about debugging something like this? There doesn't seem to be a specific point at which it goes wrong, and the matrices are too large for me to realistically work through the calculation by hand. Does anyone have any tips / tutorials / heuristics for tracking down subtle numerical errors like this?
I have never encountered anything like this before, but I hope some of you have.
Thanks, Dan
[1] G. T. Wilson, "The Factorization of Matricial Spectral Densities," SIAM J. Appl. Math., Vol. 23, No. 4, December 1972.