An efficient way to compute the determinant of an n! x n! matrix in Maple

I have a big matrix, n! x n!, whose determinant I need to compute. To each permutation of {1, ..., n} we associate

  • a vector of length 2n (this is easy to compute)
  • a polynomial in 2n variables (a product of linear factors, computed recursively on n)

The matrix is the evaluation matrix of the polynomials at the vectors (regarded as points). That is, the (sigma, tau) entry of the matrix (rows and columns indexed by permutations) is the polynomial for sigma evaluated at the vector for tau.

Example: for n = 3, if the ith polynomial is (x1 - 4)(x3 - 5)(x4 - 4)(x6 - 1) and the jth point is (2,2,1,3,5,2), then the (i,j)th entry of the matrix is (2 - 4)(1 - 5)(3 - 4)(2 - 1) = -8. Here n = 3, so the points live in R^(3!) = R^6 and the polynomials have 3! = 6 variables.
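
For concreteness, that single evaluation can be reproduced in Maple with eval and a substitution list (just a throwaway sanity check, written with the literal variable names x1, ..., x6):

 eval((x1 - 4)*(x3 - 5)*(x4 - 4)*(x6 - 1),
      [x1 = 2, x2 = 2, x3 = 1, x4 = 3, x5 = 5, x6 = 2]);   # returns -8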

My goal is to determine if the matrix is nonsingular.


My approach right now is this:

  • the point function takes a permutation and produces a vector
  • the poly function takes a permutation and produces a polynomial
  • the nextPerm function gives the next lexicographic permutation

An abbreviated, pseudocode version of my code looks like this:

 B := []; P := []; w := [1, 2, ..., n];
 while w <> NULL do
     B := B append poly(w);
     P := P append point(w);
     w := nextPerm(w);
 od;

 # BUILD THE MATRIX IN MAPLE
 M := Matrix(n!, (i, j) -> eval(B[i], P[j]));

 # COMPUTE THE DETERMINANT IN MAPLE
 det := LinearAlgebra[Determinant](M);

 # TELL ME IF IT IS NONSINGULAR
 if det = 0 then return false; else return true; fi;

I work in Maple and use the built-in LinearAlgebra[Determinant] function, but everything else is custom code built from low-level Maple functions (such as seq, convert and cat).
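
For reference, here is a minimal sketch of how that pipeline might look in actual Maple, assuming poly(w) returns a polynomial in indexed variables x[1], ..., x[m], point(w) returns a list of m values, and n and m are already set (poly, point and m stand in for my real routines, which are not shown):

 with(LinearAlgebra): with(combinat):
 perms := permute(n):                        # all n! permutations of [1, ..., n]
 N := nops(perms):
 B := [seq(poly(perms[i]), i = 1 .. N)]:     # polynomials, computed once each
 P := [seq(point(perms[j]), j = 1 .. N)]:    # points, computed once each
 # entry (i, j) = polynomial i evaluated at point j
 M := Matrix(N, N, (i, j) -> eval(B[i], [seq(x[k] = P[j][k], k = 1 .. m)])):
 isNonsingular := evalb(Determinant(M) <> 0);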

My problem is that this takes too long. I have the patience to wait it out up to n = 7, but n = 8 takes a few days. Ideally, I want to be able to get to n = 10.

Does anyone have an idea how I could improve the running time? I am open to working in another language, for example Matlab or C, but I would rather find a way to speed it up in Maple.

I understand that this can be difficult to answer without more details, but the code for the individual functions (e.g. point and poly) is already optimized, so the real question is whether there is a faster way to take the determinant, perhaps by building the matrix "on the fly" or something like that.


UPDATE: Here are two ideas that I played with that don't work:

  • I can store the polynomials (since they take time to compute, I don't want to recompute them if I can help it) in a vector of length n!, compute the points on the fly, and plug these values into the permutation (Leibniz) formula for the determinant:

    det(M) = sum over all permutations s of {1, ..., N} of sign(s) * M[1, s(1)] * M[2, s(2)] * ... * M[N, s(N)]

    The problem is that this is O(N!) in the size N of the matrix, so in my case it is O((n!)!). For n = 10 that is (n!)! = 3,628,800!, which is far too large to even consider.

  • Compute the determinant using an LU decomposition. Fortunately, the main diagonal of my matrix is non-zero, so this is possible. Since this is O(N^3) in the size N of the matrix, it becomes O((n!)^3), which is much closer to feasible. The problem, however, is that it requires storing the entire matrix (for n = 10 that is (10!)^2 ≈ 1.3 x 10^13 entries, on the order of 100 TB even at 8 bytes per entry), which is a serious memory burden, never mind the time. So that won't work either, at least not without being a bit smarter about it. Any ideas?

2 answers

I am not sure whether your problem is space or time; obviously the two trade off against each other. If you only want to know whether the determinant is nonzero, then you should definitely go with the LU decomposition. The reason is that if A = LU with L lower triangular and U upper triangular, then

 det(A) = det(L) det(U) = l_11 * ... * l_nn * u_11 * ... * u_nn 

so you only need to check whether any of the main diagonal entries of L or U is 0.

For simplicity, use the Doolittle algorithm, where l_ii = 1. If at any point the algorithm breaks down (you would have to divide by u_kk = 0), the matrix is singular, so you can stop. Here's the gist:

 for k := 1, 2, ..., n do {
     for j := k, k+1, ..., n do {
         u_kj := a_kj - sum_{s=1..k-1} l_ks * u_sj;
     }
     for i := k+1, k+2, ..., n do {
         l_ik := (a_ik - sum_{s=1..k-1} l_is * u_sk) / u_kk;
     }
 }

The key point is that you can compute the kth row of U and the kth column of L at the same time, and you only need the previously computed rows and columns to move forward. So you pipeline the process as much as you can and store only as much as you need. Since you can compute the entries a_ij as needed, this only requires you to keep two vectors of length n while generating two more vectors of length n (the rows of U, the columns of L). The algorithm takes n^2 time. You may be able to find some more tricks, but that depends on your space/time trade-off.
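
Here is a minimal Maple sketch of the early-exit idea, assuming a procedure getEntry(i, j) that produces the (i, j) matrix entry on demand (getEntry is a placeholder for however you generate entries, e.g. evaluating a stored polynomial at a point); for clarity it stores L and U in full, even though you can discard what you no longer need:

 isNonsingularLU := proc(N, getEntry)
     local L, U, k, i, j, s;
     L := Matrix(N, N);  U := Matrix(N, N);
     for k to N do
         # kth row of U
         for j from k to N do
             U[k, j] := getEntry(k, j) - add(L[k, s]*U[s, j], s = 1 .. k-1);
         end do;
         if U[k, k] = 0 then return false end if;   # zero pivot: stop and report singular
         L[k, k] := 1;                               # Doolittle convention l_kk = 1
         # kth column of L
         for i from k+1 to N do
             L[i, k] := (getEntry(i, k) - add(L[i, s]*U[s, k], s = 1 .. k-1)) / U[k, k];
         end do;
     end do;
     return true;                                    # all pivots nonzero
 end proc:

With integer entries the arithmetic stays exact (Maple works with rationals), so the U[k, k] = 0 test is a genuine zero test, at the cost of potentially large intermediate rationals; with floating-point entries you would compare against a tolerance instead.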


Not sure I followed your issue; is it (or does it reduce to) the following?

You have two vectors of n numbers, call them x and c; the matrix entries are products over k of (x_k + c_k), and each row/column corresponds to a different ordering of x and c?

If so, then I believe the matrix will be singular whenever there are duplicate values within x or within c, since the matrix will then have repeated rows/columns. Try a batch of Monte Carlo runs at smaller n with distinct values of x and c to see whether that case is always nonsingular - chances are that if it is true for 6, it will be true for 10.
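
For instance, a quick-and-dirty Maple experiment along these lines, under one possible reading of the structure above (entry (i, j) = product over k of (x[p_i(k)] + c[p_j(k)]) for permutations p_i, p_j; that reading, the choice n = 4 and the random ranges are just illustrative guesses):

 with(LinearAlgebra): with(combinat):
 n := 4:
 perms := permute(n):  N := nops(perms):
 roll := rand(1 .. 100):                  # random integer entries for x and c
 for trial to 10 do
     x := [seq(roll(), k = 1 .. n)]:      # values will usually be distinct
     c := [seq(roll(), k = 1 .. n)]:
     M := Matrix(N, N, (i, j) -> mul(x[perms[i][k]] + c[perms[j][k]], k = 1 .. n)):
     print(trial, evalb(Determinant(M) <> 0));
 end do: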

As for brute force, of your two methods:

  • the permutation-formula approach is a non-starter;
  • the LU approach will run much faster (it should take only a few seconds for n = 7), though you could try an SVD instead of LU, which would also tell you how well-conditioned your matrix is.
