CHOLMOD in Java

I have already asked something like this, but this time I will be more specific.

In a for loop, I need to compute the Cholesky factorization of a fairly large, positive definite, symmetric matrix (about 1000x1000). To do this, I have tried:

1) Apache Math Library

2) Parallel Colt Library

3) JLapack library

With all three libraries mentioned above, the time taken is very large compared with MATLAB, for example.

So I'm wondering whether there is a highly optimized external tool for Cholesky factorization usable from Java. I thought, for example, of CHOLMOD, which is called internally by MATLAB and other tools.

I would really appreciate any feedback on this.
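For context, the textbook dense Cholesky factorization that these libraries compute can be sketched in plain Java as follows (the class and method names here are illustrative, not taken from any of the libraries above):

```java
public class DenseCholesky {
    // Textbook Cholesky-Crout factorization: A = L * L^T for a symmetric
    // positive definite matrix A. Runs in O(n^3) time.
    static double[][] cholesky(double[][] a) {
        int n = a.length;
        double[][] l = new double[n][n];
        for (int j = 0; j < n; j++) {
            double d = a[j][j];
            for (int k = 0; k < j; k++) d -= l[j][k] * l[j][k];
            l[j][j] = Math.sqrt(d); // produces NaN if A is not positive definite
            for (int i = j + 1; i < n; i++) {
                double s = a[i][j];
                for (int k = 0; k < j; k++) s -= l[i][k] * l[j][k];
                l[i][j] = s / l[j][j];
            }
        }
        return l;
    }

    public static void main(String[] args) {
        double[][] l = cholesky(new double[][]{{4, 2}, {2, 3}});
        // L = [[2, 0], [1, sqrt(2)]]
        System.out.println(l[0][0] + " " + l[1][0] + " " + l[1][1]);
    }
}
```

The n^3 inner-loop cost is why the choice of library (and whether it exploits sparsity) matters so much at this matrix size.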

+4
java matrix-decomposition
Jun 11 '13 at 14:26
2 answers

Here is a good summary of some BLAS libraries for Java: performance-of-java-matrix-math-libraries. You can also see benchmarks of many of these libraries at [Java-Matrix-Benchmark](https://lessthanoptimal.imtqy.com/Java-Matrix-Benchmark/).

However, in my experience most of these libraries are not tuned for solving large sparse matrices. In my case, I implemented the solution using Eigen via JNI.

The Eigen documentation discusses its linear solvers well, including one based on CHOLMOD.

In my case, for an 8860x8860 sparse matrix, using an Eigen solver via JNI was 20 times faster than Parallel Colt and 10 times faster than my own dense solver. More importantly, it appears to scale like n^2 rather than n^3, and it uses much less memory than my dense solver (which ran out of memory).

In fact, there is a Java wrapper for Eigen called JEigen that uses JNI. However, it does not implement the sparse matrix solvers, so it does not wrap everything.

I originally used JNA, but was not happy with the overhead. Wikipedia has a good example of how to use JNI. After writing the native function declarations and compiling them with javac, you use javah to create a header file for the C++ side.

For example, for this Java class:

    // Cholesky.java
    package cfd.optimisation;

    // ri, ci, v : matrix row indices, column indices, and values
    // y = A x where A is an n x n matrix with nnz non-zero values
    public class Cholesky {
        private static native void solve_eigenLDLTx(int[] ri, int[] ci, double[] v,
                double[] x, double[] y, int n, int nnz);
    }

javah creates the header file cfd_optimisation_Cholesky.h with the declaration

    JNIEXPORT void JNICALL Java_cfd_optimisation_Cholesky_solve_1eigenLDLTx
      (JNIEnv *, jclass, jintArray, jintArray, jdoubleArray, jdoubleArray,
       jdoubleArray, jint, jint);

And this is how I implemented the solver:

    JNIEXPORT void JNICALL Java_cfd_optimisation_Cholesky_solve_1eigenLDLTx(
            JNIEnv *env, jclass obj,
            jintArray arrri, jintArray arrci, jdoubleArray arrv,
            jdoubleArray arrx, jdoubleArray arry, jint jn, jint jnnz) {
        int n = jn;
        int nnz = jnnz;
        int *ri = (int*)env->GetPrimitiveArrayCritical(arrri, 0);
        int *ci = (int*)env->GetPrimitiveArrayCritical(arrci, 0);
        double *v = (double*)env->GetPrimitiveArrayCritical(arrv, 0);
        double *x = (double*)env->GetPrimitiveArrayCritical(arrx, 0);
        double *y = (double*)env->GetPrimitiveArrayCritical(arry, 0);

        Eigen::SparseMatrix<double> A = colt2eigen(ri, ci, v, nnz, n);
        //Eigen::MappedSparseMatrix<double> A(n, n, nnz, ri, ci, v);

        Eigen::VectorXd a(n), b(n);
        for (int i = 0; i < n; i++) a(i) = x[i];
        //a = Eigen::Map<Eigen::VectorXd>(x, n).cast<double>();

        Eigen::SimplicialCholesky<Eigen::SparseMatrix<double> > solver;
        solver.setMode(Eigen::SimplicialCholeskyLDLT);
        b = solver.compute(A).solve(a);

        for (int i = 0; i < n; i++) y[i] = b(i);

        env->ReleasePrimitiveArrayCritical(arrri, ri, 0);
        env->ReleasePrimitiveArrayCritical(arrci, ci, 0);
        env->ReleasePrimitiveArrayCritical(arrv, v, 0);
        env->ReleasePrimitiveArrayCritical(arrx, x, 0);
        env->ReleasePrimitiveArrayCritical(arry, y, 0);
    }

The colt2eigen function creates a sparse matrix from two integer arrays containing the row and column indices, and a double array of the values.

    Eigen::SparseMatrix<double> colt2eigen(int *ri, int *ci, double *v, int nnz, int n) {
        std::vector<Eigen::Triplet<double>> tripletList;
        for (int i = 0; i < nnz; i++)
            tripletList.push_back(Eigen::Triplet<double>(ri[i], ci[i], v[i]));

        Eigen::SparseMatrix<double> m(n, n);
        m.setFromTriplets(tripletList.begin(), tripletList.end());
        return m;
    }

One of the trickier parts was getting these arrays out of Java and Colt. Here is how I did it:

    // y = A x: x and y are double[] arrays and A is a DoubleMatrix2D
    int nnz = A.cardinality();
    DoubleArrayList v = new DoubleArrayList(nnz);
    IntArrayList ci = new IntArrayList(nnz);
    IntArrayList ri = new IntArrayList(nnz);
    A.forEachNonZero((row, column, value) -> {
        v.add(value);
        ci.add(column);
        ri.add(row);
        return value;
    });
    Cholesky.solve_eigenLDLTx(ri.elements(), ci.elements(), v.elements(), x, y, n, nnz);
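If you are not using Colt, the same triplet (COO) arrays can be assembled from a plain double[][] with ordinary lists. A minimal sketch (class and method names are illustrative, not from Colt or Eigen):

```java
import java.util.ArrayList;
import java.util.List;

public class Triplets {
    // Filled by toTriplets: row indices, column indices, and values of the
    // non-zero entries, in the layout solve_eigenLDLTx expects.
    static int[] ri, ci;
    static double[] v;

    // Scan a dense matrix row by row and collect its non-zeros in
    // coordinate (triplet) format; returns nnz, the non-zero count.
    static int toTriplets(double[][] a) {
        List<int[]> idx = new ArrayList<>();
        List<Double> val = new ArrayList<>();
        for (int r = 0; r < a.length; r++)
            for (int c = 0; c < a[r].length; c++)
                if (a[r][c] != 0.0) {
                    idx.add(new int[]{r, c});
                    val.add(a[r][c]);
                }
        int nnz = val.size();
        ri = new int[nnz]; ci = new int[nnz]; v = new double[nnz];
        for (int k = 0; k < nnz; k++) {
            ri[k] = idx.get(k)[0];
            ci[k] = idx.get(k)[1];
            v[k] = val.get(k);
        }
        return nnz;
    }

    public static void main(String[] args) {
        int nnz = toTriplets(new double[][]{{4, 0}, {2, 3}});
        System.out.println(nnz); // 3 non-zeros: (0,0,4), (1,0,2), (1,1,3)
    }
}
```

For a genuinely sparse problem you would of course build the triplets directly from your sparse storage rather than scanning a dense array.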
+4
May 29 '15 at 9:30

I haven’t worked with any of these tools, but my suspicion is that you are seeing slow performance because Java doesn’t use the native floating-point square root instruction in some versions / on some platforms.

See: Where can I find the source code for the Java square root function?

If absolute precision is not a requirement, you could try switching one of the aforementioned implementations to a square root approximation (see Fast sqrt in Java trading precision for suggestions), which should be somewhat faster.
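As an illustration of that trade-off, one common technique is a bit-level initial guess refined by Newton-Raphson steps. This is a sketch of the general idea, not the exact code from the linked question:

```java
public class FastSqrt {
    // Approximate sqrt: halve the IEEE-754 exponent via bit manipulation to
    // get a rough initial guess, then refine with two Newton-Raphson steps.
    static double fastSqrt(double x) {
        // bits/2 plus an offset of 2^61 - 2^51 restores the exponent bias.
        double g = Double.longBitsToDouble(
                (Double.doubleToLongBits(x) >> 1) + (1L << 61) - (1L << 51));
        g = 0.5 * (g + x / g); // Newton step 1
        g = 0.5 * (g + x / g); // Newton step 2
        return g;
    }

    public static void main(String[] args) {
        System.out.println(fastSqrt(2.0) + " vs " + Math.sqrt(2.0));
    }
}
```

Whether this actually beats Math.sqrt depends heavily on the JVM: on platforms where Math.sqrt is intrinsified to the hardware instruction, the approximation may well be slower, so benchmark before committing to it.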

+1
Jun 11 '13 at 14:42
