Can you compute an approximate Jacobian in parallel in Julia without special array types?

In Julia, I want to compute an approximate Jacobian of a vector function f(x) whose evaluation is expensive. Evaluating the Jacobian column by column is obviously parallel in concept. My question is: can this be done in Julia without resorting to DistributedArray, SharedArray, etc.?

For example, suppose you have code:

    function Jacob(f::Function, x)
        eps = 1e-7
        delta = eps * eye(length(x))
        J = zeros(length(x), length(x))
        for i = 1:length(x)
            J[:, i] = (f(x + delta[:, i]) - f(x - delta[:, i])) / 2 / eps
        end
        J
    end
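For reference, a minimal serial sketch of the same central-difference Jacobian in current (1.x) Julia, where `eye` no longer exists, might look like this (the name `jacobian_fd` is my own):

```julia
# Central-difference Jacobian: perturb one coordinate at a time
# instead of building a full identity matrix with eye().
function jacobian_fd(f, x; eps=1e-7)
    n = length(x)
    J = zeros(n, n)
    for i in 1:n
        dx = zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    end
    return J
end

# Example from the question: f(x) = x.^2 + cos(x),
# whose exact Jacobian is diagonal with entries 2x - sin(x).
x = [0.5, 1.0, 1.5]
J = jacobian_fd(v -> v.^2 .+ cos.(v), x)
```

The diagonal entries of `J` should agree with `2 .* x .- sin.(x)` to within roughly the square of the step size.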

Is it possible to parallelize this the same way the manual parallelizes the summation of 200,000,000 random coin flips? That is, something equivalent to

    nheads = @parallel (+) for i = 1:200000000
        int(randbool())
    end
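(As an aside for readers on current Julia: `@parallel`, `randbool`, and `int` were all removed after 0.6. A sketch of the manual's coin-flip reduction in modern syntax, under the assumption that the `Distributed` standard library is loaded, would be:)

```julia
using Distributed

# Modern replacement for @parallel (+): Distributed.@distributed with
# a (+) reduction. With no worker processes it simply runs serially.
nheads = @distributed (+) for i = 1:200_000_000
    Int(rand(Bool))
end
```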

I tried this:

    function Jacob(f::Function, x)
        require("testfunc.jl")
        eps = 1e-7
        delta = eps * eye(length(x))
        J = zeros(length(x), length(x))
        J = @parallel (+) for i = 1:length(x)
            J[:, i] = (f(x + delta[:, i]) - f(x - delta[:, i])) / 2 / eps
            J
        end
        J
    end

where "testfunc.jl" is the name of the file containing this code along with the definition of f itself. When I tried it on f(x) = x.^2 + cos(x), I got a matrix with the correct (diagonal) structure, but the values did not match those given by the non-parallel code (whose correctness I can confirm). Further investigation shows that some entries of the resulting Jacobian are multiplied by 2 or 3 when running with julia -p 4.

Is the approach I described workable (and does it just need adjustment to prevent duplicate evaluations)? If not, is there another way to evaluate the Jacobian without using the more complex special array types?

It seems that adding J = zeros(n, n) as the first statement inside the parallel for loop fixes the duplication problem. Can the same be achieved without resorting to such brute-force zeroing of the array J?

parallel-processing julia-lang
1 answer

What I understand from the above code is that when you write:

    J = zeros(length(x), length(x))
    J = @parallel (+) for i = 1:length(x)
        J[:, i] = (f(x + delta[:, i]) - f(x - delta[:, i])) / 2 / eps
        J
    end

Julia sends a copy of J to each worker process, evaluates the loop body there, and then sums the results together, so entries present in more than one copy of J get added multiple times. I think the better and more efficient way is to avoid sending J between processes and do the following:

    @parallel (+) for i = 1:length(x)
        J = zeros(length(x), length(x))
        J[:, i] = (f(x + delta[:, i]) - f(x - delta[:, i])) / 2 / eps
        J
    end

With the code above, each worker operates on a fresh J, so the summation returns the correct answer.
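For completeness, a hedged sketch of this same pattern in post-0.7 Julia, where `@parallel` became `Distributed.@distributed` (the names `jac_column` and `jacob_par` are my own, and the worker count is an assumption):

```julia
using Distributed
nprocs() == 1 && addprocs(2)  # assumption: two extra workers; adjust as needed

# The helper must be defined on every worker, hence @everywhere.
@everywhere function jac_column(f, x, i, eps)
    n = length(x)
    J = zeros(n, n)           # fresh J per evaluation, as the answer suggests
    dx = zeros(n)
    dx[i] = eps
    J[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J
end

function jacob_par(f, x; eps=1e-7)
    # The (+) reduction assembles the single-column matrices into the
    # full Jacobian, with no SharedArray or DistributedArray involved.
    @distributed (+) for i = 1:length(x)
        jac_column(f, x, i, eps)
    end
end

x = [0.5, 1.0]
J = jacob_par(v -> v.^2 .+ cos.(v), x)
```

Anonymous functions are serialized to workers automatically, which is why only the named helper needs `@everywhere`.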

