In Julia, I want to calculate an approximate Jacobian of a vector function f(x) that is expensive to evaluate. The evaluation of the Jacobian is naturally parallel in concept, since each column can be computed independently. My question is: can this be done in Julia without resorting to DistributedArray, SharedArray, etc.?
For example, suppose you have this code:
    function Jacob(f::Function, x)
        eps = 1e-7
        delta = eps * eye(length(x))
        J = zeros(length(x), length(x))
        for i = 1:length(x)
            J[:,i] = (f(x + delta[:,i]) - f(x - delta[:,i])) / 2 / eps
        end
        J
    end
Is it possible to parallelize this the same way you can parallelize the summation of 200,000,000 random coin flips in the manual? That is, something equivalent to:
    nheads = @parallel (+) for i = 1:200000000
        int(randbool())
    end
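The key semantics of this reduction form is that the value of the *last expression in each iteration* is what gets fed to `(+)`; no variable is shared across workers. A minimal sketch of that behavior (the variable `s` is my own illustration, not from the manual):

```julia
# each iteration's final expression is the value that gets reduced;
# nothing is mutated in shared memory
s = @parallel (+) for i = 1:10
    i^2    # this value is summed across iterations
end
# s == 1 + 4 + 9 + ... + 100 == 385
```

This also works with a single process, where the loop simply runs locally.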
I tried this:
    function Jacob(f::Function, x)
        require("testfunc.jl")
        eps = 1e-7
        delta = eps * eye(length(x))
        J = zeros(length(x), length(x))
        J = @parallel (+) for i = 1:length(x)
            J[:,i] = (f(x + delta[:,i]) - f(x - delta[:,i])) / 2 / eps
            J
        end
        J
    end
where "testfunc.jl" is the name of the file containing this code as well as the definition of f itself. When I tried it with f(x) = x.^2 + cos(x), I got a matrix of the correct (diagonal) shape, but the values did not match those produced by the non-parallel code (whose correctness I can confirm). Further investigation showed that some entries of the Jacobian end up multiplied by 2 or 3 when running with julia -p 4.
Is the approach I described viable (and does it just require adjustment to prevent duplicate evaluations)? If not, is there another way to evaluate the Jacobian in parallel without using more complex special array types?
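One alternative that avoids both special array types and the shared-state problem is `pmap`, which farms each column out to a worker and collects the results on the master. A sketch under the same assumptions as the code above (the function name `jacob_pmap` and the `hcat` assembly are my additions, not from the original code):

```julia
# sketch: per-column central differences via pmap
function jacob_pmap(f::Function, x)
    n = length(x)
    eps = 1e-7
    delta = eps * eye(n)                 # columns are the perturbation vectors
    # each worker computes one column; pmap returns them in order
    cols = pmap(i -> (f(x + delta[:,i]) - f(x - delta[:,i])) / (2 * eps), 1:n)
    hcat(cols...)                        # assemble columns into the Jacobian
end
```

As with the `@parallel` version, f must be defined on every worker (e.g. via `@everywhere` or a `require` of the defining file).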
It seems that adding J = zeros(n, n) as the first statement inside the parallel for loop fixes the duplication problem. Is it possible to achieve the same result without resorting to such brute-force clearing of the array J?
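The zeroing works because `@parallel (+)` sums the matrix returned by each iteration, and without the reset a worker's local copy of J still holds the columns written by its earlier iterations, so those columns get counted again in the reduction. The same fix reads more naturally as building a fresh per-iteration contribution rather than mutating the copied J; a sketch under the same assumptions as the original code (the name `jacob_parallel` is mine):

```julia
function jacob_parallel(f::Function, x)
    n = length(x)
    eps = 1e-7
    delta = eps * eye(n)
    @parallel (+) for i = 1:n
        Ji = zeros(n, n)     # fresh matrix: only column i is nonzero
        Ji[:,i] = (f(x + delta[:,i]) - f(x - delta[:,i])) / (2 * eps)
        Ji                   # this contribution is what (+) reduces
    end
end
```

Each iteration now contributes exactly one column, so the reduction sums disjoint matrices and no entry is counted twice.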