I am trying to rewrite a MATLAB `fmincon` optimization in Julia.
Here is the Matlab code:
```matlab
function [x,fval] = example3()
    x0 = [0; 0; 0; 0; 0; 0; 0; 0];
    A  = [];
    b  = [];
    Ae = [1000 1000 1000 1000 -1000 -1000 -1000 -1000];
    be = [100];
    lb = [0; 0; 0; 0; 0; 0; 0; 0];
    ub = [1; 1; 1; 1; 1; 1; 1; 1];
    noncon = [];   % no nonlinear constraints
    options = optimset('fmincon');
    options.Algorithm = 'interior-point';
    [x,fval] = fmincon(@objfcn,x0,A,b,Ae,be,lb,ub,noncon,options);
end

function f = objfcn(x)
    % user inputs
    Cr = [0.0064  0.00408 0.00192 0;
          0.00408 0.0289  0.0204  0.0119;
          0.00192 0.0204  0.0576  0.0336;
          0       0.0119  0.0336  0.1225];
    w0 = [0.3; 0.3; 0.2; 0.1];
    Er = [0.05; 0.1; 0.12; 0.18];

    % calculate objective function
    w = w0 + x(1:4) - x(5:8);
    Er_p = w'*Er;
    Sr_p = sqrt(w'*Cr*w);

    % f = objective function
    f = -Er_p/Sr_p;
end
```
and here is my Julia code:
```julia
using JuMP
using Ipopt

m = Model(solver=IpoptSolver())
```
The Julia optimization works when I explicitly write out the objective function inside `@setNLObjective`. However, this is not suitable, because the user inputs (`Cr`, `w0`, `Er`) may change, which changes the objective function, as you can see from how the objective function is constructed.
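For reference, here is the kind of explicit, scalar-operations-only formulation that JuMP does accept. This is only a sketch in the legacy JuMP syntax that matches the `@setNLObjective` macro in the question (`@defVar`, `@addConstraint`, `sum{}`); it will not run on current JuMP versions, which use `@variable`, `@constraint`, `@NLobjective`, and `sum(... for ...)`. Note that the *data* arrays can still be indexed inside the macro; only the operations on decision variables must be written as explicit scalar sums:

```julia
using JuMP, Ipopt

# user inputs (same data as the MATLAB code)
Cr = [0.0064  0.00408 0.00192 0;
      0.00408 0.0289  0.0204  0.0119;
      0.00192 0.0204  0.0576  0.0336;
      0       0.0119  0.0336  0.1225]
w0 = [0.3, 0.3, 0.2, 0.1]
Er = [0.05, 0.1, 0.12, 0.18]

m = Model(solver=IpoptSolver())
@defVar(m, 0 <= x[1:8] <= 1)
@addConstraint(m, sum{1000*x[i], i=1:4} - sum{1000*x[i], i=5:8} == 100)

# w[i] = w0[i] + x[i] - x[i+4], substituted inline as scalar expressions
@setNLObjective(m, Min,
    -sum{Er[i]*(w0[i] + x[i] - x[i+4]), i=1:4} /
     sqrt(sum{Cr[i,j]*(w0[i] + x[i] - x[i+4])*(w0[j] + x[j] - x[j+4]),
              i=1:4, j=1:4}))

solve(m)
```

Because `Cr`, `w0`, and `Er` appear only as indexed data, changing the user inputs does not require rewriting the macro call, only re-running it with new arrays.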
The problem seems to be a limitation of JuMP in how an objective function can be passed to the `@setNLObjective` macro:
"All expressions must be simple scalar operations. You cannot use `dot`, matrix-vector products, vector slices, etc. Translate vector operations into explicit `sum{}` operations."
Is there any way around this? Or is there another Julia package that can solve this, bearing in mind that I will not have a Jacobian or Hessian available?
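One direction I have considered (a sketch, not a confirmed solution): packages such as Optim.jl or NLopt.jl accept an ordinary Julia function as the objective and can use finite differences or automatic differentiation internally, so no hand-coded Jacobian or Hessian is needed. The MATLAB `objfcn` then translates almost line for line into plain Julia, with the matrix operations intact:

```julia
using LinearAlgebra  # for dot(); built into Base on older Julia versions

# user inputs (same data as the MATLAB code)
const Cr = [0.0064  0.00408 0.00192 0;
            0.00408 0.0289  0.0204  0.0119;
            0.00192 0.0204  0.0576  0.0336;
            0       0.0119  0.0336  0.1225]
const w0 = [0.3, 0.3, 0.2, 0.1]
const Er = [0.05, 0.1, 0.12, 0.18]

# direct translation of the MATLAB objfcn: vector ops are fine here,
# because this is just a plain Julia function, not a JuMP macro
function objfcn(x)
    w = w0 + x[1:4] - x[5:8]
    Er_p = dot(w, Er)
    Sr_p = sqrt(dot(w, Cr*w))
    return -Er_p / Sr_p
end
```

A function like this could then be handed to a derivative-free or AD-based solver; the linear equality and the box bounds would still need to be expressed in whatever form that solver expects.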
Thank you very much.