NLopt SLSQP discards a good solution in favor of an older, worse one

I am solving a standard portfolio optimization problem from finance. The vast majority of the time, NLopt returns a reasonable solution. In rare cases, however, the SLSQP algorithm appears to iterate towards the correct solution and then, for no obvious reason, returns a clearly suboptimal solution from about a third of the way through the iterative process. Interestingly, changing the initial parameter vector by a tiny amount can make the problem go away.
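
For reference, the problem the code below encodes is the standard mean-variance trade-off, restated here only for readability (w is the vector of portfolio weights, mu is meanVec and Sigma is covMat in the code):

    \max_{w} \; \mu^\top w - \tfrac{1}{2}\, w^\top \Sigma\, w
    \quad \text{s.t.} \quad \sum_i w_i = 1, \quad 0 \le w_i \le 1 \; (\text{with } w_3 \ge 0.05)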

I have managed to isolate a relatively simple working example of the behavior I am talking about. Apologies that the numbers are a bit messy; that was the best I could do. The following code can be pasted into the Julia REPL and run; it prints the objective function value and the parameter vector each time NLopt calls the objective function. I call the optimization routine twice. If you scroll back through the printed output, you will notice that on the first run the routine iterates to a solution with an objective value of 0.0022, but then, for no apparent reason, falls back to a much earlier solution with an objective value of 0.0007 and returns that instead. On the second run I use a slightly different starting parameter vector. The routine again iterates to the same good solution, but this time it actually returns it, with an objective value of 0.0022.

So the question is: does anyone know why, in the first case, SLSQP discards a good solution in favor of a much poorer one from about a third of the way through the iterative process? And if so, how can I fix it?

    #-------------------------------------------
    # Load packages
    using NLopt
    using LinearAlgebra   # dot() lives in LinearAlgebra on Julia >= 0.7

    # Objective function for the portfolio optimisation problem
    # (maximise expected return minus one half of the variance)
    function obj_func!(param::Vector{Float64}, grad::Vector{Float64}, meanVec::Vector{Float64}, covMat::Matrix{Float64})
        if length(grad) > 0
            tempGrad = meanVec - covMat * param
            for j = 1:length(grad)
                grad[j] = tempGrad[j]
            end
            println("Gradient vector = " * string(grad))
        end
        println("Parameter vector = " * string(param))
        fOut = dot(param, meanVec) - (1/2)*dot(param, covMat*param)
        println("Objective function value = " * string(fOut))
        return(fOut)
    end

    # Standard equality constraint for the portfolio optimisation problem (weights sum to one)
    function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            for j = 1:length(grad)
                grad[j] = 1.0
            end
        end
        return(sum(param) - 1.0)
    end

    # Call the optimisation process with the appropriate input parameters
    function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
        opt1 = Opt(:LD_SLSQP, length(meanVec))
        lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
        upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
        equality_constraint!(opt1, eq_con!)
        ftol_rel!(opt1, 0.000001)
        fObj = ((param, grad) -> obj_func!(param, grad, meanVec, covMat))
        max_objective!(opt1, fObj)
        (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
        println("Returned parameter vector = " * string(paramOpt))
        println("Returned objective function = " * string(fObjOpt))
    end

    #-------------------------------------------
    # Inputs to the optimisation
    meanVec = [0.00238374894628471, 0.0006879970888824095, 0.00015027322404371585, 0.0008440624572209092, -0.004949409024535505, -0.0011493778903180567]
    covMat = [8.448145928621056e-5 1.9555283947528615e-5 0.0 1.7716366331331983e-5 1.5054664977783003e-5 2.1496436765051825e-6;
              1.9555283947528615e-5 0.00017068536691928327 0.0 1.4272576023325365e-5 4.2993023110905543e-5 1.047156519965148e-5;
              0.0 0.0 0.0 0.0 0.0 0.0;
              1.7716366331331983e-5 1.4272576023325365e-5 0.0 6.577888700124854e-5 3.957059294420261e-6 7.365234067319808e-6;
              1.5054664977783003e-5 4.2993023110905543e-5 0.0 3.957059294420261e-6 0.0001288060347757139 6.457128839875466e-6;
              2.1496436765051825e-6 1.047156519965148e-5 0.0 7.365234067319808e-6 6.457128839875466e-6 0.00010385067478418426]
    paramInit = [0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0]

    # Call the optimisation function
    do_opt(meanVec, covMat, paramInit)

    # Re-define the initial parameters to very similar numbers
    paramInit = [0.0, 0.95, 0.05, 0.0, 0.0, 0.0]

    # Call the optimisation function again
    do_opt(meanVec, covMat, paramInit)

Note: I know that my covariance matrix is positive semi-definite rather than positive definite. This is not the source of the problem. I confirmed this by changing the diagonal element of the all-zero row to a small but distinctly non-zero value; the problem is still present in the example above, as well as in others that I can generate randomly.
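
For what it is worth, the check mentioned above is along these lines (a minimal sketch; the 1.0e-6 perturbation is just an arbitrary small value I picked, and meanVec, covMat and do_opt are as defined in the code above):

    # Give the all-zero third asset a small non-zero variance so covMat is no
    # longer singular in that direction, then repeat the same two runs;
    # the discrepancy between the two results is still there.
    covMatPD = copy(covMat)
    covMatPD[3, 3] = 1.0e-6
    do_opt(meanVec, covMatPD, [0.0, 0.9496114216578236, 0.050388578342176464, 0.0, 0.0, 0.0])
    do_opt(meanVec, covMatPD, [0.0, 0.95, 0.05, 0.0, 0.0, 0.0])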

1 answer

SLSQP is a constrained optimization algorithm. Every iterate is judged both on its objective value and on whether it satisfies the constraints, and the result returned at the end is the best objective value found at a point that satisfies the constraints.

Print the value of the constraint by changing eq_con! to:

    function eq_con!(param::Vector{Float64}, grad::Vector{Float64})
        if length(grad) > 0
            for j = 1:length(grad)
                grad[j] = 1.0
            end
        end
        @show sum(param) - 1.0
        return(sum(param) - 1.0)
    end

This shows that the last feasible evaluation point in the first run is:

    Objective function value = 0.0007628202546187453
    sum(param) - 1.0 = 0.0

In the second run, all of the evaluation points satisfy the constraint. This explains the behavior and shows that it is in fact reasonable.

ADDITION:

A significant factor behind the parameter instability is the exact nature of the equality constraint. Quoting from the NLopt reference (http://ab-initio.mit.edu/wiki/index.php/NLopt_Reference#Nonlinear_constraints):

For equality constraints, a small positive tolerance is strongly advised in order to allow NLopt to converge even if the equality constraint is slightly nonzero.

Indeed, changing the equality_constraint! call in do_opt to

    equality_constraint!(opt1, eq_con!, 0.00000001)

gives the 0.0022 solution for both initial parameter vectors.
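
For completeness, a sketch of do_opt with that single change applied (it assumes obj_func! and eq_con! are defined exactly as in the question; everything except the equality_constraint! line is unchanged):

    function do_opt(meanVec::Vector{Float64}, covMat::Matrix{Float64}, paramInit::Vector{Float64})
        opt1 = Opt(:LD_SLSQP, length(meanVec))
        lower_bounds!(opt1, [0.0, 0.0, 0.05, 0.0, 0.0, 0.0])
        upper_bounds!(opt1, [1.0, 1.0, 1.0, 1.0, 1.0, 1.0])
        # Allow sum(param) - 1.0 to deviate from zero by up to 1e-8
        equality_constraint!(opt1, eq_con!, 0.00000001)
        ftol_rel!(opt1, 0.000001)
        max_objective!(opt1, (param, grad) -> obj_func!(param, grad, meanVec, covMat))
        (fObjOpt, paramOpt, flag) = optimize(opt1, paramInit)
        println("Returned parameter vector = " * string(paramOpt))
        println("Returned objective function = " * string(fObjOpt))
    end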
