Which linear programming package should I use for a very large number of constraints and "warm starts"?

I have a "continuous" linear programming problem that involves maximizing a linear function over a curved convex region. In a typical LP problem the feasible region is a polyhedron, but in this case it is piecewise curved: it has faces, edges, and vertices, but the edges are not straight and the faces are not flat. Instead of being given a finite number of linear inequalities, I have infinitely many. I am currently handling this by approximating the surface with a polytope, i.e. discretizing the infinite family of constraints into a very large finite number of constraints.

I am also in a situation where I would like to know how the answer changes under small perturbations of the main problem. I would therefore like to be able to seed the solver with the nearby solution as an initial condition. I believe this capability is called a warm start.

Can someone help me sort out the different LP packages out there? I am less interested in usability than in speed (for a very large number of constraints), high-precision arithmetic, and warm starts.

Thanks!

EDIT: Judging by the conversation with those answering the question, I should be clearer about the problem I am trying to solve. A simplified version:

I have N fixed functions f_i(y) of one real variable y. I want to find x_i (i = 1, ..., N) that minimize \sum_{i=1}^N x_i f_i(0), subject to the constraints:

  • \sum_{i=1}^N x_i f_i(1) = 1, and
  • \sum_{i=1}^N x_i f_i(y) >= 0 for all y >= 2.

More concisely, if we define the function F(y) = \sum_{i=1}^N x_i f_i(y), then I want to minimize F(0) subject to F(1) = 1 and F(y) being nonnegative on the entire interval [2, infinity). Note that this last nonnegativity condition is really an infinite number of linear constraints on the x_i, one for each y. You can think of y as a parameter; it is not an optimization variable. A concrete y_0 restricts me to the half-space F(y_0) >= 0 in the space of the x_i. As y_0 ranges over [2, infinity), these half-spaces vary continuously, carving out a curved convex region. The geometry of this region depends implicitly (and in a complicated way) on the functions f_i.
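For concreteness, here is how the discretized version of this problem sets up as an ordinary LP, e.g. with scipy.optimize.linprog. The basis functions f_i(y) = exp(-i*y), the truncation point y = 10, the grid size, and the box bounds on the x_i are all placeholders of my own invention; the real f_i and a much finer grid would go in their place:

```python
import numpy as np
from scipy.optimize import linprog

N = 4

def f(i, y):
    # Placeholder basis functions; substitute the real f_i here.
    return np.exp(-i * y)

# Discretize the infinite constraint family F(y) >= 0, y in [2, inf),
# by truncating at y = 10 and sampling on a finite grid.
grid = np.linspace(2.0, 10.0, 200)

c = np.array([f(i, 0.0) for i in range(1, N + 1)])       # minimize F(0)
A_eq = np.array([[f(i, 1.0) for i in range(1, N + 1)]])  # F(1) = 1
b_eq = np.array([1.0])
# F(y_j) >= 0  is equivalent to  -sum_i x_i f_i(y_j) <= 0
A_ub = np.array([[-f(i, y) for i in range(1, N + 1)] for y in grid])
b_ub = np.zeros(len(grid))

# Box bounds keep this toy instance bounded; the real problem may not need them.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(-100.0, 100.0)] * N)
print(res.status, res.fun)
```

The number of rows in A_ub is exactly the grid size, which is why a fine discretization of [2, infinity) produces the "very large finite number of constraints" mentioned above.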

+2
linear programming
3 answers
  • Regarding LP solver recommendations, two of the best are Gurobi and CPLEX (google them). They are free for academic users and can solve large-scale problems. I believe they have all the features you need. You can get sensitivity (perturbation) information from the shadow prices (i.e., the Lagrange multipliers).

  • But I'm more interested in your original problem. As I understand it, it looks like this:

Let S = {1, 2, ..., N}, where N is the total number of functions, y is a scalar, and f_i : R → R.

minimize     sum {i in S} x_{i} * f_{i}(0)
subject to   (1) sum {i in S} x_{i} * f_{i}(1) = 1
             (2) sum {i in S} x_{i} * f_{i}(y) >= 0   for all y in [2, inf)
  • It seems to me that you could try solving this as a convex NLP rather than an LP. Large-scale interior-point NLP solvers, such as IPOPT, should be able to handle these problems easily. I highly recommend trying IPOPT: http://www.coin-or.org/Ipopt

  • From a numerical point of view: for convex problems, no warm start is needed when using interior-point solvers, and you don't have to worry about combinatorial cycling of active sets. What you described as a "warm start" is really perturbing the solution, which is closer to sensitivity analysis. In optimization parlance, a warm start usually means giving the solver an initial guess; the solver takes that guess and converges to the same solution anyway, which is not what you want here. The only exception is if the active set changes with a different initial guess, but for a convex problem with a unique optimum that cannot happen.

If you need more information, I would be happy to provide it.

EDIT:

Sorry for the non-standard notation; I wish I could type LaTeX here, as on MathOverflow.net. (By the way, you might try posting the problem there; I think the mathematicians there would be interested.)

And now I see the point about "y >= 2": it is not just a single optimization constraint, but an interval defining the feasible space (I have edited my description above). My mistake. I wonder if the problem can somehow be transformed/projected from infinite to finite? I can't think of anything right now, but I'm just wondering whether it's possible.

So your approach is to discretize the problem for y in [2, inf). I assume you are choosing a very large number to stand in for infinity and a fine discretization grid. Ohhh, complicated. I suppose discretization is probably your best option; maybe the people who do this kind of analysis have ideas.
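One standard way to get from infinitely many constraints to finitely many, which may or may not fit this problem, is an exchange (cutting-plane) scheme: solve the LP on a small working set of y values, search for the y where F(y) is most violated, add that y as a new constraint, and repeat. A rough sketch, reusing made-up f_i and a truncated search interval as stand-ins for the real problem data:

```python
import numpy as np
from scipy.optimize import linprog

N = 4

def f(i, y):
    return np.exp(-i * y)  # placeholder for the real f_i

def solve_on(ys):
    """Solve the LP enforcing F(y) >= 0 only at the points in ys."""
    c = np.array([f(i, 0.0) for i in range(1, N + 1)])
    A_eq = np.array([[f(i, 1.0) for i in range(1, N + 1)]])
    A_ub = np.array([[-f(i, y) for i in range(1, N + 1)] for y in ys])
    return linprog(c, A_ub=A_ub, b_ub=np.zeros(len(ys)),
                   A_eq=A_eq, b_eq=[1.0], bounds=[(-100.0, 100.0)] * N)

search = np.linspace(2.0, 10.0, 2000)  # fine grid used only to FIND violations
working = [2.0, 10.0]                  # small working set of enforced constraints
for _ in range(50):
    res = solve_on(working)
    F = np.array([sum(res.x[i - 1] * f(i, y) for i in range(1, N + 1))
                  for y in search])
    j = int(F.argmin())
    if F[j] >= -1e-6:                  # no violated constraint left: done
        break
    working.append(float(search[j]))   # exchange step: add the worst y

print(len(working), res.fun)
```

The appeal is that the final LP only carries a handful of constraints (the near-active ones) instead of the full fine grid, which matters when the grid has to be very dense.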

I have seen something similar in problems involving Lyapunov functions, where a property had to be guaranteed at every point inside a convex hull. But that space was finite.

+2

I ran into a similar problem. I searched the Internet and found that this class of problem is called "semi-infinite" programming. MATLAB has a tool for solving such problems (the function "fseminf"), but I have not examined it in detail. People have certainly encountered such questions before.

+1

You should not use an LP solver and do the discretization yourself. You can do much better with a decent general convex solver. Check out, for example, cvxopt. It can handle many different kinds of functions in your constraints, or let you write your own. This will work much better than trying to linearize the problem yourself.

As for warm starts, they matter more for LPs than for general convex programs. A warm start can be somewhat useful if you are writing the whole algorithm yourself, but you usually still need a few Newton steps, so the benefit is not that significant. Most of the benefit of warm starting shows up in things like active-set methods, which are mostly used only for LPs.
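As a sanity check on the sensitivity point: for a non-degenerate convex problem, a small perturbation of the data moves the optimum only slightly, so re-solving from scratch (no warm start at all) is often cheap enough. A toy illustration with scipy.optimize.linprog; the random problem data here are invented purely for the demonstration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 5, 40
A = rng.standard_normal((m, n))  # random inequality constraints A x <= 1
b = np.ones(m)
c = rng.standard_normal(n)

base = linprog(c, A_ub=A, b_ub=b, bounds=[(-10, 10)] * n)

# Perturb the objective slightly and re-solve cold: no warm start involved,
# yet the new optimum sits essentially on top of the old one.
pert = linprog(c + 1e-6 * rng.standard_normal(n),
               A_ub=A, b_ub=b, bounds=[(-10, 10)] * n)

print(np.linalg.norm(pert.x - base.x))
```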

0
