I was wondering if there is a good way (preferably using JuMP) to get all the optimal solutions of a linear program (if there are several optimal solutions).
Example
Minimize the statistical distance (Kolmogorov distance) between two probability distributions P and Q:
min  sum_{i=1}^{4} |P[i] - Q[i]|   over the free variable Q

    P = [0.25, 0.25, 0.25, 0.25]   (fixed; sum_i P[i] = 1)
    Q[1] + Q[4] = 1
    sum_i Q[i] = 1                 (together these force Q[2] = Q[3] = 0)
Note that we can reformulate this optimization as a linear program by introducing auxiliary variables S[i]; the goal becomes

    min  sum_i S[i]
    s.t. S[i] >= P[i] - Q[i]
         S[i] >= Q[i] - P[i]
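For concreteness, here is one way this LP might be written in JuMP (a minimal sketch; HiGHS as the solver is my assumption, any LP solver works):

```julia
using JuMP, HiGHS   # HiGHS is an assumption; any LP solver works

P = [0.25, 0.25, 0.25, 0.25]

model = Model(HiGHS.Optimizer)
@variable(model, Q[1:4] >= 0)   # the free distribution
@variable(model, S[1:4] >= 0)   # epigraph variables, S[i] >= |P[i] - Q[i]|
@constraint(model, sum(Q) == 1)
@constraint(model, Q[1] + Q[4] == 1)
@constraint(model, [i = 1:4], S[i] >= P[i] - Q[i])
@constraint(model, [i = 1:4], S[i] >= Q[i] - P[i])
@objective(model, Min, sum(S))

optimize!(model)
value.(Q)                # one optimal vertex, e.g. [0.75, 0.0, 0.0, 0.25]
objective_value(model)   # 1.0, i.e. a statistical distance of 0.5
```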
There is no single solution to this problem; instead, the set of optimal solutions is the segment spanned by
Q1 = [0.75, 0, 0, 0.25]
Q2 = [0.25, 0, 0, 0.75]
Both achieve the minimum statistical distance of 0.5 (the minimum of sum_i |P[i] - Q[i]| is 1.0, and the statistical distance is half of that), and any convex combination of these two solutions is also optimal.
I was wondering if there is a good way to find all these optimal extreme points (the points that span the optimal set).
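One common workaround, sketched below on the assumption that `model`, `Q`, and `S` from the snippet above are still in scope: pin the objective at its optimal value, then minimize and maximize each Q[i] in turn. Each probe returns a vertex of the optimal face; note that this recovers the extreme points along the coordinate directions, which need not be every vertex of the face in higher dimensions.

```julia
# Reuses `model`, `Q`, `S` from the sketch above (an assumption).
vstar = objective_value(model)

# Pin the original objective at its optimum (small solver tolerance).
@constraint(model, sum(S) <= vstar + 1e-8)

# Probe the optimal face: minimize and maximize each coordinate of Q.
points = Vector{Vector{Float64}}()
for i in 1:4
    @objective(model, Min, Q[i])
    optimize!(model)
    push!(points, round.(value.(Q); digits = 6))
    @objective(model, Max, Q[i])
    optimize!(model)
    push!(points, round.(value.(Q); digits = 6))
end
unique!(points)   # here: [0.75, 0, 0, 0.25] and [0.25, 0, 0, 0.75]
```

For a complete enumeration of the vertices of the optimal face one would need a vertex-enumeration tool (e.g. the Polyhedra.jl ecosystem) applied to the polytope cut out by the constraints plus sum_i S[i] = vstar.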
Why am I interested in this? The points that maximize the Bhattacharyya coefficient (a concave function) lie somewhere in the middle of the optimal set of the statistical distance.
So far I have tried to find the optimal pairs P[i], Q[i] (referring to the example I gave) by nudging the solver to prefer minimizing the distance between particular P[i], Q[i] pairs, adding a weight of 1.001 to the corresponding terms in the sum. This seems to work to some extent, although I can hardly be sure.
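For reference, one plausible reading of that weighting trick in JuMP is sketched below (speculative; the geometric weights 1.001^(i-1) are my assumption about what "adding a 1.001 weight" means):

```julia
# Speculative reconstruction of the perturbation idea: weight the i-th
# deviation term by 1.001^(i - 1) so that ties on the optimal face are
# broken toward one particular vertex. This slightly perturbs the
# optimum itself, so the result is only near-optimal for the original
# unweighted objective.
w = [1.001^(i - 1) for i in 1:4]
@objective(model, Min, sum(w[i] * S[i] for i in 1:4))
optimize!(model)
value.(Q)   # one vertex of the (near-)optimal set
```

A cleaner variant of the same idea is lexicographic: first solve for the optimal distance, then optimize a secondary objective subject to the distance staying optimal, as in the probing sketch above.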