Nothing is wrong with optimx. Take a step back and look at the function you want to maximize: log(p1) + log(p2) + lambda*(p1 + p2 - 1). It is intuitively clear that the way to maximize it is to make every variable as large as possible, no? Notice that optimx rightly returned the upper bounds you specified.
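In fact, if you maximize that expression directly over a box (a minimal sketch; the starting values and bounds below are copied from the later snippets and assumed to match your call), optimx just pushes every parameter to its upper bound:

    library(optimx)

    # the function quoted above: log(p1) + log(p2) + lambda*(p1 + p2 - 1)
    log.like <- function(x) {
        lambda <- tail(x, 1)
        p <- head(x, -1)
        sum(log(p)) + lambda * (sum(p) - 1)
    }

    # maximize by minimizing the negative; the result is the corner p1 = p2 = .9, lambda = -1
    optimx(c(.48, .52, -1.5), function(x) -log.like(x),
           lower = c(rep(.1, 2), -3), upper = c(rep(.9, 2), -1),
           method = "L-BFGS-B")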
So what is wrong with your approach? When you use Lagrange multipliers, the critical points you are after are saddle points of the function above, not the local minima (or maxima) that optimx will find for you. You therefore need to reformulate the problem so that those saddle points become local minima. A standard way to do this is to minimize the squared norm of the gradient of the Lagrangian, which is easy to compute analytically for your problem. Here is a great example (with pictures):
http://en.wikipedia.org/wiki/Lagrange_multiplier#Example:_numerical_optimization .
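To spell out the algebra behind the objective used below: the gradient of log(p1) + log(p2) + lambda*(p1 + p2 - 1) has components 1/p1 + lambda, 1/p2 + lambda and p1 + p2 - 1, so its squared norm is (1/p1 + lambda)^2 + (1/p2 + lambda)^2 + (p1 + p2 - 1)^2. This quantity is zero exactly at the critical points of the Lagrangian and positive everywhere else, which is why minimizing it turns the saddle point into a minimum.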
For your problem:
    grad.norm <- function(x) {
        lambda <- tail(x, 1)
        p <- head(x, -1)
        # squared norm of the gradient of the Lagrangian:
        # (1/p1 + lambda)^2 + (1/p2 + lambda)^2 + (p1 + p2 - 1)^2
        sum((1/p + lambda)^2) + (sum(p) - 1)^2
    }

    optimx(c(.48, .52, -1.5), grad.norm,
           lower = c(rep(.1, 2), -3), upper = c(rep(.9, 2), -1),
           method = "L-BFGS-B")
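As a sanity check (my own verification, not something from your post): setting that gradient to zero gives 1/p1 = 1/p2 = -lambda and p1 + p2 = 1, i.e. p1 = p2 = 0.5 and lambda = -2, which lies inside the box above, and grad.norm is exactly zero there:

    grad.norm(c(.5, .5, -2))
    # [1] 0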
Follow-up: if you do not want to, or cannot, derive the gradient by hand, you can let R compute a numerical approximation instead, for example:
    library(numDeriv)

    log.like <- function(x) {
        lambda <- tail(x, 1)
        p <- head(x, -1)
        return(sum(log(p)) + lambda * (sum(p) - 1))
    }

    grad.norm <- function(x) {
        # squared norm of a numerical approximation of the gradient of log.like
        return(sum(grad(log.like, x)^2))
    }

    optimx(c(.48, .52, -1.5), grad.norm,
           lower = c(rep(.1, 2), -3), upper = c(rep(.9, 2), -1),
           method = "L-BFGS-B")
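If you want to convince yourself that the numerical gradient agrees with the analytic one (again just a quick check of my own, evaluated at the starting point used above):

    x <- c(.48, .52, -1.5)
    grad(log.like, x)                                      # numerical gradient from numDeriv
    c(1/head(x, -1) + tail(x, 1), sum(head(x, -1)) - 1)    # analytic: 1/p + lambda, p1 + p2 - 1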