Welcome to LROPT’s documentation!¶
Learning for robust optimization (LROPT) is a package to model and solve Robust Optimization (RO) problems of the form

\[\begin{array}{ll}
\text{minimize} & f(x) \\
\text{subject to} & g(x, u) \leq 0 \quad \forall u \in \mathcal{U}(\theta),
\end{array}\]

where \(u\) denotes the uncertain parameter and \(\mathcal{U}(\theta)\) denotes the uncertainty set for the problem. Rather than forcing you to perform the often intensive math required to reduce this problem to a convex one, LROPT lets you express RO problems naturally. There are two main ways to use LROPT:
By explicitly defining an uncertainty set \(\mathcal{U}\) and its parameterization \(\theta\)
By passing a dataset \(U^N\) of past realizations of \(u\) and letting LROPT learn \(\theta\)
Simple Examples¶
Predetermined Uncertainty Set¶
Let's see an example of the first case. Suppose we want to solve the following optimization problem, using an ellipsoidal uncertainty set:

\[\begin{array}{ll}
\text{minimize} & c^T x \\
\text{subject to} & (Pu + a)^T x \leq d \quad \forall u \in \mathcal{U}_\text{ellip}(A, b),
\end{array}\]

where \(\mathcal{U}_\text{ellip}(A,b) = \{u \mid \| Au + b \|_2 \leq 1 \}\). We would use LROPT to write the following:
import cvxpy as cp
import numpy as np
import lropt
n = 4
np.random.seed(0)
A_unc = np.eye(n)
b_unc = np.random.rand(n)
P = np.random.rand(n,n)
a = np.random.rand(n)
c = np.random.rand(n)
d = 10
# Formulate robust constraints with lropt
unc_set = lropt.Ellipsoidal(A=A_unc, b=b_unc)
u = lropt.UncertainParameter(n,
                             uncertainty_set=unc_set)
x = cp.Variable(n)
constraints = [(P @ u + a).T @ x <= d]
objective = cp.Minimize(c @ x)
prob_robust = lropt.RobustProblem(objective, constraints)
prob_robust.solve()
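After solve(), the results can be read off the standard CVXPY attributes (this assumes RobustProblem mirrors the usual cvxpy.Problem interface, as the code above suggests):

# Inspect the robust solution via standard CVXPY attributes
# (assumption: RobustProblem behaves like cvxpy.Problem here).
print(prob_robust.value)   # worst-case optimal value
print(x.value)             # robust decision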
Learned Uncertainty Set¶
One of the most difficult modeling problems in RO is determining what \(\mathcal{U}(\theta)\) should be. Let's now use LROPT to solve the same problem, but this time passing in a dataset \(U^N\) and letting LROPT learn \(\theta\). Under the hood, training is done using PyTorch, so users must additionally pass in PyTorch representations of the objective and of the constraints with uncertainty (we are working to remove the need for this step). Because ellipsoidal uncertainty sets have a continuous boundary, they are well suited to learning. For more details on how this learning is done, see our associated paper.
import torch
import cvxpy as cp
import numpy as np
import lropt
n = 4
N = 100
np.random.seed(0)
norms = np.random.multivariate_normal(np.zeros(n), np.eye(n), N)
u_data = np.exp(norms)
P = np.random.rand(n,n)
a = np.random.rand(n)
c = np.random.rand(n)
# Formulate robust constraints with lropt
u = lropt.UncertainParameter(n,
                             uncertainty_set=lropt.Ellipsoidal(data=u_data))
x = cp.Variable(n)
d = 10
constraints = [(P @ u + a).T @ x <= d]
objective = cp.Minimize(c @ x)
c_tch = torch.tensor(c, dtype=float)
a_tch = torch.tensor(a, dtype=float)
P_tch = torch.tensor(P, dtype=float)
def f_tch(x, u):
    return c_tch @ x
def g_tch(x, u):
    return (P_tch @ u + a_tch).T @ x - d
prob_robust = lropt.RobustProblem(objective, constraints,
                                  objective_torch=f_tch, constraints_torch=[g_tch])
prob_robust.train(step=10)
prob_robust.solve()
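For intuition, the learned set trades off its size against empirical constraint satisfaction on the data. The toy sketch below (illustrative only, not LROPT's training algorithm; the fixed candidate x_cand and the 90% target are made-up choices) shows the underlying idea: calibrate a margin so the uncertain constraint holds on a target fraction of the sampled \(u\)'s, where a larger margin corresponds to a larger uncertainty set.

# Toy illustration only -- NOT LROPT's training algorithm.
# For a fixed (hypothetical) candidate solution x_cand, pick the
# smallest margin t with (P u + a)^T x_cand <= t on 90% of samples.
x_cand = np.ones(n) / n
vals = (u_data @ P.T + a) @ x_cand    # constraint value per sample
t = np.quantile(vals, 0.90)           # empirical 90th percentile
print(np.mean(vals <= t))             # ~0.9 of samples are covered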
Families of problems¶
LROPT can also build uncertainty sets which generalize to a family of optimization problems, parameterized by some \(y\), where optimal solutions are now functions of both \(\theta\) and \(y\):

\[\begin{array}{ll}
\text{minimize} & f(x, y) \\
\text{subject to} & g(x, u, y) \leq 0 \quad \forall u \in \mathcal{U}(\theta).
\end{array}\]

In these instances, the user passes a dataset \(Y^J\) of \(y\)'s, and LROPT learns a \(\theta\) that generalizes well across the entire family of optimization problems. As an example, consider the problem

\[\begin{array}{ll}
\text{maximize} & a^T x \\
\text{subject to} & x^T (u + y) \leq c \quad \forall u \in \mathcal{U}(\theta) \\
& \|x\|_2 \leq 2c.
\end{array}\]

This would be written as
import torch
import cvxpy as cp
import numpy as np
import lropt
n = 4
N = 100
np.random.seed(0)
norms = np.random.multivariate_normal(np.zeros(n), np.eye(n), N)
u_data = np.exp(norms)
num_instances = 10
y_data = np.random.multivariate_normal(np.zeros(n), np.eye(n), num_instances)
y = lropt.Parameter(n, data=y_data)
u = lropt.UncertainParameter(n, uncertainty_set=lropt.Ellipsoidal(data=u_data))
a = np.random.randint(3, 5, n)
c = 5
x = cp.Variable(n)
objective = cp.Maximize(a @ x)
a_tch = torch.tensor(a, dtype=float)
c_tch = torch.tensor(c, dtype=float)
constraints = [x @ (u + y) <= c, cp.norm(x) <= 2*c]
def f_tch(x, y, u):
    return a_tch @ x
def g_tch(x, y, u):
    return x @ u + x @ y - c_tch
prob = lropt.RobustProblem(objective, constraints,
                           objective_torch=f_tch, constraints_torch=[g_tch])
prob.train(step=10)
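After training, the same problem object can be solved for any particular member of the family. Assuming lropt's Parameter follows the usual CVXPY convention of setting .value (an assumption here, as is reusing y_data[0] as the instance), this would look like:

# Assumption: lropt.Parameter accepts a value like cvxpy.Parameter.
y.value = y_data[0]   # fix one instance of the family
prob.solve()
print(prob.value)     # objective for this instance under the learned set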
