Discrete Optimization Models
Introduction
LPs are easy to solve. In other words, efficient algorithms exist for solving LPs. However, many real-world problems are not LPs because some of their decision variables must take integer values, which an LP cannot enforce. Mathematical programming models that involve discrete variables (integer, including binary) are called discrete optimization models. One of the most common discrete optimization models is the mixed integer linear programming (MILP) model, in which some of the variables take integer values while others remain continuous. First, we will review some common discrete optimization models, and then we will discuss solution methods for these models.
Fixed Charge Problem
In the fixed charge problem, a fixed cost is incurred whenever a resource is used at all, in addition to variable costs proportional to the continuous decision variables. The fixed charge problem can be formulated as follows:
\[\begin{align*} \text{minimize} \quad & \sum_{i=1}^{n} c_i x_i + \sum_{j=1}^{m} f_j y_j \\ \text{subject to} \quad & \sum_{i=1}^{n} a_{ij} x_i \leq b_j y_j, \quad j=1,2,\ldots,m \\ & x_i \geq 0, \quad i=1,2,\ldots,n \\ & y_j \in \{0,1\}, \quad j=1,2,\ldots,m \end{align*}\]
where the \(x_i\) are continuous variables, the \(y_j\) are binary variables indicating whether the \(j\)th resource is used (and its fixed cost incurred), \(c_i\) is the unit cost of \(x_i\), \(f_j\) is the fixed cost associated with \(y_j\), \(a_{ij}\) is the coefficient of \(x_i\) in the \(j\)th constraint, and \(b_j\) is the right-hand side of the \(j\)th constraint. The fixed charge problem is an MILP: it mixes continuous variables \(x_i\) with binary variables \(y_j\).
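As a concrete illustration, the template above can be sketched in PuLP (a Python modeling library, assumed installed via `pip install pulp`). All data below are made up, and a hypothetical demand constraint not in the template is added so that the zero solution is not optimal; this is a sketch, not a complete application.

```python
# A minimal PuLP sketch of the fixed charge template (assumes: pip install pulp).
# Data are made up; a demand constraint is added (not part of the template above)
# so that the zero solution is not optimal and the fixed charges actually matter.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

c = [3, 5]            # variable costs c_i
f = [40, 10]          # fixed costs f_j of "opening" resource j
a = [[1, 0],          # a[i][j]: usage of resource j per unit of x_i
     [0, 1]]
b = [100, 100]        # resource capacities b_j
demand = 30           # hypothetical: total output required

prob = LpProblem("fixed_charge", LpMinimize)
x = [LpVariable(f"x_{i}", lowBound=0) for i in range(2)]      # continuous
y = [LpVariable(f"y_{j}", cat="Binary") for j in range(2)]    # binary

prob += lpSum(c[i] * x[i] for i in range(2)) + lpSum(f[j] * y[j] for j in range(2))
for j in range(2):
    # resource j can only be used if its fixed charge is paid (y_j = 1)
    prob += lpSum(a[i][j] * x[i] for i in range(2)) <= b[j] * y[j]
prob += lpSum(x) >= demand        # hypothetical demand constraint

prob.solve()
print([v.value() for v in x], [v.value() for v in y], value(prob.objective))
```

With this data the model opens only one of the two resources, trading off the fixed charge against the per-unit cost.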
See examples in the slides.
Knapsack Problem
In the knapsack problem, a decision maker has a knapsack with limited capacity and a set of items with different weights and values, and must decide which items to put in the knapsack so as to maximize the total value of the selected items without exceeding the capacity. The knapsack problem can be formulated as follows:
\[\begin{align*} \text{maximize} \quad & \sum_{i=1}^{n} v_i x_i \\ \text{subject to} \quad & \sum_{i=1}^{n} w_i x_i \leq W \\ & x_i \in \{0,1\}, \quad i=1,2,\ldots,n \end{align*}\]
where \(x_i\) is the binary variable that indicates whether the \(i\)th item is put in the knapsack, \(v_i\) is the value of the \(i\)th item, \(w_i\) is its weight, and \(W\) is the capacity of the knapsack. The knapsack problem is a pure 0-1 (binary) integer program: all of its variables are binary.
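A minimal PuLP sketch of this model, with made-up item data (again assuming `pip install pulp`):

```python
# 0-1 knapsack as a PuLP model; item data are made up.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

v = [10, 13, 18, 31]   # item values v_i
w = [2, 3, 4, 7]       # item weights w_i
W = 10                 # knapsack capacity
n = len(v)

prob = LpProblem("knapsack", LpMaximize)
x = [LpVariable(f"x_{i}", cat="Binary") for i in range(n)]

prob += lpSum(v[i] * x[i] for i in range(n))            # total value
prob += lpSum(w[i] * x[i] for i in range(n)) <= W       # capacity

prob.solve()
print("selected:", [i for i in range(n) if x[i].value() > 0.5])
print("value:", value(prob.objective))
```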
See examples in the slides.
Mutually Exclusive Constraints
The knapsack problem can be extended with mutually exclusive constraints: from each group of conflicting items, at most one item may be selected. The knapsack problem with mutually exclusive constraints can be formulated as follows:
\[\begin{align*} \text{maximize} \quad & \sum_{i=1}^{n} v_i x_i \\ \text{subject to} \quad & \sum_{i=1}^{n} w_i x_i \leq W \\ & x_i \in \{0,1\}, \quad i=1,2,\ldots,n \\ & \sum_{i \in S_j} x_i \leq 1, \quad j=1,2,\ldots,m \end{align*}\]
where \(S_j\) is the \(j\)th group of mutually exclusive items, from which at most one item can be selected.
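In code, each group \(S_j\) becomes one extra constraint. A minimal sketch, extending the PuLP knapsack model above with hypothetical groups:

```python
# Hypothetical groups of mutually exclusive items (indices into x from the knapsack sketch).
exclusive_groups = [[0, 1], [2, 3]]
for S_j in exclusive_groups:
    prob += lpSum(x[i] for i in S_j) <= 1   # at most one item per group
```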
Dependence Constraints
The knapsack problem can also be extended with dependence constraints, under which some items may be selected only if other items are selected as well. The knapsack problem with dependence constraints can be formulated as follows:
\[\begin{align*} \text{maximize} \quad & \sum_{i=1}^{n} v_i x_i \\ \text{subject to} \quad & \sum_{i=1}^{n} w_i x_i \leq W \\ & x_i \in \{0,1\}, \quad i=1,2,\ldots,n \\ & x_i \leq x_j, \quad (i,j) \in D \end{align*}\]
where \(D\) is the set of dependence pairs: for each \((i, j) \in D\), item \(i\) can be put in the knapsack only if item \(j\) is also put in the knapsack.
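Each pair \((i, j) \in D\) likewise becomes a single inequality. A minimal sketch, again extending the PuLP knapsack model with hypothetical pairs:

```python
# Hypothetical dependence pairs: item i may be selected only if item j is selected.
dependencies = [(0, 2), (1, 3)]
for i, j in dependencies:
    prob += x[i] <= x[j]
```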
See examples in the slides.
Set Covering Problem
In the set covering problem, a decision maker must select subsets so that every element of a universal set is covered by at least one selected subset, at minimum total cost (equivalently, with the minimum number of subsets when all costs are equal). The set covering problem can be formulated as follows:
\[\begin{align*} \text{minimize} \quad & \sum_{j=1}^{m} c_j x_j \\ \text{subject to} \quad & \sum_{j \in S_i} x_j \geq 1, \quad i=1,2,\ldots,n \\ & x_j \in \{0,1\}, \quad j=1,2,\ldots,m \end{align*}\]
where \(x_j\) is the binary variable that indicates whether the \(j\)th subset is selected, \(c_j\) is the cost of selecting the \(j\)th subset, \(S_i\) is the set of subsets that cover the \(i\)th element in the universal set, and \(n\) is the number of elements in the universal set.
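A minimal PuLP sketch of the set covering model, with a made-up universe, candidate subsets, and costs:

```python
# Set covering in PuLP (assumes: pip install pulp); data are made up.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

universe = [1, 2, 3, 4, 5]
subsets = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]   # candidate subsets
cost = [5, 3, 3, 4]                              # c_j

prob = LpProblem("set_covering", LpMinimize)
x = [LpVariable(f"x_{j}", cat="Binary") for j in range(len(subsets))]

prob += lpSum(cost[j] * x[j] for j in range(len(subsets)))
for e in universe:
    # every element must be covered by at least one selected subset
    prob += lpSum(x[j] for j in range(len(subsets)) if e in subsets[j]) >= 1

prob.solve()
print("chosen subsets:", [j for j in range(len(subsets)) if x[j].value() > 0.5])
print("total cost:", value(prob.objective))
```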
See examples in the slides.
Set Partitioning Problem
In the set partitioning problem, a decision maker must select subsets so that every element of the universal set belongs to exactly one selected subset, i.e., the selected subsets form a partition of the universal set, at minimum total cost. The set partitioning problem can be formulated as follows:
\[\begin{align*} \text{minimize} \quad & \sum_{j=1}^{m} c_j x_j \\ \text{subject to} \quad & \sum_{j \in S_i} x_j = 1, \quad i=1,2,\ldots,n \\ & x_j \in \{0,1\}, \quad j=1,2,\ldots,m \end{align*}\]
where \(x_j\) is the binary variable that indicates whether the \(j\)th subset is selected, \(c_j\) is the cost of selecting the \(j\)th subset, \(S_i\) is the set of subsets that contain the \(i\)th element, and \(n\) is the number of elements in the universal set.
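In code, only the covering constraints change: each element must now be covered exactly once. A minimal sketch, reusing the data and variables from the set covering sketch but with the loop below in place of the `>= 1` loop (the made-up data above happens to admit an exact cover, \(\{1,2,3\}\) and \(\{4,5\}\)):

```python
# Set partitioning variant: every element must belong to exactly one selected subset.
for e in universe:
    prob += lpSum(x[j] for j in range(len(subsets)) if e in subsets[j]) == 1
```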
See examples in the slides.
Set Packing Problem
In the set packing problem, a decision maker must select subsets that are pairwise disjoint, i.e., each element of the universal set belongs to at most one selected subset, so as to maximize the total value of the selected subsets. The set packing problem can be formulated as follows:
\[\begin{align*} \text{maximize} \quad & \sum_{j=1}^{m} c_j x_j \\ \text{subject to} \quad & \sum_{j \in S_i} x_j \leq 1, \quad i=1,2,\ldots,n \\ & x_j \in \{0,1\}, \quad j=1,2,\ldots,m \end{align*}\]
where \(x_j\) is the binary variable that indicates whether the \(j\)th subset is selected, \(c_j\) is the value of selecting the \(j\)th subset, \(S_i\) is the set of subsets that contain the \(i\)th element, and \(n\) is the number of elements in the universal set.
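Relative to the covering sketch, two things flip in code: the objective sense (maximize, with \(c_j\) read as a value rather than a cost) and the direction of the element constraints. A minimal sketch, reusing the covering data and variables:

```python
# Set packing variant: maximize total value, each element covered at most once.
from pulp import LpMaximize

prob = LpProblem("set_packing", LpMaximize)
prob += lpSum(cost[j] * x[j] for j in range(len(subsets)))   # cost[j] read as a value
for e in universe:
    prob += lpSum(x[j] for j in range(len(subsets)) if e in subsets[j]) <= 1
```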
See examples in the slides.
Solution Methods
Total Enumeration
The total enumeration method solves a discrete optimization model by enumerating all possible solutions and selecting the best feasible one. It is a brute-force method: with \(n\) binary decision variables there are \(2^n\) candidate solutions, so the work grows exponentially with the number of variables. Total enumeration is therefore practical only for small problems with a limited number of decision variables.
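For a small binary model such as the knapsack, total enumeration is easy to write directly; the sketch below brute-forces all \(2^n\) item selections of a made-up instance.

```python
# Total enumeration of a small 0-1 knapsack: try all 2^n selections.
from itertools import product

v = [10, 13, 18, 31]   # item values
w = [2, 3, 4, 7]       # item weights
W = 10                 # capacity

best_value, best_selection = 0, None
for selection in product([0, 1], repeat=len(v)):          # all 2^n binary vectors
    weight = sum(w[i] * selection[i] for i in range(len(v)))
    total = sum(v[i] * selection[i] for i in range(len(v)))
    if weight <= W and total > best_value:
        best_value, best_selection = total, selection

print(best_selection, best_value)   # optimal for this tiny instance
```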
Relaxation
Relaxations are formed by weakening the constraints and/or the objective function of a discrete optimization model; the relaxed model is more tractable than the original. The linear programming (LP) relaxation is formed by treating every discrete variable as continuous while retaining all other constraints.
The optimal value of any relaxation of a maximize model is an upper bound on the optimal value of the original model. The optimal value of any relaxation of a minimize model is a lower bound on the optimal value of the original model.
The objective function value of any integer feasible solution to a maximizing discrete optimization model is a lower bound on the optimal value of the original model. The objective function value of any integer feasible solution to a minimizing discrete optimization model is an upper bound on the optimal value of the original model.
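As a small illustration of these bounds, the LP relaxation of the 0-1 knapsack can be solved greedily by value-to-weight ratio (only the last item taken may be fractional), giving an upper bound for the maximization, while any feasible integer selection gives a lower bound. The sketch below, on made-up data, computes both; the true optimal value lies between them.

```python
# LP relaxation bound vs. integer feasible bound for a small 0-1 knapsack (maximization).
v = [10, 13, 18, 31]
w = [2, 3, 4, 7]
W = 10

# Order items by value/weight ratio, best first.
order = sorted(range(len(v)), key=lambda i: v[i] / w[i], reverse=True)

upper = 0.0            # LP relaxation value: items may be taken fractionally
lower = 0              # value of a feasible integer solution (greedy rounding)
cap_lp, cap_ip = W, W
for i in order:
    take = min(1.0, cap_lp / w[i])       # fraction of item i in the LP relaxation
    upper += take * v[i]
    cap_lp -= take * w[i]
    if w[i] <= cap_ip:                   # integer solution: take item only if it fits
        lower += v[i]
        cap_ip -= w[i]

print(f"{lower} <= optimal value <= {upper}")
```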
See examples in the slides.
Branch and Bound
The LP-based branch and bound algorithm solves a sequence of LP sub-problems to find an optimal solution of the original MILP. It starts by solving the LP relaxation of the MILP. If that solution satisfies all integrality requirements, it is optimal for the MILP and the algorithm terminates. Otherwise, the algorithm branches on a variable with a fractional value \(\bar{x}_j\), creating two sub-problems: one with the added constraint \(x_j \leq \lfloor \bar{x}_j \rfloor\) and one with \(x_j \geq \lceil \bar{x}_j \rceil\). The LP relaxation of each sub-problem is solved in turn; a sub-problem is fathomed when its relaxation is infeasible, yields an integer solution, or has a bound that cannot improve on the best integer solution found so far (the incumbent). The algorithm continues until all sub-problems are fathomed, at which point the incumbent is optimal.
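The sketch below applies this idea to the 0-1 knapsack from the earlier sketches, using the greedy fractional relaxation as the bound: branch by fixing the next item in or out, keep the best integer solution found as the incumbent, and prune any sub-problem whose relaxation bound cannot beat it. It is a didactic sketch of the mechanism, not how production MILP solvers are implemented.

```python
# LP-relaxation-based branch and bound for a small 0-1 knapsack (didactic sketch).
v = [10, 13, 18, 31]
w = [2, 3, 4, 7]
W = 10
n = len(v)
# Process items in value/weight order so the relaxation bound is easy to compute.
order = sorted(range(n), key=lambda i: v[i] / w[i], reverse=True)

best = {"value": 0}     # incumbent: best integer-feasible value found so far

def bound(k, cap):
    """Greedy fractional (LP relaxation) bound over items order[k:] with capacity cap."""
    total = 0.0
    for i in order[k:]:
        take = min(1.0, cap / w[i])
        total += take * v[i]
        cap -= take * w[i]
    return total

def branch(k, cap, value):
    if value > best["value"]:
        best["value"] = value                    # update incumbent
    if k == n or cap <= 0:
        return
    if value + bound(k, cap) <= best["value"]:   # prune: bound cannot beat incumbent
        return
    i = order[k]
    if w[i] <= cap:
        branch(k + 1, cap - w[i], value + v[i])  # branch: x_i = 1
    branch(k + 1, cap, value)                    # branch: x_i = 0

branch(0, W, 0)
print("optimal value:", best["value"])
```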
See examples in the slides.