The Simplex (rudimentary) Algorithm
Introduction
The Simplex Algorithm is a method for solving linear programming problems. It was developed by George Dantzig in 1947. The algorithm is based on the concept of a simplex, a higher-dimensional analogue of a line segment. The simplex algorithm moves from one vertex of the feasible region to an adjacent vertex, and continues to do so until it reaches an optimal solution. The algorithm is guaranteed to terminate in a finite number of steps, and it is one of the most widely used methods for solving linear programming problems. There are various versions of the simplex algorithm, including the revised simplex algorithm, the dual simplex algorithm, and the criss-cross algorithm. I will be introducing the main ideas of the simplex algorithm. We can summarize the simplex algorithm steps as follows:
- Start with a basic feasible solution.
- Compute simplex directions.
- Adopt an improving direction.
- Determine the step size.
- Update the basis (basic feasible solution).
Let’s take the table solution for the Top Brass example below.
|       | x1 | x2 | x3   | x4   | x5   | x6   | b    |
|-------|----|----|------|------|------|------|------|
| max c | 12 | 9  | 0    | 0    | 0    | 0    |      |
| A     | 1  | 0  | 1    | 0    | 0    | 0    | 1000 |
|       | 0  | 1  | 0    | 1    | 0    | 0    | 1500 |
|       | 1  | 1  | 0    | 0    | 1    | 0    | 1750 |
|       | 4  | 2  | 0    | 0    | 0    | 1    | 4800 |
|       | N  | N  | B    | B    | B    | B    |      |
| x(0)  | 0  | 0  | 1000 | 1500 | 1750 | 4800 |      |
Step 1: Start with a basic feasible solution
The basis is:
\(B = \{x_3, x_4, x_5, x_6\}\). When you look at the \(A\) matrix, you can see that the columns corresponding to the basis are linearly independent and yield a unique solution that is exactly the \(b\) vector. Try entering this \(A\) matrix into the ‘solve’ python code under colab files. You will see that the solution is exactly the \(b\) vector.
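As a quick sketch of this check (using numpy, with the data taken from the Top Brass table above), we can confirm that the basis columns are linearly independent and that solving for the basic variables reproduces the \(b\) vector:

```python
import numpy as np

# Top Brass data in standard form (values from the table above)
A = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [4, 2, 0, 0, 0, 1]], dtype=float)
b = np.array([1000, 1500, 1750, 4800], dtype=float)

basis = [2, 3, 4, 5]        # 0-based indices of x3, x4, x5, x6
B = A[:, basis]             # basis matrix (here it is the identity)

# The basis columns are linearly independent iff B has full rank
assert np.linalg.matrix_rank(B) == 4

x_B = np.linalg.solve(B, b)  # unique values of the basic variables
print(x_B)                   # exactly the b vector: 1000, 1500, 1750, 4800
```

Because the basis matrix here happens to be the identity, the basic variables take the \(b\) values directly.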
Step 2: Compute simplex directions
The simplex algorithm is an improving search algorithm that moves from one solution to another, retaining feasibility and improving the objective function value until a local optimum is reached (remember: local optimum = global optimum for LPs).
In order to construct simplex directions, we increase a single non-basic variable, leaving the other non-basic variables at zero, and compute the changes in the basic variables needed to preserve the equality constraints. We can formulate this as follows:
\(\Delta x_j = 1\) for non-basic variable \(j\)
\(A \Delta x = 0\)
We find as many directions as the number of non-basic variables. For the Top Brass example table above, we have 2 non-basic variables (\(x_1\) and \(x_2\)). We can compute the simplex direction with a unit increase in \(x_1\) (\(\Delta x_1 = 1\)) as follows:
\[ \begin{align*} & +1(1) +0(0) +1 \Delta x_3 +0 \Delta x_4 +0 \Delta x_5 +0 \Delta x_6 = 0 \\ & +0(1) +1(0) +0 \Delta x_3 +1 \Delta x_4 +0 \Delta x_5 +0 \Delta x_6 = 0 \\ & +1(1) +1(0) +0 \Delta x_3 +0 \Delta x_4 +1 \Delta x_5 +0 \Delta x_6 = 0 \\ & +4(1) +2(0) +0 \Delta x_3 +0 \Delta x_4 +0 \Delta x_5 +1 \Delta x_6 = 0 \end{align*} \]
This system of equations is solved for \(\Delta x_3, \Delta x_4, \Delta x_5, \Delta x_6\), and we obtain the following direction:
\(\Delta x^{1} = (1, 0, -1, 0, -1, -4)\)
Special note: a unit increase in one of the non-basic variables takes us to a neighboring basic solution of the current one, because neighboring solutions share all of their active constraints except one.
See example 5.11 in the textbook for constructing simplex directions.
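The system above can also be solved numerically. A minimal sketch with numpy: since \(\Delta x_j = 1\) for the entering variable and the other non-basic components are zero, \(A \Delta x = 0\) reduces to solving \(B \, \Delta x_B = -A_j\) for the basic components, where \(A_j\) is the entering variable's column:

```python
import numpy as np

# Top Brass constraint matrix and current basis (x3, x4, x5, x6)
A = np.array([[1, 0, 1, 0, 0, 0],
              [0, 1, 0, 1, 0, 0],
              [1, 1, 0, 0, 1, 0],
              [4, 2, 0, 0, 0, 1]], dtype=float)
basis = [2, 3, 4, 5]
entering = 0                      # x1 (0-based index)

B = A[:, basis]
a_j = A[:, entering]
# A @ dx = 0 with dx[entering] = 1 forces B @ dx_B = -a_j
dx_B = np.linalg.solve(B, -a_j)

dx = np.zeros(A.shape[1])
dx[entering] = 1.0
dx[basis] = dx_B
print(dx)                         # direction (1, 0, -1, 0, -1, -4)
```

This reproduces \(\Delta x^{1} = (1, 0, -1, 0, -1, -4)\) from the hand calculation above.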
Step 3: Improving simplex directions
We have computed the simplex directions in the previous step. Now we need to adopt an improving direction. We can check whether a direction is improving by computing its reduced cost. The reduced cost of a non-basic variable is the amount by which the objective function value changes if the variable is increased by one unit. It is given by:
\(\overline{c_j} = \mathbf{c} \cdot \Delta\mathbf{x}\)
where \(\mathbf{c}\) is the cost vector and \(\Delta\mathbf{x}\) is the simplex direction. For a minimization objective, a negative reduced cost means increasing the variable improves the objective value, a zero reduced cost means no change, and a positive reduced cost means the objective value worsens. For a maximization objective such as Top Brass, the signs flip: a positive reduced cost is improving.
In order to derive the reduced cost equation we take the gradient of the objective function and multiply it with the simplex direction. The gradient of the objective function is the cost vector:
\(f(\mathbf{x}) \triangleq \mathbf{c} \cdot \mathbf{x} \triangleq \sum_{j=1}^{n} c_j x_j\)
\(\nabla f(\mathbf{x}) = \mathbf{c} \triangleq (c_1, c_2, \ldots, c_n) \text{ for all } \mathbf{x}\)
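As a sketch of the reduced cost check for the Top Brass direction computed in Step 2 (assuming the cost vector from the table above):

```python
import numpy as np

c = np.array([12, 9, 0, 0, 0, 0], dtype=float)     # max c row from the table
dx1 = np.array([1, 0, -1, 0, -1, -4], dtype=float)  # direction for x1 (Step 2)

rc = float(c @ dx1)  # reduced cost = gradient (cost vector) dotted with direction
print(rc)            # 12.0: positive, so x1 is improving for this max problem
```

Each unit of \(x_1\) adds 12 to the objective while the basic slack variables absorb the change at zero cost, so the direction is adopted.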
Step 4: Determine the step size
We have computed the simplex directions and checked whether each direction is improving. Now we need to determine the step size: the amount by which we increase the entering non-basic variable in order to move to the neighboring solution. The new solution is given by:
\(x^{'} = x^{o} + \lambda \Delta x\)
where \(x^{'}\) is the new solution, \(x^{o}\) is the current solution, \(\Delta x\) is the simplex direction, and \(\lambda\) is the step size. The step size is determined by the following equation:
\(\lambda = \min \left\{ \frac{x_{j}^{o}}{-\Delta x_{j}} : \Delta x_{j} < 0 \right\}\)
where \(x_{j}^{o}\) is the j-th component of the current solution and \(\Delta x_{j}\) is the j-th component of the simplex direction. Notice that we are interested only in the negative components of the simplex direction: the step size is the minimum ratio of a current solution component to the magnitude of the corresponding negative direction component. If all components of the simplex direction are non-negative, the step size is infinite and the problem is unbounded.
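A minimal ratio-test sketch for the Top Brass iteration, using the starting solution \(x^{(0)}\) from the table and the direction \(\Delta x^{1}\) from Step 2:

```python
import numpy as np

x0 = np.array([0, 0, 1000, 1500, 1750, 4800], dtype=float)  # x(0) from the table
dx = np.array([1, 0, -1, 0, -1, -4], dtype=float)           # direction for x1

neg = dx < 0
if not neg.any():
    lam = np.inf                 # no blocking variable: the LP is unbounded
else:
    ratios = x0[neg] / -dx[neg]  # candidate ratios: 1000/1, 1750/1, 4800/4
    lam = ratios.min()

print(lam)                       # 1000.0: x3 hits zero first and blocks the step
x_new = x0 + lam * dx            # move to the neighboring basic feasible solution
print(x_new)                     # (1000, 0, 0, 1500, 750, 800)
```

The minimum ratio of 1000 comes from \(x_3\), so \(x_3\) is the basic variable driven to zero and is the one that leaves the basis in Step 5.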
Step 5: Update the basis
We have computed the simplex directions, adopted an improving direction, and determined the step size. Now we need to update the basis: the entering non-basic variable replaces the leaving basic variable, i.e. the basic variable that is driven to zero at the step size. The basis is updated by the following equation:
\(B^{'} = \left(B \setminus \{x_{k}\}\right) \cup \{x_{j}\}\)
where \(B^{'}\) is the new basis, \(B\) is the current basis, \(x_{j}\) is the entering non-basic variable, and \(x_{k}\) is the leaving basic variable.
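The basis update is just a set operation. A sketch for the Top Brass iteration above, where \(x_1\) enters and \(x_3\) leaves (it was driven to zero by the step size in Step 4):

```python
# Current basis and the entering/leaving pair from the Top Brass iteration
basis = {'x3', 'x4', 'x5', 'x6'}
entering, leaving = 'x1', 'x3'

# B' = (B \ {x_k}) U {x_j}
new_basis = (basis - {leaving}) | {entering}
print(sorted(new_basis))  # x1, x4, x5, x6
```

The algorithm then repeats from Step 1 with the new basic feasible solution until no improving direction remains.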
References
1: R. Rardin, *Optimization in Operations Research*.