
Chapter 46 Linear Optimization

Although we saw a number of techniques to optimize functions in Chapter 44, when the objective function is linear and the constraints are linear as well, the problem falls into the field of Linear Optimization, and we will see that the solution techniques are different.
To start with a simple problem, consider a baker with a shop that only makes cookies and muffins.
 1 
Just go with me on this one; yes, no bakery would do this.
A batch of cookies takes 5 cups of flour and 6 cups of sugar (among other ingredients) and a batch of muffins takes 4 cups of flour and 3 cups of sugar. One day the bakery has 90 cups of flour and 81 cups of sugar. If the profit on a batch of cookies is $11 and on the muffins is $8, how many batches of each baked good should the baker make?
Mathematically, if we let \(x\) be the number of batches of cookies and \(y\) be the number of batches of muffins, then the objective function is the profit, which is \(z=11x+8y\text{.}\) There are two constraints, one on the total amount of flour and one on the sugar. These can be written
\begin{align*} 5x + 4y & \leq 90 \\ 6x + 3y & \leq 81 \end{align*}
and lastly, there are two other common constraints: the number of batches produced cannot be negative, so \(x \geq 0, y \geq 0\text{.}\) These four constraints and the objective function are all linear, and thus this is a Linear Optimization problem.

Section 46.1 Simplex Method

The standard technique for solving linear optimization problems is called the simplex method, and we will just touch on this technique here.
Let’s return to the example above. If we plot the four constraints above on a set of axes, we get the following:
Figure 46.1. [Plot of the four constraint lines and the feasible region in the \(xy\)-plane.]
The constraints lead to a feasible set, which is the part of the plane that satisfies all of the constraints. In the figure above, this is the bounded polygon northeast of the origin.
This example is in 2D and the feasible set is then a subset of the \(\mathbb{R}^2\) plane. You also may notice that this set is convex (every line segment with endpoints in the set is fully contained in the set). This is true not only of feasible sets in the plane; a feasible set in any dimension is convex. See ?? (Hurlbert).
Since we have noticed (but not proven) that feasible sets are convex, it follows that optimal points (those that minimize or maximize the objective function) are on the boundary of the set. The simplex method starts at the origin and walks along vertices of the feasible set until the optimal point is found.
We won’t go into details of the simplex method in this book, but leave you to other sources (like Hurlbert) for the details. However, for a simple case like the one above, if we write down all of the vertices of the feasible set and check the objective function at each of these, the result can be found quite easily.
If we solve for the intersection points of the appropriate lines, then the vertices of the feasible set above are \((0,0), (13.5,0), (6,15), (0,22.5)\text{.}\) If we then plug these points into the objective function \(z=11x+8y\text{,}\) we get:
\(x\)    \(y\)    \(z\)
0        0        0
13.5     0        148.5
6        15       186
0        22.5     180
From the table, the profit is maximized at \((6,15)\text{,}\) in which 6 batches of cookies and 15 batches of muffins are made. The total profit is $186.
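For a problem this small, the vertex check in the table can be reproduced in a few lines of Julia (a brute-force sketch, not the simplex method itself):

```julia
# Evaluate the objective z = 11x + 8y at each vertex of the feasible set
# and pick the best one.  This only works because the feasible set here
# has so few vertices; the simplex method is the systematic approach.
vertices = [(0.0, 0.0), (13.5, 0.0), (6.0, 15.0), (0.0, 22.5)]
z(x, y) = 11x + 8y
profits = [z(x, y) for (x, y) in vertices]
best = argmax(profits)
vertices[best], profits[best]
```

which returns `((6.0, 15.0), 186.0)`, matching the table.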

Section 46.2 Using JuMP to solve Linear Optimization Problems

Although we could use built-in tools in Julia to solve linear optimization problems, we will turn to the JuMP package. JuMP is a modeling language designed to set up optimization problems, hand them off to any of a large number of solvers, and then get the results back. The way that problems are written out is natural, which makes it quite easy to learn.
As noted, JuMP hands a problem off to a solver to do the hard work of solving the problem. In Chapter 44, we used the Ipopt package to minimize a function of two variables. For linear optimization, we will use the HiGHS package. Let’s jump in.
 2 
Pun intended!
First, use the package manager to download and install the JuMP and HiGHS packages (yes capitalization is important here) and then start with
using JuMP, HiGHS
We now build the model of the problem above with
m = Model(HiGHS.Optimizer)
@variable(m, x ≥ 0)
@variable(m, y ≥ 0)
@objective(m, Max, 11x+8y)
@constraint(m, 5x+4y ≤ 90)
@constraint(m, 6x+3y ≤ 81)
print(m)
and the result is
Max 11 x + 8 y
Subject to
 5 x + 4 y ≤ 90
 6 x + 3 y ≤ 81
 x ≥ 0
 y ≥ 0
There isn’t much to explain here, as each line is self-explanatory. Just notice that you need to create a Model (with a solver from another package), then create the variables, the objective function, and the constraints. The last line just prints out the model (not the solution).
To get the solution, we enter optimize!(m) (remember that a ! at the end of a function name indicates that the function modifies its input).
Running HiGHS 1.9.0 (git hash: 66f735e60): Copyright (c) 2024 HiGHS under MIT licence terms
Coefficient ranges:
  Matrix [3e+00, 6e+00]
  Cost   [8e+00, 1e+01]
  Bound  [0e+00, 0e+00]
  RHS    [8e+01, 9e+01]
Presolving model
2 rows, 2 cols, 4 nonzeros  0s
2 rows, 2 cols, 4 nonzeros  0s
Presolve : Reductions: rows 2(-0); columns 2(-0); elements 4(-0) - Not reduced
Problem not reduced by presolve: solving the LP
Using EKK dual simplex solver - serial
  Iteration        Objective     Infeasibilities num(sum)
          0    -4.7499947580e+00 Ph1: 2(4.5); Du: 2(4.74999) 0s
          2     1.8600000000e+02 Pr: 0(0) 0s
Model status        : Optimal
Simplex   iterations: 2
Objective value     :  1.8600000000e+02
Relative P-D gap    :  0.0000000000e+00
HiGHS run time      :          0.00
I won’t go through the details of this output; you would need to dig into the HiGHS package documentation for all of the details. One thing to note on line 16 is the model status of Optimal (that is, there is a solution). The objective value is on line 18: 1.8600000000e+02, or the 186 that we found above.
Typically when I run the optimize! function, I don’t look at this output. We will see how to pull out the important information instead. Adding the line set_silent(m) before calling optimize!(m) will suppress this output. However, it is important to check that the solution is in fact optimal (we will see an example where this isn’t the case below). If we call is_solved_and_feasible(m), then we get true, indicating that we have a solution.
Typically, we want to know the values of the variables and the objective function at the optimal point. We can get the values of the two variables x and y by wrapping the function value around each. Entering value(x), value(y) returns (5.999999999999997, 15.000000000000005), which is close to the (6,15) we found above, the difference being numerical error. The value of the objective function can be found with objective_value(m), which returns 186.0.

Section 46.3 Integer Problems

Perhaps it seems odd that we got values like 5.999999999999997 for the number of batches, when a batch count should be a whole number. If we need the variables to take only integer values, we can declare them as integer variables, and the problem becomes an integer linear optimization problem.
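As a sketch of how integer variables look in JuMP (the model name m3 and this integer variant of the baker problem are our own, not from the text above), adding Int to each @variable call restricts that variable to integer values:

```julia
using JuMP, HiGHS

# The baker problem again, but with batch counts restricted to integers.
m3 = Model(HiGHS.Optimizer)
set_silent(m3)                    # suppress the solver log
@variable(m3, x ≥ 0, Int)         # batches of cookies, integer-valued
@variable(m3, y ≥ 0, Int)         # batches of muffins, integer-valued
@objective(m3, Max, 11x + 8y)
@constraint(m3, 5x + 4y ≤ 90)     # flour
@constraint(m3, 6x + 3y ≤ 81)     # sugar
optimize!(m3)
value(x), value(y), objective_value(m3)
```

Here the optimal point of the original problem, \((6,15)\text{,}\) happens to already be integer, so adding Int does not change the answer; in general, though, integer restrictions can move the optimum away from the non-integer solution.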

Section 46.4 Binary Variables

There are many problems in which variables take on binary values (0 or 1) depending on whether something is present or not. Consider the following problem.

Problem 46.2. The Mailbox Problem.

A city is shaped like the following grid:
Figure 46.3. [A rectangular street grid; from the constraints below, the intersections are indexed by 3 streets and 5 avenues.]
The city would like to place mailboxes at the intersections of the streets such that no-one needs to travel more than one block to reach a mailbox. To improve the budget of the city, the mayor would like to place the smallest number of mailboxes. Where should they be placed?
Solution 1. Setting up the Problem as a LOP
First, let the variable \(x_{i,j}\) be 1 if there is a mailbox at the intersection of the \(i\)th street and the \(j\)th avenue, and 0 if not. The constraints for this problem are that for each intersection, there is at least one mailbox within one block. For example, the total number of mailboxes at the upper-left intersection and the intersections to its east and south must be at least 1, or
\begin{equation*} x_{1,1} + x_{1,2} + x_{2,1} \geq 1 \end{equation*}
and similarly for each intersection. There will be the following fifteen constraints:
\begin{align*} x_{1,1} + x_{1,2} + x_{2,1} & \geq 1 & x_{1,1} + x_{1,2} + x_{1,3} + x_{2,2} & \geq 1 \\ x_{1,2} + x_{1,3} + x_{1,4} + x_{2,3} & \geq 1 & x_{1,3} + x_{1,4} + x_{1,5} + x_{2,4} & \geq 1 \\ x_{1,4} + x_{1,5} + x_{2,5} & \geq 1 & x_{2,1} + x_{1,1} + x_{3,1} + x_{2,2} & \geq 1\\ x_{2,1} + x_{2,2} + x_{2,3} + x_{1,2} + x_{3,2} & \geq 1 & x_{2,2} + x_{2,3} + x_{2,4} + x_{1,3} + x_{3,3} & \geq 1 \\ x_{2,3} + x_{2,4} + x_{2,5} + x_{1,4} + x_{3,4} & \geq 1 & x_{2,4} + x_{2,5} + x_{1,5} + x_{3,5} & \geq 1 \\ x_{3,1} + x_{2,1} + x_{3,2} & \geq 1 & x_{3,1} + x_{3,2} + x_{3,3} + x_{2,2} & \geq 1\\ x_{3,2} + x_{3,3} + x_{3,4} + x_{2,3} & \geq 1 & x_{3,3} + x_{3,4} + x_{3,5} + x_{2,4} & \geq 1 \\ x_{3,4} + x_{3,5} + x_{2,5} & \geq 1 \end{align*}
And the objective function is the sum of all of the \(x_{i,j}\text{,}\) and it is to be minimized.
Solution 2. Solving the LOP using JuMP
We now look at the solution to this problem using JuMP. First, we can define the model in the same way as above with m2 = Model(HiGHS.Optimizer) and then define the variables as a matrix with @variable(m2, x[1:3, 1:5], Bin)
The constraints are entered in the following way. We could enter them one at a time; however, below we will extend this problem, so it is nicer to group them as corners, edges, and interior intersections. First, the four corners can be added as
@constraint(m2, x[1,1]+x[1,2] + x[2,1] ≥ 1)
@constraint(m2, x[1,4]+x[1,5] + x[2,5] ≥ 1)
@constraint(m2, x[3,1]+x[2,1] + x[3,2] ≥ 1)
@constraint(m2, x[3,5]+x[3,4] + x[2,5] ≥ 1)
Next, the top and bottom edges are added with
for j=2:4
  @constraint(m2, x[1,j-1]+x[1,j]+x[1,j+1]+x[2,j] ≥ 1)
  @constraint(m2, x[3,j-1]+x[3,j]+x[3,j+1]+x[2,j] ≥ 1)
end
Next, the left and right edges are added with
for i=2:2
  @constraint(m2, x[i,1]+x[i-1,1] + x[i+1,1] + x[i,2] ≥ 1)
  @constraint(m2, x[i,5] + x[i-1,5] + x[i+1,5] + x[i,4] ≥ 1)
end
where the for loop is unnecessary, but will be used below. Lastly, the interior points are
for i=2:2
  for j=2:4
    @constraint(m2, x[i,j] + x[i,j-1] + x[i,j+1] + x[i-1,j] + x[i+1,j] ≥ 1)
  end
end
and again the i for loop is unnecessary, but will be adapted below. Lastly, the objective function can be written as
@objective(m2, Min, sum(x[i,j] for i=1:3, j=1:5))
Finally, we set the output to silent and then optimize with
set_silent(m2)
optimize!(m2)
and since the output has been set to silent, there is no output, but checking for a solution with is_solved_and_feasible(m2) results in true. So we have a solution and can see it with value.(x), where the . indicates broadcasting across the matrix. The output is
3×5 Matrix{Float64}:
 -0.0  -0.0   1.0  -0.0  0.0
  1.0  -0.0  -0.0  -0.0  1.0
 -0.0  -0.0   1.0  -0.0  0.0
and note that even though the values of x have the constraint that they are binary (and therefore integers), the result is a matrix of floats due to the method of solution. We can make this look more like what we expect with round.(Int, value.(x)) and the result is
3×5 Matrix{Int64}:
 0  0  1  0  0
 1  0  0  0  1
 0  0  1  0  0
and the 1s indicate where the mailboxes ought to be placed; only 4 are needed.
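We can double-check this placement in plain Julia, without the solver: every intersection must have a mailbox at itself or at an intersection one block away.

```julia
# The mailbox placement found by the solver.
X = [0 0 1 0 0;
     1 0 0 0 1;
     0 0 1 0 0]

# An intersection (i, j) is covered if it, or a neighbor one block
# north, south, east, or west, has a mailbox (staying inside the grid).
covered(i, j) = any(X[i+di, j+dj] == 1
    for (di, dj) in ((0, 0), (-1, 0), (1, 0), (0, -1), (0, 1))
    if 1 ≤ i+di ≤ 3 && 1 ≤ j+dj ≤ 5)

all(covered(i, j) for i in 1:3, j in 1:5), sum(X)
```

which returns `(true, 4)`: all fifteen intersections are covered using just 4 mailboxes.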

Section 46.5 Infeasible Problems

Let’s look at a simple Linear Optimization Problem:
\begin{align*} \text{Maximize}\quad z & = 3x+4y \\ \text{subject to}\quad x + y & \leq 10 \\ -2x + y & \geq 2 \\ x + 3y & \geq 6\\ x, y & \geq 0 \end{align*}