Creation date: 2017-07-27

After eight months in Australia I finally had time to finish this blog article. Welcome back! ;)

Have you solved a lot of Sudokus recently?

Still interested in that? Check out my series on building a Constraint Programming Solver from Scratch.

Well, let's get to real-world problems... There is a technique called linear programming. The easiest way to explain it is with an example.

Let's assume you're running a business where you can produce three different types of products A, B and C. We can refer to them as apples, bananas and carrots. We have to do some work such as cutting and peeling the vegetables. For that we have four different types of machines, each faster for one vegetable and slower for another.

Machine | Product A | Product B | Product C | Availability |
---|---|---|---|---|
I | 20 | 10 | 10 | 2400 |
J | 12 | 28 | 16 | 2400 |
K | 15 | 6 | 16 | 2400 |
L | 10 | 15 | 0 | 2400 |
Total | 57 | 59 | 42 | 9600 |

Item | Product A | Product B | Product C |
---|---|---|---|
Revenue p.u. | 90$ | 100$ | 70$ |
Material p.u. | 45$ | 40$ | 20$ |
Profit p.u. | 45$ | 60$ | 50$ |
Maximum sales | 100 | 40 | 60 |

These are the tables we have to consider. The goal is to maximize our profit, and we want to know which products are the most profitable. We can read the first table as: if we want to produce ten apples, we need 200 "units" of machine I (20 * 10), 120 "units" of machine J and so on. Each machine has a limit of 2400 "units". These are constraints of our program, as are the maximum sales in the second table: we can only sell 100 apples, 40 bananas and 60 carrots. The goal is to maximize the profit. We get 45$ per unit of apples and so on.

This kind of problem is a linear programming problem. Well, actually it's a mixed-integer program, but at the moment we don't care about that. Let's just assume that we can have something like 5.3 apples, so fractions of vegetables. In a future blog article we can think about how to change that to get the best solution in the real world. And yeah, sorry, I said at the beginning we want to solve real-world problems, but we have to take one step after the other. See it as a teaser!

Whatever... A simple linear program looks like the following:

Maximize: `y`

Constraints:

```
x <= 2
y <= 6
y - x <= 5
y + x <= 6
```

and the variables must be `>=0`, so `x >= 0, y >= 0`.

Let's try to get the solution by hand: We can set `y=6` because it's clear that it can't be higher. Then we have a problem with constraint #3, `y-x <= 5`, so we have to set `x=1`, but then the last constraint fails. Therefore we set `y+x=6` and make `y` as high as possible while still fulfilling constraint #3, which gives us the solution `x=0.5` and `y=5.5`. The objective would be `5.5`.

It's called a linear program because all the equations are linear. Constraints or objective functions of the form `x*y` are not allowed.

The first challenge in real world applications is always to convert it into a mathematical form. We need variables, an optimization function and constraints.

Back to our business:

In our example it's pretty simple. We have three variables, one for each of our vegetables. The objective function is to maximize the profit, `45*a+60*b+50*c`, where a = apples, b = bananas and c = carrots. Our constraints are defined by the maximum machine capacities and the maximum sales.

We have seven constraints and three variables, which makes it a bit harder to visualize. Therefore we go back to our previous example:

Maximize: `y`

Constraints:

```
x <= 2
y <= 6
y - x <= 5
y + x <= 6
x >= 0
y >= 0
```

We can visualize that:

This diagram shows the different constraints and the overlap area where every constraint is fulfilled. The next one visualizes only the area where all of the constraints are fulfilled.

Now it's easy to read off the maximum value for y, which is 5.5. In this representation we see that the solution is a vertex of our green constraint surface. In fact this is always the case, which is more or less the main idea of the simplex algorithm.

The principle of the simplex algorithm is to just have a look at the vertices of our surface.

What does this mean? It reduces our search space from infinitely many points inside the green area to five points. We can compute all five points and just keep the best solution, in this case y=5.5.
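This idea can be checked with a tiny brute-force sketch (a standalone snippet, not part of the blog's solver): intersect every pair of constraint boundaries, keep the intersection points that satisfy all constraints, and pick the best one.

```python
import itertools
import numpy as np

# All six constraints in the form A @ [x, y] <= b
# (the non-negativity constraints are written as -x <= 0 and -y <= 0)
A = np.array([[1, 0], [0, 1], [-1, 1], [1, 1], [-1, 0], [0, -1]], float)
b = np.array([2, 6, 5, 6, 0, 0], float)

vertices = set()
for i, j in itertools.combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-9:
        continue                      # parallel boundaries never intersect
    p = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ p <= b + 1e-9):     # keep only feasible corner candidates
        vertices.add((round(p[0], 6), round(p[1], 6)))

best = max(vertices, key=lambda p: p[1])  # objective: maximize y
print(sorted(vertices))  # the five corners of the green area
print(best)              # (0.5, 5.5)
```

This already solves the small LP, but checking every pair of boundaries scales badly, which is exactly why we want the simplex algorithm.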

For bigger problems there might be a lot of points on our surface so computing all of the vertices takes a long time as well. Can we do better?

Just assume for a moment that we found the point `p=(2,4)`, which is the upper red point on the right. Then it doesn't make any sense to consider the bottom point `z=(2,0)`, because its `y` value, which is our objective function, is lower and we want to maximize it.

What do we have at the moment?

- Consider only the vertices of our constraint polygon
- Move only to a vertex which has a better objective

**How to find the vertices?**

Before we go further, I hope you are familiar with linear systems of equations because they are crucial for the simplex algorithm. A system looks like this:

```
x + y = 5
2x + y = 9
```

Which can be written in a matrix form as:

```
1 1 | 5
2 1 | 9
```

The next step is to use Gaussian elimination to get

```
1 1 | 5
0 0.5 | 0.5
```

This means `0.5y = 0.5 => y = 1`. Therefore the first equation is `x+1 = 5 => x = 4`.
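If you want to double-check such a system without doing the elimination by hand, NumPy can solve it directly (a quick sanity check, not part of the solver):

```python
import numpy as np

# The system x + y = 5, 2x + y = 9 written as A @ [x, y] = b
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b = np.array([5.0, 9.0])

x, y = np.linalg.solve(A, b)  # NumPy does the elimination for us
print(x, y)  # 4.0 1.0
```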

Let's build our matrix for our constraints

```
1 0 | 2
0 1 | 6
-1 1 | 5
1 1 | 6
```

and we add a new row, the objective row, which is negated:

```
1 0 | 2
0 1 | 6
-1 1 | 5
1 1 | 6
--------
0 -1 | 0
```

The first row can be read as `x = 2`, but we want `x <= 2`. Therefore we add a slack variable `a` and write it as `x+a=2` with `a >= 0`. We do the same with every constraint we have, which gives us this matrix:

x | y | a | b | c | d | rhs |
---|---|---|---|---|---|---|
1 | 0 | 1 | 0 | 0 | 0 | 2 |
0 | 1 | 0 | 1 | 0 | 0 | 6 |
-1 | 1 | 0 | 0 | 1 | 0 | 5 |
1 | 1 | 0 | 0 | 0 | 1 | 6 |
0 | -1 | 0 | 0 | 0 | 0 | 0 |

The last row is the negated objective row.

As I mentioned before, we don't have to look at all points of the green surface; we only have to consider the vertices, which are called basic feasible solutions (BFS).

A basic feasible solution in our matrix form is a solution where the basic variables are equal to the right-hand side and the non-basic variables are 0.

In our case we have a basic feasible solution of `a = 2, b = 6, c = 5, d = 6` and `x = y = 0`. This definitely fulfills our constraints, and we don't need Gaussian elimination because we already have an identity matrix of size 4 in our matrix notation. This BFS represents the dot in the bottom-left corner, `p = (0,0)`.

**How to improve our objective?**

The last row shows us that we have an objective (obj) of `0`, and the negative value for `y` means that we can raise our objective by changing the value of `y`. So we want to assign a value to `y`, and therefore it should become a basic variable. In return we need to make one basic variable non-basic, because we can only have 4 basic variables as we have 4 constraints.

We have to pick a basic variable which should leave (called the leaving variable), and we already chose our entering variable `y`. The row corresponding to the leaving variable should have a positive value in the entering column. What the hell is the corresponding row? We have the basic variables `a,b,c,d`; the corresponding rows are rows `1,2,3,4` respectively. In general the corresponding row is the one where the factor of the basic variable is 1. For example the factor of `a` is `1` in row number one.

Our positive values in the entering column are the ones in rows 2, 3 and 4.

To choose our leaving variable out of these three options we use the one with the lowest ratio of the right-hand side divided by the positive entry in the entering column.

At the moment I just want to show you the basic concept, which is why I won't explain here why we choose it this way. Ask me about it in the comments if you want more information, or try it out by yourself first. :)
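The ratio test just described fits into a few lines of NumPy (an illustrative sketch on the small example, not the solver's code):

```python
import numpy as np

# Constraint rows of the small example (columns: x, y) and right-hand sides
lhs = np.array([[ 1.0, 0.0],
                [ 0.0, 1.0],
                [-1.0, 1.0],
                [ 1.0, 1.0]])
rhs = np.array([2.0, 6.0, 5.0, 6.0])

entering = 1                                      # the y column
positive = np.where(lhs[:, entering] > 0)[0]      # rows 1, 2, 3 qualify
ratios = rhs[positive] / lhs[positive, entering]  # 6, 5, 6
leaving_row = positive[np.argmin(ratios)]         # row 2, the slack c row
print(leaving_row)  # 2
```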

We now have our leaving variable, which is `c`, because its row (row number three) has the lowest ratio `5/1`. Our goal is to have a basic feasible solution again, so we want a `1` at the pivot position and `0` at the other entries of the entering column.

Therefore we use Gaussian elimination again. To create a zero in our objective row we add the pivot row (row 3) to it. This directly gives us our new objective, which is 5. To complete our elimination procedure we have to do the same with rows 2 and 4. The result gives us the new basis.

Our basic variables are `y, a, b, d`, so we swapped `c` with `y`. Row #3 shows that `y` is set to 5. `x` is still not part of our basis, so it is set to `0`. We are now at the point `p=(0,5)` in our graphic.

If we change the value of `x` we can increase the objective. We know that because the value of the `x` column in the objective row is negative. We do the same thing again: choose the leaving variable, which is `d` because `1/2` is the smallest ratio we can get.

In our final matrix the variables are set to `x=0.5, y = 5.5, a = 1.5, b = 0.5`.

Let's get back to our constraints:

```
x + a = 2
y + b = 6
-x + y + c = 5
x + y + d = 6
```

and check whether the equations are correct:

```
0.5 + 1.5 = 2
5.5 + 0.5 = 6
-0.5 + 5.5 + 0 = 5
0.5 + 5.5 + 0 = 6
```

Yeah! They are all true!
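As a cross-check, the whole procedure fits into one small standalone function (a sketch under this article's assumptions: maximization, `<=` constraints, non-negative variables, no degeneracy handling):

```python
import numpy as np

def simplex_max(A, b, c):
    """Maximize c @ x subject to A @ x <= b, x >= 0.
    Textbook full-tableau simplex; a sketch, not production code."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))   # tableau [A | I | b], obj row last
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                     # negated objective row
    basis = list(range(n, n + m))      # the slack variables start as basis
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= 0:
            break                      # no negative entry left: optimal
        rows = np.where(T[:m, col] > 1e-9)[0]
        row = rows[np.argmin(T[rows, -1] / T[rows, col])]  # ratio test
        T[row] /= T[row, col]          # make the pivot element 1
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n)
    for r, v in enumerate(basis):
        if v < n:
            x[v] = T[r, -1]
    return x, T[-1, -1]

A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 1.0], [1.0, 1.0]])
b = np.array([2.0, 6.0, 5.0, 6.0])
c = np.array([0.0, 1.0])               # objective: maximize y
x, obj = simplex_max(A, b, c)
print(x, obj)  # [0.5 5.5] 5.5
```

It performs exactly the two pivots we did by hand: `y` enters and `c` leaves, then `x` enters and `d` leaves.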

**Why is this the optimal value?**

In our obj row there are no negative values, so we are not able to improve the obj. Can you explain why we only have to look at negative values in the obj row? Sure: let's assume we want to have `d` as our entering variable (yeah, we had that one already, but just try to do it). In the first Gaussian elimination step (changing the obj row) we multiply one of the rows 1-4 by `-l/e`, where `l` is the value in the obj row (here `l = 0.5`) and `e` is the entry of that row in the `d` column (normally the positive entry with the lowest ratio). Let's assume for a moment that we can choose any `e` we want (except 0, because we can't divide by 0).

Then we would have the factors:

- `-0.5/(-0.5) = 1`
- `-0.5/(-0.5) = 1`
- `-0.5/0.5 = -1`
- `-0.5/0.5 = -1`

We take the last two first: here we would have an obj of `-1*5.5+5.5=0`, which is less than 5.5, or `-1*0.5+5.5 = 5`, which is less than 5.5 as well. Therefore it's not reasonable to consider these two.

Let's take the first one, which looks promising: `1*1.5+5.5 = 7`. Looks good, because our obj could be 7. But do you remember our constraint `y <= 6`? So that can't be correct, but what would our matrix look like?

In the second step we would apply this elimination to the other rows. Then we would have a negative value on the right-hand side, which means that `c=-1`, and that is not allowed because we have the constraint that every variable is `>=0`.

Therefore we can only improve our obj if there is a negative value in our obj row.
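In code this optimality test is just a sign check on the objective row (a minimal sketch using the final objective row of our small example):

```python
import numpy as np

# Final objective row of the small example (columns x, y, a, b, c, d | rhs)
obj_row = np.array([0.0, 0.0, 0.0, 0.0, 0.5, 0.5, 5.5])

# We are optimal exactly when no entry left of the rhs is negative
is_optimal = bool((obj_row[:-1] >= 0).all())
print(is_optimal)  # True
```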

**You want to see some code?**

Back to our vegetable business again:

```
from Model.model import Model
import numpy as np
from time import time
m = Model()
"""
A Manufacturing Example
Machine | Product A | Product B | Product C | Availability
I | 20 | 10 | 10 | 2400
J | 12 | 28 | 16 | 2400
K | 15 | 6 | 16 | 2400
L | 10 | 15 | 0 | 2400
Total | 57 | 59 | 42 | 9600
Item | Product A | Product B | Product C
Revenue p.u. | 90$ | 100$ | 70$
Material p.u. | 45$ | 40$ | 20$
Profit p.u. | 45$ | 60$ | 50$
Maximum sales | 100 | 40 | 60
Maximize profit
"""
a = m.add_var("real+", name="a")
b = m.add_var("real+", name="b")
c = m.add_var("real+", name="c")
m.maximize(45*a+60*b+50*c)
m.add_constraint(20*a+10*b+10*c <= 2400)
m.add_constraint(12*a+28*b+16*c <= 2400)
m.add_constraint(15*a+6*b+16*c <= 2400)
m.add_constraint(10*a+15*b <= 2400)
m.add_constraint(a <= 100)
m.add_constraint(b <= 40)
m.add_constraint(c <= 60)
t0 = time()
m.solve()
print("Solved in %f" % (time()-t0))
m.print_solution()
```

This is the code where we explain the model to our program. We initialize variables, set the objective and define the constraints.

Let's begin with adding variables:

`a = m.add_var("real+", name="apple")`

Our class for the model looks like this:

```
class Model:
    def __init__(self, print_obj=False):
        self.constraints = []
        self.variables = []
        self.MINIMIZE = -1
        self.MAXIMIZE = 1
        if not print_obj:
            self.p = {}
        else:
            self.p = print_obj
        l = ['start_conf', 'leaving_entering']
        for pl in l:
            if pl not in self.p:
                self.p[pl] = False

    def add_var(self, ty, name="NA", value=None):
        if ty == "real+":
            x = RealNN(name, value, index=len(self.variables))
            self.variables.append(x)
            return x
```

First of all we initialize it without variables and constraints, and we can define what should be printed. The `add_var` function takes the type of the variable; at the moment we only use the type `real+` for non-negative real values. A variable can have a name and gets an index.

We'll have a look at the `RealNN` part in a second. Our next step is to describe our objective function:

`m.maximize(45*a+60*b+50*c)`

Normally you can't write `variable + variable` in Python, which is why we have the `RealNN` class.

```
from .constraint import Constraint
from .error import NonLinear
from .list_RealNN import List_RealNN

class RealNN:
    def __init__(self, name=None, value=None, index=0, factor=1):
        self.name = name
        self.value = value
        self.factor = factor
        self.index = index

    def __str__(self):
        if self.value is None:
            return "%s has no value" % self.name
        else:
            return "%s has value %.2f" % (self.name, self.value)

    def __eq__(self, other):
        return Constraint(self, "==", other)

    def __le__(self, other):
        return Constraint(self, "<=", other)

    def __add__(self, other):
        if self.value is not None and other.value is not None:
            return RealNN(value=self.factor*self.value+other.factor*other.value)
        else:
            return List_RealNN(self, other)

    def __radd__(self, other):
        if other == 0:
            return self
        else:
            return self.__add__(other)

    def __neg__(self):
        return RealNN(name=self.name, value=self.value, index=self.index, factor=-self.factor)

    def __sub__(self, other):
        if self.value is not None and other.value is not None:
            return RealNN(value=self.factor*self.value-other.factor*other.value)
        else:
            return List_RealNN(self, -other)

    def __rsub__(self, other):
        if other == 0:
            return self
        else:
            return self.__sub__(other)

    def __rmul__(self, factor):
        if isinstance(factor, (int, float, complex)):
            return RealNN(self.name, self.value, index=self.index, factor=factor)
        else:
            raise NonLinear("factor %s is not linear" % factor)

    def get_coefficients(self, l=False):
        if not l:
            l = 1
        l_factor = [0]*l
        l_factor[self.index] = self.factor
        return l_factor
```

What it basically does is define what happens if we write something like `apple+banana` or `3*apple` and so on. It's a bigger concept on its own and I could write an entire article about it, but here I only want to describe the simplex algorithm. If you want to hear more about this part => comment :)
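To give a flavor of the idea without the full `RealNN` code, here is a stripped-down sketch of operator overloading; `Var` and `LinExpr` are illustrative names I made up, not classes from the blog's repository:

```python
# `3*x + 2*y` builds a coefficient dictionary instead of computing a number.
class Var:
    def __init__(self, name):
        self.name = name

    def __rmul__(self, factor):
        # called for `3 * x`, since int doesn't know how to multiply a Var
        return LinExpr({self.name: factor})

    def __add__(self, other):
        return (1 * self) + other  # promote to LinExpr first

class LinExpr:
    def __init__(self, coeffs):
        self.coeffs = coeffs

    def __add__(self, other):
        if isinstance(other, Var):
            other = 1 * other
        merged = dict(self.coeffs)
        for name, fac in other.coeffs.items():
            merged[name] = merged.get(name, 0) + fac
        return LinExpr(merged)

x, y = Var("x"), Var("y")
expr = 3*x + 2*y + x
print(expr.coeffs)  # {'x': 4, 'y': 2}
```

The solver never evaluates the expression; it only reads off the coefficient list, exactly what `get_coefficients` does in `RealNN`.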

Back to the Model part:

```
def maximize(self, obj):
    self.obj_coefficients = obj.get_coefficients(len(self.variables))
    self.obj_type = self.MAXIMIZE

def minimize(self, obj):
    self.obj_coefficients = obj.get_coefficients(len(self.variables))
    self.obj_type = self.MINIMIZE
```

We get the coefficients of our objective and define the type as `MINIMIZE` or `MAXIMIZE`. In this blog article I will only explain the maximize part; minimization problems are a bit different. Let's go on with adding the constraints:

```
def add_constraint(self, constraint):
    self.constraints.append(constraint)
```

Our constraints are:

```
m.add_constraint(20*a+10*b+10*c <= 2400)
m.add_constraint(12*a+28*b+16*c <= 2400)
m.add_constraint(15*a+6*b+16*c <= 2400)
m.add_constraint(10*a+15*b <= 2400)
m.add_constraint(a <= 100)
m.add_constraint(b <= 40)
m.add_constraint(c <= 60)
```

I'll show you later how the constraints and the variables are stored, but you don't need to understand all the code behind it.

The matrix I showed you at the beginning is called a tableau. We fill it with zeros first:

`self.tableau = np.zeros((len(self.constraints)+1,len(self.variables)+1))`

We fill the tableau with the real values now:

```
i = 0
for constraint in self.constraints:
    coefficients = constraint.x.get_coefficients(len(self.variables))
    self.tableau[i] = coefficients+[constraint.y]
    i += 1
    if self.obj_type == self.MAXIMIZE:
        if constraint.type != "<=":
            raise Unsolveable("Type %s isn't accepted" % constraint.type)
    if self.obj_type == self.MINIMIZE:
        if constraint.type != ">=":
            raise Unsolveable("Type %s isn't accepted" % constraint.type)
```

The first part fills the tableau using our constraint class. We get the coefficients of a constraint as a list using `constraint.x.get_coefficients(len(self.variables))`, and `constraint.y` holds the right-hand side of the constraint. The second part tests whether we can solve the problem: it isn't allowed to use constraints like `x>=2` in maximization problems or `x<=2` in minimization problems, at least for now ;)

The result is:

```
[[ 20. 10. 10. 2400.]
[ 12. 28. 16. 2400.]
[ 15. 6. 16. 2400.]
[ 10. 15. 0. 2400.]
[ 1. 0. 0. 100.]
[ 0. 1. 0. 40.]
[ 0. 0. 1. 60.]
[ 0. 0. 0. 0.]]
```

The objective function is still missing in this tableau. We add it using:

```
# set obj
if self.obj_type == self.MINIMIZE:
    self.tableau[-1,:] = np.append(np.array(self.obj_coefficients), np.zeros((1,1)))
elif self.obj_type == self.MAXIMIZE:
    self.tableau[-1,:] = np.append(-np.array(self.obj_coefficients), np.zeros((1,1)))
```

After adding the objective function the tableau looks like this:

```
[[ 20. 10. 10. 2400.]
[ 12. 28. 16. 2400.]
[ 15. 6. 16. 2400.]
[ 10. 15. 0. 2400.]
[ 1. 0. 0. 100.]
[ 0. 1. 0. 40.]
[ 0. 0. 1. 60.]
[ -45. -60. -50. 0.]]
```

We can now solve the problem as explained above. First, though, the slack variables are still missing; adding them gives us our first basic feasible solution in the same step.

How we do it is described in `self.find_bfs()`.

```
def find_bfs(self):
    self.redef_matrix_bs_obj()
    self.row_to_var = [False for x in range(self.matrix.shape[0])]
    # build slack variables
    identity = np.eye(self.matrix.shape[0])
    identity = np.r_[identity, np.zeros((1,self.matrix.shape[0]))]
    self.tableau = np.c_[self.tableau[:,:-1], identity, self.tableau[:,-1]]
    self.redef_matrix_bs_obj()
    # range not including the b column
    # get all columns which have only one value => basis
    for c in range(len(self.variables),self.matrix.shape[1]-1):
        row = np.argwhere(self.matrix[:,c]!=0)[0][0]
        self.row_to_var[row] = c
    if self.p['start_conf']:
        print("Start Tableau:")
        print(self.tableau)
```

The idea of `self.redef_matrix_bs_obj()` is that our tableau has several parts: the objective row, the matrix (left-hand side) and the right-hand side.

```
def redef_matrix_bs_obj(self):
    self.matrix = self.tableau[:-1]
    self.bs = self.tableau[:-1,-1]
    self.obj = self.tableau[-1,:]
```

Then we define which row corresponds to which variable, as described in our smaller example. We fill the array with `False` first, as we don't have a basic feasible solution yet.

`self.row_to_var = [False for x in range(self.matrix.shape[0])]`

Now we can add our slack variables:

```
identity = np.eye(self.matrix.shape[0])
identity = np.r_[identity, np.zeros((1,self.matrix.shape[0]))]
self.tableau = np.c_[self.tableau[:,:-1], identity, self.tableau[:,-1]]
```

Our slack variables basically form an identity matrix that is appended to our tableau. The identity matrix itself gets a row of zeros as its last row for our objective function. The last line combines the old tableau with the identity matrix. The new tableau:

```
[[ 20. 10. 10. 1. 0. 0. 0. 0. 0. 0. 2400.]
[ 12. 28. 16. 0. 1. 0. 0. 0. 0. 0. 2400.]
[ 15. 6. 16. 0. 0. 1. 0. 0. 0. 0. 2400.]
[ 10. 15. 0. 0. 0. 0. 1. 0. 0. 0. 2400.]
[ 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 100.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 40.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 60.]
[ -45. -60. -50. 0. 0. 0. 0. 0. 0. 0. 0.]]
```
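The `np.r_` / `np.c_` gluing can be tried in isolation on the small two-variable example (a standalone snippet, not the solver itself):

```python
import numpy as np

# Tableau [A | b] of the small example, negated objective row last
tableau = np.array([[ 1.0, 0.0, 2.0],
                    [ 0.0, 1.0, 6.0],
                    [-1.0, 1.0, 5.0],
                    [ 1.0, 1.0, 6.0],
                    [ 0.0,-1.0, 0.0]])

identity = np.eye(4)                          # one slack column per constraint
identity = np.r_[identity, np.zeros((1, 4))]  # zero row for the objective
# insert the slack block between the coefficients and the right-hand side
tableau = np.c_[tableau[:, :-1], identity, tableau[:, -1]]
print(tableau.shape)  # (5, 7)
```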

We fill our corresponding variable array now:

```
# range not including the b column
# get all columns which have only one value => basis
row = 0
for c in range(len(self.variables),self.matrix.shape[1]-1):
    self.row_to_var[row] = c
    row += 1
```

The next step is to find an entering variable and a leaving variable. This step is called pivoting. As long as we find a negative value in the objective row we do this step over and over again.

```
solved = self.pivot()
steps = 1
while not solved:
    solved = self.pivot()
    steps += 1
```

First we check whether we are already optimal.

```
def pivot(self):
    breaked = False
    # check if the current tableau is optimal
    # if optimal every value in obj is non negative
    for c in range(self.matrix.shape[1]-1):
        if c in self.row_to_var:
            continue
        if self.obj[c] < 0:
            positive = np.where(self.matrix[:,c] > 0)[0]
            if len(positive):
                entering = c
                l = np.argmin(self.bs[positive]/self.matrix[positive,c])
                leaving_row = positive[l]
                leaving = self.row_to_var[leaving_row]
                breaked = True
                break
    if not breaked:
        return True
```

A variable that is already basic can't enter again, therefore we skip those columns using `continue`. If we find a negative value in the objective row, we try to find a positive value in that column to get our leaving variable. The entering variable is just the column we are testing right now. For the leaving variable we choose the row with the minimum ratio of the right-hand side divided by the positive value.

In our example:

```
A B C s1 s2 s3 s4 s5 s6 s7 b
0 [[ 20. 10. 10. 1. 0. 0. 0. 0. 0. 0. 2400.]
1 [ 12. 28. 16. 0. 1. 0. 0. 0. 0. 0. 2400.]
2 [ 15. 6. 16. 0. 0. 1. 0. 0. 0. 0. 2400.]
3 [ 10. 15. 0. 0. 0. 0. 1. 0. 0. 0. 2400.]
4 [ 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 100.]
5 [ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 40.]
6 [ 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 60.]
7 [ -45. -60. -50. 0. 0. 0. 0. 0. 0. 0. 0.]]
```

We begin with the first column and pick the positive values in that column. We can choose 20, 12, 15, 10 or 1 as our pivot element. To get the best pivot element we divide the right-hand side by these values:

```
2400/20 = 120
2400/12 = 200
2400/15 = 160
2400/10 = 240
100/1 = 100
```

The minimum value is the last one which means that 1 is our pivot element and the corresponding variable is s5 because in row 4 the s5 column has the 1.

If we found a leaving and an entering variable we know that we aren't optimal yet; otherwise we are optimal and return `True`.

Our new entering variable is `A`, therefore we have to eliminate every element in that column except the one in row 4. We use Gaussian elimination here. We start with the objective row:

```
fac = -self.tableau[-1,entering]/self.matrix[leaving_row,entering]
self.tableau[-1] = fac*self.matrix[leaving_row]+self.tableau[-1]
```

This gives us our new tableau and our objective of 4500.

```
[[ 20. 10. 10. 1. 0. 0. 0. 0. 0. 0. 2400.]
[ 12. 28. 16. 0. 1. 0. 0. 0. 0. 0. 2400.]
[ 15. 6. 16. 0. 0. 1. 0. 0. 0. 0. 2400.]
[ 10. 15. 0. 0. 0. 0. 1. 0. 0. 0. 2400.]
[ 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 100.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 40.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 60.]
[ 0. -60. -50. 0. 0. 0. 0. 45. 0. 0. 4500.]]
```

Now we do Gaussian elimination for the other rows as well, and we want a `1` in our new pivot row, because we always want to have an identity matrix embedded somewhere in our tableau.

```
# Gaussian elimination
for row in range(self.matrix.shape[0]):
    if row != leaving_row:
        fac = -self.matrix[row,entering]/self.matrix[leaving_row,entering]
        self.matrix[row] = fac*self.matrix[leaving_row]+self.matrix[row]
self.matrix[leaving_row] /= self.matrix[leaving_row,entering]
```

The result is:

```
[[ 0. 10. 10. 1. 0. 0. 0. -20. 0. 0. 400.]
[ 0. 28. 16. 0. 1. 0. 0. -12. 0. 0. 1200.]
[ 0. 6. 16. 0. 0. 1. 0. -15. 0. 0. 900.]
[ 0. 15. 0. 0. 0. 0. 1. -10. 0. 0. 1400.]
[ 1. 0. 0. 0. 0. 0. 0. 1. 0. 0. 100.]
[ 0. 1. 0. 0. 0. 0. 0. 0. 1. 0. 40.]
[ 0. 0. 1. 0. 0. 0. 0. 0. 0. 1. 60.]
[ 0. -60. -50. 0. 0. 0. 0. 45. 0. 0. 4500.]]
```

The interpretation of this tableau is simply: if we only produce apples, we can make a profit of 4500$, and to get this value we have to produce 100 apples (see the right-hand side).

We define our new basis using:

```
def new_basis(self,entering,leaving):
    for row in range(self.matrix.shape[0]):
        if self.row_to_var[row] == leaving:
            self.row_to_var[row] = entering
            break
```

It is called using `self.new_basis(entering, leaving)`.

In the end we have this tableau:

```
[[ 0 1 0 -0 0 0 0 0 0 -0.5 16.4]
[ 0 0 1 0 0 0 0 0 0 1 60 ]
[ 0 0 0 -0.8 0.1 1 0 0 0 -9.2 114.5]
[ 0 0 0 -0.2 -0.5 0 1 0 0 9.5 1336.4]
[ 1 0 0 0.1 -0 0 0 0 0 -0.3 81.8]
[ 0 0 0 -0.1 0 0 0 1 0 0.3 18.2]
[ 0 0 0 0 -0 0 0 0 1 0.5 23.6]
[ 0 0 0 1.2 1.7 0 0 0 0 10.5 7663.6]]
```

Therefore our result in human-readable form is the following: we can produce 81.8 apples, 16.4 bananas and 60 carrots. Our profit is 7663.6$. Let's have a look at our description of the problem again.

Machine | Product A | Product B | Product C | Availability |
---|---|---|---|---|
I | 20 | 10 | 10 | 2400 |
J | 12 | 28 | 16 | 2400 |
K | 15 | 6 | 16 | 2400 |
L | 10 | 15 | 0 | 2400 |
Total | 57 | 59 | 42 | 9600 |

and a closer look to our machines:

```
20*81.8+10*16.4+10*60 = 2400
12*81.8+28*16.4+16*60 = 2400.8
15*81.8+6 *16.4+16*60 = 2285.4
10*81.8+15*16.4 = 1064
```

At first glance the second line looks like we did something wrong, because we had the constraint that every machine has a capacity of only 2400. Actually I rounded the values for the amount of apples and bananas; more accurate values are 81.818182 apples and 16.363636 bananas, which would give us 2399.999992. Even these values are rounded, which we can see more easily on the right-hand side of our tableau.

```
16.4
60
114.5
1336.4
81.8
18.2
23.6
7663.6
```

We can use our `row_to_var`

array to find the corresponding variable for each of these values.

```
16.4 bananas
60 carrots
114.5 slack 3
1336.4 slack 4
81.8 apples
18.2 slack 5
23.6 slack 6
7663.6 our profit
```

I explained the meaning of apples, bananas and carrots already, but what is the meaning of those slack variables? The slack variables 3, 4, 5 and 6 correspond to constraints 3, 4, 5 and 6:

```
m.add_constraint(15*a+6*b+16*c <= 2400)
m.add_constraint(10*a+15*b <= 2400)
m.add_constraint(a <= 100)
m.add_constraint(b <= 40)
```

All other slack variables are 0, which means the machines I and J are running at capacity. The last machine L has a lot of unused capacity, which might be interesting for our business. We aren't allowed to produce more carrots (60), but we are allowed to produce more apples and bananas. This means that if we were allowed to produce more carrots we might be able to increase our profit (for example by finding new markets where we can sell carrots). On the other hand, if we were able to increase the productivity of our first two machines, we would be able to produce more apples and bananas.
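As a final sanity check we can redo the machine computation with exact fractions; the repeating decimals 81.818182 and 16.363636 suggest the exact values 900/11 and 180/11 (my reading of the tableau, stated as an assumption):

```python
from fractions import Fraction as F

# Exact candidate solution: a = 900/11 apples, b = 180/11 bananas, c = 60 carrots
a, b, c = F(900, 11), F(180, 11), F(60)

machines = [
    20*a + 10*b + 10*c,   # machine I
    12*a + 28*b + 16*c,   # machine J
    15*a +  6*b + 16*c,   # machine K
    10*a + 15*b,          # machine L
]
profit = 45*a + 60*b + 50*c
print([float(m) for m in machines])  # machines I and J sit exactly at 2400
print(float(profit))                 # 7663.6363...
```

With exact arithmetic the seemingly violated constraint of machine J evaluates to exactly 2400, so the 2400.8 above really was just rounding noise.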

I hope you enjoyed the first part of the simplex algorithm and might be able to use it in your real-life business ;)

As mentioned during the article, there are different parts which might need a second look, like how to use this for solving minimization problems.

Tell me what you think about it and what kind of articles you would like to read in the future.

As always, you can find the code on GitHub.

A new article about the simplex algorithm: Convert to standard form

If you're interested in another series check out my posts on constraint programming: Solving Sudokus again but also much more, faster and in depth: Constraint Programming Solver from Scratch.

If you've enjoyed this and some other posts on my blog I would appreciate a small donation, either via PayPal or Patreon; the latter is a monthly donation, whereas you can choose PayPal for a one-time thing. With the monthly donation you'll be able to read new posts earlier than everyone else, and it's easier to stay in contact.

Want to be updated? Consider subscribing and receiving a mail whenever a new post comes out.

© Ole Kröger. Last modified: February 27, 2022. Website built with Franklin.jl.