Simplex: Dual problem


Creation date: 2017-08-16

Tags: simplex, python, constraint-programming, linear-programming, optimization

In the last couple of weeks we learned a lot about the Simplex algorithm. I hope you've enjoyed this series so far. Leave a comment if there's a specific topic you'd like me to cover.

I have at least one more article in mind for this topic, but now let's dive into this short one. In the last article I explained how to solve all the different kinds of linear problems.

If you missed the start of this series: Simplex: Solving linear problems.

Both of these articles are about solving such problems using Python. But how do you solve them by hand?

Really? Who solves this by hand? Well, good point... but doing it once will deepen your understanding. I promise!

Let's have a look back at our diet problem:

The simple version:

\[ \begin{array}{r} \text{min } & 25x_1 & + & 130x_2 & + & 85x_3 & + & 70x_4 & + & 95x_5 & + & 98x_6 \\ & 110x_1 & + & 205x_2 & + & 160x_3 & + & 160x_4 & + & 420x_5 & + & 260x_6 & \geq & 2000 \\ & 4x_1 & + & 32x_2 & + & 13x_3 & + & 8x_4 & + & 4x_5 & + & 14x_6 & \geq & 55 \\ & 2x_1 & + & 12x_2 & + & 54x_3 & + & 285x_4 & + & 22x_5 & + & 80x_6 & \geq & 800 \\ \end{array} \]

We solved it last time and the objective was around 541. If we look at the second constraint and compare it with the objective function, we see that every coefficient in the objective we want to minimize is higher than the corresponding coefficient in the second constraint. Since all amounts are non-negative, the objective must therefore be at least 55: any attempt to push it below that value would break the second constraint. Let's multiply the second constraint by 4:

\[ \begin{array}{r} \text{min } & 25x_1 & + & 130x_2 & + & 85x_3 & + & 70x_4 & + & 95x_5 & + & 98x_6 \\ & 110x_1 & + & 205x_2 & + & 160x_3 & + & 160x_4 & + & 420x_5 & + & 260x_6 & \geq & 2000 \\ & 16x_1 & + & 128x_2 & + & 52x_3 & + & 32x_4 & + & 16x_5 & + & 56x_6 & \geq & 220 \\ & 2x_1 & + & 12x_2 & + & 54x_3 & + & 285x_4 & + & 22x_5 & + & 80x_6 & \geq & 800 \\ \end{array} \]

We can see that the argument still holds, so we have a better lower bound for our objective: 220. Let's improve it further by adding one eighth of the third constraint on top:

\[ \begin{array}{r} \text{min } & 25x_1 & + & 130x_2 & + & 85x_3 & + & 70x_4 & + & 95x_5 & + & 98x_6 \\ & 110x_1 & + & 205x_2 & + & 160x_3 & + & 160x_4 & + & 420x_5 & + & 260x_6 & \geq & 2000 \\ & 16.25x_1 & + & 129.5x_2 & + & 58.75x_3 & + & 67.625x_4 & + & 18.75x_5 & + & 66x_6 & \geq & 320 \\ & 2x_1 & + & 12x_2 & + & 54x_3 & + & 285x_4 & + & 22x_5 & + & 80x_6 & \geq & 800 \\ \end{array} \]
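Here's a tiny NumPy sketch (just my own illustration, using the numbers from the problem above) that checks what we did by hand: the multipliers \(y = (0, 4, \tfrac{1}{8})\) keep every combined coefficient at or below the corresponding cost, so the combined right-hand side of 320 is a valid lower bound.

```python
import numpy as np

c = np.array([25, 130, 85, 70, 95, 98])          # objective coefficients
A = np.array([[110, 205, 160, 160, 420, 260],    # constraint coefficients
              [4, 32, 13, 8, 4, 14],
              [2, 12, 54, 285, 22, 80]])
b = np.array([2000, 55, 800])                    # right-hand sides

y = np.array([0, 4, 1/8])      # our hand-picked multipliers
print(np.all(y @ A <= c))      # True: the combination stays below the costs
print(y @ b)                   # 320.0: a valid lower bound on the objective
```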

Maybe you already see what we are doing here. We try to combine our three constraints, using non-negative multipliers, in such a way that every combined coefficient stays less than or equal to the corresponding coefficient of the objective function. The combined right-hand side is then a lower bound on the objective, and we want to make that bound as large as possible. Finding the best such combination is itself a linear problem, so we can rewrite our initial problem as the following one. This problem is called the dual problem.

\[ \begin{array}{r} \text{max } & 2000y_1 & + & 55y_2 & + & 800y_3\\ & 110y_1 & + & 4y_2 & + & 2y_3 & \leq & 25 \\ & 205y_1 & + & 32y_2 & + & 12y_3 & \leq & 130 \\ & 160y_1 & + & 13y_2 & + & 54y_3 & \leq & 85 \\ & 160y_1 & + & 8y_2 & + & 285y_3 & \leq & 70 \\ & 420y_1 & + & 4y_2 & + & 22y_3 & \leq & 95 \\ & 260y_1 & + & 14y_2 & + & 80y_3 & \leq & 98 \\ \end{array} \]
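If you'd rather let a solver confirm that this dual really has the same optimal value as the primal, here's a small sketch using scipy.optimize.linprog (that's an assumption on my part; it's not the script from this series). linprog minimizes and expects \(\leq\) constraints, so the primal's \(\geq\) rows are negated and the dual's maximization is handled by negating its objective:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([25, 130, 85, 70, 95, 98])          # primal costs
A = np.array([[110, 205, 160, 160, 420, 260],
              [4, 32, 13, 8, 4, 14],
              [2, 12, 54, 285, 22, 80]])
b = np.array([2000, 55, 800])                    # nutrient requirements

# Primal: min c^T x  s.t.  A x >= b, x >= 0   (>= turned into <= by negating)
primal = linprog(c, A_ub=-A, b_ub=-b)

# Dual:   max b^T y  s.t.  A^T y <= c, y >= 0  (max turned into min by negating b)
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(primal.fun, -dual.fun)   # both should come out around 541.1
```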

What we did in our first step was to set \(y_1 = 0, y_2 = 1, y_3 = 0\). This fulfills all of the constraints and gives an objective value of 55. This problem will give us the same optimal objective as our initial problem. That's somehow awesome, I think. But how do we get our values for \(x_1,x_2,\dots,x_6\), or in this case: how many oats should we eat?

That's the really awesome part... we just get them for free.

The tableau we got last time was (rounded to 4 decimals)

\[ \begin{array}{rrrrrrrrrr|r} x_1 &x_2 & x_3 & x_4 & x_5 & x_6 & s_1 & s_2 & s_3 & b \\ \hline 1 & 9.8729 & 3.4232 & 0 & 0 & 3.279 & 0.0027 & -0.3289 & 0.0077 & 6.4712 \\ 0 & 0.1388 & 0.2115 & 1 & 0 & 0.2846 & 0.0002 & -0.0045 & -0.0035 & 2.6013 \\ 0 & -2.1505 & -0.5962 & 0 & 1 & -0.3481 & -0.0032 & 0.0878 & -0.0007 & 2.0761 \\ \hline 0 & 77.7619 & 41.2523 & 0 & 0 & 29.1791 & 0.2182 & 0.1904 & 0.1178 & -541.1017 \\ \end{array} \]

We get the optimal values for \(x_1,\dots,x_6\) by looking for the columns that contain a single one and zeros everywhere else and reading off the corresponding right-hand-side entries: here \(x_1 \approx 6.4712\), \(x_4 \approx 2.6013\), \(x_5 \approx 2.0761\) and the rest are zero. Now let's look at the objective row, which gives us some additional information. The last four values are the interesting ones. The simple part: 541.1017 is our objective (it shows up negated in the tableau).
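If you prefer to read that off programmatically, here's a small sketch that scans the \(x\) columns for unit vectors and picks up the matching right-hand-side entries (it uses the rounded numbers from the tableau above, so it's only an illustration):

```python
import numpy as np

# The final tableau from above: three constraint rows plus the objective row,
# columns x1..x6, s1..s3, b. The entries are rounded, so treat this purely
# as an illustration.
T = np.array([
    [1,  9.8729,  3.4232, 0, 0,  3.2790,  0.0027, -0.3289,  0.0077,    6.4712],
    [0,  0.1388,  0.2115, 1, 0,  0.2846,  0.0002, -0.0045, -0.0035,    2.6013],
    [0, -2.1505, -0.5962, 0, 1, -0.3481, -0.0032,  0.0878, -0.0007,    2.0761],
    [0, 77.7619, 41.2523, 0, 0, 29.1791,  0.2182,  0.1904,  0.1178, -541.1017],
])

rhs = T[:3, -1]                       # the b column of the constraint rows
x = np.zeros(6)
for j in range(6):                    # only the x columns
    col = T[:, j]
    if np.isclose(col.sum(), 1) and np.sum(np.isclose(col, 0)) == 3:
        x[j] = rhs[np.argmax(col[:3])]   # the row where the single 1 sits
print(x)                              # x1, x4 and x5 are basic, the rest are 0
```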

Hope you're ready for the really awesome stuff...

\[ 0.2182 \cdot 2000 + 0.1904 \cdot 55 + 0.1178 \cdot 800 = 541.112 \]

Are you amazed or not?

Probably not :D Well, this is nearly our objective (the difference is just rounding error). That's okay, but not really amazing. The amazing part is that these three values are exactly our \(y_1,y_2,y_3\). Does this seem amazing enough? Okay, when we solve the dual, what we actually want back are the values of \(x_1,\dots,x_6\), right?

Now, after solving the primal, which is the name of our initial problem (the "normal" one), we got the solution of the dual for free. What we want is the other direction: the solution of the primal after solving the dual.

What did we actually do to get the dual problem? We transposed our constraint matrix, swapped the roles of the objective coefficients and the right-hand side, flipped the \(\geq\) constraints into \(\leq\) ones, and changed from minimization to maximization. If we had started with a maximization problem, we would change it to a minimization problem.
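As a rough sketch of that transformation in code (my own illustration, not the script from the repository):

```python
import numpy as np

def dual_of_min_problem(c, A, b):
    """Turn   min c^T x  s.t.  A x >= b, x >= 0
    into its dual   max b^T y  s.t.  A^T y <= c, y >= 0.
    A sketch of the transformation described above."""
    A = np.asarray(A, dtype=float)
    # new objective, new constraint matrix, new right-hand side
    return np.asarray(b, dtype=float), A.T, np.asarray(c, dtype=float)

c_dual, A_dual, b_dual = dual_of_min_problem(
    c=[25, 130, 85, 70, 95, 98],
    A=[[110, 205, 160, 160, 420, 260],
       [4, 32, 13, 8, 4, 14],
       [2, 12, 54, 285, 22, 80]],
    b=[2000, 55, 800],
)
print(A_dual.shape)   # (6, 3): six dual constraints, three dual variables
```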

Transposing a matrix twice gives us the original matrix back, so we can transpose the dual back into the primal. In other words: the dual of the dual is the primal. And since we can get the solution of the dual by solving the primal, we can also get the solution of the primal by solving the dual.

Why did we even do all of this creepy stuff, and what is the advantage? In the last post I explained that simplex works on a standard form and that minimization doesn't fit it directly. We had to convert the problem into a maximization problem, then it was hard to find a basic feasible solution, and all of that takes time.

Using the dual we can convert a minimization problem into a maximization problem much more easily, just by transposing the matrix.

Wow! Well, wait a second... why did I even read the last post, and why would I ever convert a minimization problem into a maximization one the hard way?

Good question. In the last post I described not only how to get from a minimization problem to the standard form, but also how to use \(\leq\) and \(\geq\) in the same problem and what to do with a negative right-hand side. Besides, transposing the matrix isn't always the best choice. It depends on how many variables and how many constraints we have. If the number of variables in our diet problem far exceeds the number of constraints, it might be better to use the previous method. In other words: if we have way more ingredients than nutrient constraints, we probably don't want to solve the dual.

Solving the dual in that case would increase the number of constraints, and having more constraints hurts more than having more variables.
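As a very rough rule of thumb in code (my own simplification of the argument above, not a hard rule):

```python
def should_dualize(num_constraints, num_variables):
    # Dualizing swaps the two counts, so it only pays off when it actually
    # reduces the number of constraints, which drives most of the simplex work.
    return num_constraints > num_variables

print(should_dualize(3, 6))   # False for the diet problem's 3 constraints, 6 variables
```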

This time there isn't much new code for the script itself because the steps were pretty simple. Have a look at the repository to get the newest version; I'm trying to improve that script almost every day at the moment.

Next post: How to add constraints.

If you've enjoyed this and other posts on my blog, I would appreciate a small donation, either via PayPal or Patreon. The latter is a monthly donation; choose PayPal for a one-time contribution. Monthly supporters get to read new posts earlier than everyone else, and it's easier to stay in contact.


