Creation date: 2020-03-09
This is an ongoing series about how to build a constraint solver. It's part 18 of the series, so we have come a long way. In the last weeks you might have seen that I implemented a lot of stuff in between. Since a few days I also have a bigger side project: visualizing the data we have about the new coronavirus COVID-19. I've written a blog post about it, Visualizing COVID-19, but there are a lot of things left to do to make it really useful; it was just a fast way of trying things out. I want to make it interactive, scale by population/population density/area, and get an idea of the likelihood of getting infected in each country by combining those data.
Nevertheless I want to explain a major change in the ConstraintSolver in this post.
Last time I implemented the option to optimize linear objectives. The problem is that I only had a very simple way of computing bounds. That means if we have the following problem:
m = Model(CS.Optimizer)
@variable(m, 0 <= x[1:10] <= 15, Int)
@constraint(m, sum(x) <= 15)
@objective(m, Max, sum(x))
and run optimize!(m), it takes a crazy amount of time to prove that the optimal solution is 15. So far the bound computation only worked with the variables themselves: we have a look at the maximum value of each variable and add those up to obtain a bound, which here gives 150. With the additional constraint sum(x) <= 15 we could immediately reduce that to 15, but well, the ConstraintSolver isn't that intelligent yet.
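The naive bound from above boils down to this (a toy sketch of the idea, not the solver's actual code):

```python
def naive_objective_bound(upper_bounds):
    # Sum each variable's maximum; every constraint is ignored entirely.
    return sum(upper_bounds)

naive_objective_bound([15] * 10)  # 150, even though sum(x) <= 15 caps the objective at 15
```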
My first idea was to use a knapsack heuristic for this. I started programming and realized way too late that it doesn't work for negative coefficients or negative values.
Then I tried to only use it for constraints where it actually works, like the one above, and someone opened issue #83. It's quite a good problem to show the next flaw my approach had.
I'll show the problem in a minute, but first I want to explain my approach in a bit more detail. Every constraint had a function which took the objective function as input; with the knapsack heuristic I computed a bound for each constraint and the objective separately and used the tightest of those bounds.
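A sketch of such a per-constraint heuristic (my reconstruction for illustration, not the actual implementation): it is essentially the fractional knapsack bound, and as noted it is only valid for positive coefficients and non-negative variables.

```python
def knapsack_bound(obj_coeffs, con_coeffs, capacity, upper_bounds):
    """Upper bound on max sum(c_i * x_i) subject to sum(a_i * x_i) <= capacity
    and 0 <= x_i <= ub_i.  Only valid for positive c_i and a_i --
    exactly the limitation mentioned in the text."""
    # take variables in order of objective gain per unit of used capacity
    order = sorted(range(len(obj_coeffs)),
                   key=lambda i: obj_coeffs[i] / con_coeffs[i],
                   reverse=True)
    bound, remaining = 0.0, capacity
    for i in order:
        # use as much of x_i as the remaining capacity allows
        take = min(upper_bounds[i], remaining / con_coeffs[i])
        bound += obj_coeffs[i] * take
        remaining -= con_coeffs[i] * take
        if remaining <= 0:
            break
    return bound

# the problem from the start: max sum(x), sum(x) <= 15, 0 <= x_i <= 15
knapsack_bound([1] * 10, [1] * 10, 15, [15] * 10)  # 15.0
```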
The problem is of course that I used each constraint individually instead of combining them:
Code from the mentioned issue:
using JuMP
using ConstraintSolver
const CS = ConstraintSolver
model = Model()
# Variables
@variable(model, 0 <= inclusion[h = 1:3] <= 1, Int);
@variable(model, 0 <= allocations[h = 1:3, a = 1:3] <= 1, Int);
@variable(model, 0 <= days[h = 1:3, a = 1:3] <= 5, Int);
# Constraints
@constraint(model, must_include[h = 1:3],
sum(allocations[h,a] for a in 1:3) <= inclusion[h]
);
# at least n
@constraint(model, min_hospitals,
sum(inclusion[h] for h in 1:3) >= 3
);
# every h must be allocated at most one a
@constraint(model, must_visit[h = 1:3],
sum(allocations[h,a] for a in 1:3) <= 1
);
# every allocated h must have fewer than 5 days of visits per week
@constraint(model, max_visits[h = 1:3],
sum(days[h, a] for a in 1:3) <= 5 * inclusion[h]
);
benefit = @expression(model,
sum(days[h,a] * 5
for h in 1:3, a in 1:3));
@objective(model, Max, benefit);
set_optimizer(model, CS.Optimizer)
optimize!(model)
The objective is a summation over the days variables, which are directly limited by this constraint:
@constraint(model, max_visits[h = 1:3],
sum(days[h, a] for a in 1:3) <= 5 * inclusion[h]
);
inclusion is a binary variable, so we can definitely limit each of these sums by 5. Now we have three such constraints with three day variables each. Each sum is limited by 5, which means that benefit is limited by \(5\cdot 5 + 5\cdot 5 + 5\cdot 5 = 75\). But my code looked at each constraint separately and used the basic max-of-each-variable approximation for the variables "unconstrained" by that individual constraint. Since six of the nine day variables lie outside any single constraint and each is itself bounded by 5, this produced the bound \(5\cdot 5 + 5\cdot (5 \cdot 6) = 175\).
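In plain arithmetic the two bounds come out as:

```python
# Bound from looking at a single max_visits constraint: its three day
# variables together contribute 5 * 5 to the objective, while the six
# day variables outside it fall back to their individual maxima of 5
# (each with objective coefficient 5).
single_constraint_bound = 5 * 5 + 5 * (5 * 6)  # 25 + 150 = 175

# Combining all three max_visits constraints gives the tight bound:
combined_bound = 3 * (5 * 5)  # 75
```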
Therefore I tried to combine constraints to compute better bounds, but coupled with the fact that my heuristic only works for positive values and positive coefficients, I came to the conclusion that it's probably better to go with something that already exists.
I basically need to solve a linear problem to compute the bounds, and instead of building a simplex solver from scratch I decided to use JuMP and the simplex solvers that are already available. If you want to know how a simplex solver works, you can read my posts on it: How to build a simplex solver?
The fact that I already built such a solver from scratch also helped me decide that this time I will use a solver that is known to work. Nevertheless I might come back to building one in Julia again, as the one I've built is in Python.
Kinda happy with that decision, I set out to make it work. The things I had to do weren't that many:
Add a new solver option
Create the basic JuMP model for the LP
Set the bounds in the LP model based on the ones in the main constraint solver
Solve the LP model and use the objective as the bound
We need a bit of an idea of how JuMP and MOI work to create the LP model. I just want to show a very small part of it:
lp_backend = backend(lp_model)
# iterate through all constraints and add all supported constraints
for constraint in com.constraints
    if MOI.supports_constraint(
        model.options.lp_optimizer.optimizer_constructor(),
        typeof(constraint.fct), typeof(constraint.set)
    )
        MOI.add_constraint(lp_backend, constraint.fct, constraint.set)
    end
end
In that part I iterate over all constraints and check whether the LP solver supports each constraint; if yes, I add it to the LP model. This makes sure that we don't add a constraint which is not supported by the LP solver, for example the all different constraint. It also means that in the future it would be very easy to support non-linear constraints this way, once I support them in the constraint solver.
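The remaining steps (set the variable bounds from the current search state, solve, read off the objective) might look roughly like this. This is a hypothetical sketch: lp_x, cs_lower_bounds and cs_upper_bounds are names I made up for illustration, not the actual internals; only the JuMP calls themselves are real.

```julia
# Hypothetical sketch: tighten the LP variables to the current domains
# of the constraint solver, solve the relaxation and use its objective
# value as a bound for the search.
for (i, var) in enumerate(lp_x)
    set_lower_bound(var, cs_lower_bounds[i])
    set_upper_bound(var, cs_upper_bounds[i])
end
optimize!(lp_model)
bound = objective_value(lp_model)  # valid bound for a Max objective
```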
We need to change the optimizer in our previous example:
- set_optimizer(model, ConstraintSolver.Optimizer)
+ cbc_optimizer = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)
+ set_optimizer(model, optimizer_with_attributes(
+ CS.Optimizer,
+ "lp_optimizer" => cbc_optimizer,
+ ))
and we need a using Cbc. Cbc is an LP solver which works nicely with JuMP. You could also use GLPK or Gurobi.
This now solves the problem in less than a second instead of using all your memory or taking at least half an hour, which of course is totally unacceptable for such a small problem.
At first I thought, probably for a couple of hours, that this solution is good and that I finally wouldn't have to worry about the bound computation again. I added this problem to my test cases:
cbc_optimizer = optimizer_with_attributes(Cbc.Optimizer, "logLevel" => 0)
model = Model(optimizer_with_attributes(
CS.Optimizer,
"lp_optimizer" => cbc_optimizer,
"logging" => [],
))
# Variables
@variable(model, 1 <= x[1:10] <= 15, Int)
# Constraints
@constraint(model, sum(x[1:5]) >= 10)
@constraint(model, sum(x[6:10]) <= 15)
@constraint(model, x in CS.AllDifferentSet(10))
@objective(model, Max, sum(x))
optimize!(model)
and again it took way too long to solve. I don't think I was ever patient enough to wait for it to finish.
In my mind I simply computed the optimal solution by setting x[6:10] to 1,2,3,4,5, which adds up to 15, and x[1:5] to 15,14,13,12,11, which adds up to 65.
But the LP solver has no knowledge of the alldifferent constraint, and the alldifferent constraint only limits the variables a little bit in the backtracking part.
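Back-of-the-envelope, the gap looks like this (my own arithmetic, not solver output):

```python
# LP relaxation without the alldifferent constraint: x[1:5] can all be 15
# (sum 75 satisfies sum >= 10) and x[6:10] can use the full budget of 15:
lp_bound = 5 * 15 + 15  # 90

# Optimum respecting alldifferent: x[1:5] = 15..11, x[6:10] = 1..5
optimum = sum([15, 14, 13, 12, 11]) + sum([1, 2, 3, 4, 5])  # 65 + 15 = 80
```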
Therefore each constraint is now able to add a constraint to the lp model and update the constraint before the lp model gets solved.
This means the all different constraint adds a linear constraint of the form:
@constraint(lp_model, sum(x) == y)
and y gets a new bound every time before the LP model is solved. One way of limiting y, which I currently use, is the following:
I sort the maximum values in descending order, which might result in [15,15,15,15,15,...]. Because of the all different constraint, each value after the first must be strictly smaller than the one before it, so the maximum value of the sum is \(15+14+13+12+11\) instead of \(15+15+15+15+15\).
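A minimal sketch of that bound on sum(x) under an alldifferent constraint (assuming integer variables):

```python
def alldifferent_max_sum(upper_bounds):
    # Walk the maxima in descending order; each chosen value must be
    # strictly smaller than the one chosen before it.
    bound, prev = 0, None
    for ub in sorted(upper_bounds, reverse=True):
        val = ub if prev is None else min(ub, prev - 1)
        bound += val
        prev = val
    return bound

alldifferent_max_sum([15, 15, 15, 15, 15])  # 15 + 14 + 13 + 12 + 11 = 65
```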
With this simple trick the problem can be solved in a short time frame as well.
Next up for the constraint solver:
Support constraints like x + y != 12
Other strategies to steer the search
Traversal strategy
Branching strategy
Support for MiniZinc models to get more test cases
etc
I hope you enjoyed the post!
Thanks for reading and special thanks to my five patrons!
List of patrons paying more than 4$ per month:
Site Wang
Gurvesh Sanghera
Szymon Bęczkowski
Currently I get 18$ per month via Patreon and PayPal.
For a donation of a single dollar per month you get early access to the posts. Try it out at the start of a month and if you don't enjoy it just cancel your subscription before pay day (end of month).
I'll keep you updated on Twitter: OpenSourcES, as well as on my more personal account: Wikunia_de.