Series: Constraint Solver in Julia

- Part One: Setup and backtracking
- Part Two: Pruning
- Part Three: Pruning+Benchmarking
- Part Four: Alldifferent constraint
- Part Five: Better data structure
- Part Six: Sum constraint first part
- Part Seven: Sum constraint speed-up

This is part 8 of the series about how to build a Constraint Programming solver.

I published seven posts in this series, which started last month, and I hope to keep working on it for a while. Either in the next part or in part 10 I think I'll do a recap post which goes through all the parts we've done so far. Maybe a recap video which shows the code. Not sure yet. If you have some ideas please comment!

**Done:** Recap video and post

Besides that, I saw a tweet yesterday by Juan Pablo Vielma, who is an Associate Professor of Operations Research at MIT and active in the Julia community as well.

> Congratulations to Google OR-tools’ CP-SAT solver on winning 4 gold medals in the 2019 @minizinc challenge! #cp2019 #orms pic.twitter.com/X2pyBHqy72
>
> — Juan Pablo Vielma (@J_P_Vielma) October 2, 2019

He also mentioned in a comment that he is currently looking into integrating Google OR-tools into JuMP.jl, which is the modeling language for mathematical optimization in Julia.

That would be very interesting. Nevertheless I will of course keep working on this, as it is for educational reasons (including how not to do some things :D).

Maybe it's also a good time to mention again why I'm writing these posts the way I do:

- I think reading blogs where everything works perfectly because the blogger is a specialist in their field doesn't get you far, similar to why Veritasium asks people questions in his videos instead of just showing the answer.
- Explaining while you're still on a similar level as your reader is far easier, as also explained by Wait But Why?, who is a layman in the topics before writing about them.
- I write better code if I have the feeling that I'm being watched :D I find bugs before I publish (and after :D) because I have to write everything down in a reasonable manner.
- You should not take everything you read as the truth. Definitely not on my blog, as there are probably far better ways to write a simplex solver or, now, a constraint programming solver.
- Hopefully I can bring you to the stage where you try out coding projects like this yourself. In university or school you often start with: write a fibonacci or prime function, but sometimes it doesn't get much more difficult than that. Try out some real problems and make them open source!

### Okay so what is this post about?

Sometimes you start, like I did, with the sudoku puzzle and then want to generalize more, and sometimes you don't quite reach the actual goal of the project besides enjoying programming and writing about it ...

I introduced the sum constraint and realized that we now need a right hand side (`rhs`) which we don't need for the `all_different` constraint. Therefore it defaults to `0`. Later we will have constraints like `a == b` or `a != b` where `a` and `b` are variables.
We would make a new constraint for `equal` and `not_equal` or something and would write it like:

`CS.add_constraint!(com, CS.equal, a; rhs=b)`

maybe, but remember that `rhs` is an integer now, so we would need to make it `Union{Integer,Variable}` or I don't know. It seems like this isn't the way to do it.

What would be the ideal way of writing it as the user?

Probably similar to the way it's done in JuMP.jl:

```
CS.add_constraint!(com, a == b)
CS.add_constraint!(com, a != b)
CS.add_constraint!(com, all_different([a,b]))
CS.add_constraint!(com, sum([a,b]) == 5)
```

A current limitation is that we can't have coefficients in a sum constraint, which seems kind of restrictive to me, as it shouldn't be too complicated to include them in the current implementation. Such that:

`CS.add_constraint!(com, 5*a-2*b == 5)`

is possible as well and

`CS.add_constraint!(com, 5*a-2*b-5+b==c)`

can be transformed into standard form as well, without the user needing to care about it.
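Just to illustrate the kind of transformation meant here, a standalone sketch with plain `index => coefficient` pairs instead of solver types and made-up variable indices: `5a - 2b - 5 + b == c` becomes `5a - b - c == 5`.

```
# sketch: normalize 5a - 2b - 5 + b == c into "sum(coeff*var) == rhs" form;
# variables are represented by made-up indices a => 1, b => 2, c => 3
lhs_terms = [1 => 5, 2 => -2, 2 => 1]   # 5a - 2b + b
lhs_const = -5
rhs_terms = [3 => 1]                    # c
rhs_const = 0

# move all variables to the left and combine duplicates
combined = Dict{Int,Int}()
for (idx, coeff) in lhs_terms
    combined[idx] = get(combined, idx, 0) + coeff
end
for (idx, coeff) in rhs_terms
    combined[idx] = get(combined, idx, 0) - coeff
end
# move the constants to the right
rhs = rhs_const - lhs_const

# combined == Dict(1 => 5, 2 => -1, 3 => -1) and rhs == 5,
# i.e. 5a - b - c == 5
```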

Now each constraint needs an `idx` and `indices` such that it gets called if a variable that the constraint subscribed to gets changed.

Each constraint can produce the same kind of output describing what changed, but takes different input information.

**Later:** I think it might be reasonable to save exactly what changed, so that we can reproduce everything in a tree-like structure, which makes optimization easier later, i.e. finding the sudoku solution where some values add up to the lowest value.
For now we only care about the first solution and don't optimize anything (besides performance ;) )

### What do we have to do?

The `add_constraint!` function will only get two inputs now: the model and a constraint of some type.
We will have different constraint types for `all_different`, `sum`, `equal` or `not_equal`. I would call the `sum` constraint type `LinearConstraint` as we have some kind of linear combination. It will have a `rhs` as well as, later, an operator to also support \(\leq\) and \(\geq\).

They all should be a constraint type, so we need some kind of inheritance. Julia uses so-called abstract types for that.

```
abstract type Constraint end
```

It doesn't really hold any information, but other types can declare themselves a subtype and we can write `constraint::Constraint` instead of `constraint::Union{AllDifferentConstraint,LinearConstraint,DirectConstraint}` or something like that. We could of course just use `constraint` without any type, but it is reasonable to specify what the function expects as input.
Now `AllDifferentConstraint` is actually the most basic constraint, as it only holds the `idx`, `fct`, `indices` and `pvals` which I think we need for all constraints. Therefore I call it `BasicConstraint`.

That is easier than the sum constraint so we start with it:

### BasicConstraint

```
abstract type Constraint end

mutable struct BasicConstraint <: Constraint
    idx     :: Int
    fct     :: Function
    indices :: Vector{Int}
    pvals   :: Vector{Int}
    BasicConstraint() = new()
end
```

We need the inner constructor `BasicConstraint() = new()` as we don't know the index at the beginning when we create the constraint. It has to be filled in by `ConstraintSolver.jl`, but the other fields will be filled in the corresponding function file like `all_different.jl`. The same is true for `pvals`.
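As a small standalone demo of what the empty constructor gives us (the struct definition is repeated so the snippet runs on its own):

```
abstract type Constraint end

mutable struct BasicConstraint <: Constraint
    idx     :: Int
    fct     :: Function
    indices :: Vector{Int}
    pvals   :: Vector{Int}
    BasicConstraint() = new()
end

bc = BasicConstraint()       # no fields filled in yet
isdefined(bc, :indices)      # false: reference fields start out undefined
bc.fct = sum                 # stand-in; normally set in e.g. all_different.jl
bc.indices = [1, 2, 3]
bc.idx = 1                   # normally set later by add_constraint!
isdefined(bc, :indices)      # true
```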

I also exclude `eq_sum.jl` for now and use my `current.jl` file for testing.

At the moment we have a `create_constraint` function which fills `pvals`, but maybe we should name it after what it does, similar to `set_in_all_different!`. So we create `set_pvals!(com::CS.CoM, constraint::Constraint)`. Then we have an `add_constraint!` function:

```
function add_constraint!(com::CS.CoM, constraint::Constraint)
    constraint.idx = length(com.constraints)+1
    push!(com.constraints, constraint)
    set_pvals!(com, constraint)
    for ind in constraint.indices
        push!(com.subscription[ind], constraint.idx)
    end
end
```

Very basic so far. Of course it doesn't work yet: we still need to build a constraint. Easy thing first: reformulate the constraints on the user's side.

These were the constraints before:

```
function add_sudoku_constr!(com, grid)
    for rc=1:9
        # row
        CS.add_constraint!(com, CS.all_different, grid[CartesianIndices((rc:rc,1:9))])
        # col
        CS.add_constraint!(com, CS.all_different, grid[CartesianIndices((1:9,rc:rc))])
    end
    for br=0:2
        for bc=0:2
            CS.add_constraint!(com, CS.all_different, grid[CartesianIndices((br*3+1:(br+1)*3,bc*3+1:(bc+1)*3))])
        end
    end
end
```

Now it looks like this:

```
function add_sudoku_constr!(com, grid)
    for rc=1:9
        # row
        variables = grid[CartesianIndices((rc:rc,1:9))]
        CS.add_constraint!(com, CS.all_different([variables...]))
        # col
        variables = grid[CartesianIndices((1:9,rc:rc))]
        CS.add_constraint!(com, CS.all_different([variables...]))
    end
    for br=0:2
        for bc=0:2
            variables = grid[CartesianIndices((br*3+1:(br+1)*3,bc*3+1:(bc+1)*3))]
            CS.add_constraint!(com, CS.all_different([variables...]))
        end
    end
end
```

I want to have a vector of variables, but the user stores them in a 2D array, so `[variables...]` flattens it to a vector. The `all_different` function used here is not defined yet.

Inside `all_different.jl`:

```
function all_different(variables::Vector{Variable})
    constraint = BasicConstraint()
    constraint.fct = all_different
    constraint.indices = Int[v.idx for v in variables]
    return constraint
end
```

Now let's try it out:

```
include("test/current.jl")
main(;benchmark=false)
```

Okay, first problem:

`ERROR: type BasicConstraint has no field in_all_different`

Well, it makes sense that the `all_different` constraint doesn't have that field, so we add a check: only if the constraint type has the field `in_all_different` do we perform it.

`if :in_all_different in fieldnames(typeof(constraint))`

If we only supported Julia 1.2 we could use `hasfield`, but I want to support earlier versions for now.
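The check itself is easy to play with in isolation, with two made-up toy types:

```
# toy types just to illustrate the field check
struct WithFlag
    in_all_different :: Bool
end

struct WithoutFlag
    x :: Int
end

has_flag(c) = :in_all_different in fieldnames(typeof(c))

has_flag(WithFlag(true))   # true
has_flag(WithoutFlag(1))   # false
```

On Julia ≥ 1.2 the same check would be `hasfield(typeof(c), :in_all_different)`.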

That's it for `BasicConstraint`. We will benchmark later to check whether this is somehow slower.

### LinearConstraint

The idea is that we have a function for `+` between two or more variables and later support coefficients as well, such that we can have constraints like `2x+4y == 7`. That will be saved as a `LinearConstraint`, and the first part (before the `==`) is of type `LinearVariables`.

```
mutable struct LinearVariables
    indices :: Vector{Int}
    coeffs  :: Vector{Int}
end
```

This just saves the index of each variable and the corresponding coefficient. I decided to only support integers. The full constraint type:

```
mutable struct LinearConstraint <: Constraint
    idx      :: Int
    fct      :: Function
    indices  :: Vector{Int}
    pvals    :: Vector{Int}
    coeffs   :: Vector{Int}
    operator :: Symbol
    rhs      :: Int
    in_all_different :: Bool
    LinearConstraint() = new()
end
```

The operator here is `:(==)` for the equality case.

I've created a `linearcombination.jl` file which holds the functions to create `LinearVariables`. The most basic one is:

```
function Base.:+(x::Variable, y::Variable)
    return LinearVariables([x.idx,y.idx], [1,1])
end
```

That way you can define a `+` function in Julia for your own types.
This returns a `LinearVariables`, so we also need a function to add a variable to an already existing `LinearVariables`:

```
function Base.:+(x::LinearVariables, y::Variable)
    # copy to avoid mutating the arrays of `x`
    lv = LinearVariables(copy(x.indices), copy(x.coeffs))
    push!(lv.indices, y.idx)
    push!(lv.coeffs, 1)
    return lv
end
```

You probably get the idea. We also need a function to create a `LinearVariables` when we have `2*x`, which is done analogously, and a function which adds two of those types directly.
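Those two missing methods could look roughly like this; the snippet repeats minimal stand-ins for `Variable` and `LinearVariables` so it runs on its own:

```
# minimal stand-ins so this sketch runs on its own; in the solver these
# types already exist
struct Variable
    idx :: Int
end

mutable struct LinearVariables
    indices :: Vector{Int}
    coeffs  :: Vector{Int}
end

# `2*x` creates a LinearVariables with a single index and coefficient
function Base.:*(coeff::Int, x::Variable)
    return LinearVariables([x.idx], [coeff])
end

# adding two linear combinations concatenates indices and coefficients
function Base.:+(x::LinearVariables, y::LinearVariables)
    return LinearVariables(vcat(x.indices, y.indices), vcat(x.coeffs, y.coeffs))
end

lv = 2*Variable(1) + 3*Variable(2)
# lv.indices == [1, 2] and lv.coeffs == [2, 3]
```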

Then in the `eq_sum.jl` file we actually have to build the constraint, i.e. combine it with `== rhs`:

```
function Base.:(==)(x::LinearVariables, y::Int)
    lc = LinearConstraint()
    lc.fct = eq_sum
    lc.indices = x.indices
    lc.coeffs = x.coeffs
    lc.operator = :(==)
    lc.rhs = y
    return lc
end
```

and I changed:

`function eq_sum(com::CS.CoM, constraint::Constraint; logs = true)`

to

`function eq_sum(com::CS.CoM, constraint::LinearConstraint; logs = true)`

as well as the formulation of `add_constraint!` in all test cases, i.e.

`CS.add_constraint!(com, sum([com_grid[CartesianIndex(ind)] for ind in s.indices]) == s.result)`

Now we only support coefficients that equal `1`, which is why I added:

```
if any(c -> c != 1, constraint.coeffs)
    throw(ArgumentError("Currently only a coefficient of 1 is possible in sum constraints"))
end
```

Okay maybe we should just add that functionality :D

Wait a second, I kinda made a mistake in the sum constraint :(

### Bugfix of sum constraint

I wrote:

```
if maxs[i] < pre_maxs[i]
    still_feasible, nremoved = remove_below!(com, search_space[idx], maxs[i])
```

which reads, to a person who doesn't know the code: if the maximum is lower than before, we remove values below the new maximum, which of course is garbage.

It has to be: remove values *above* the new maximum, but I had somehow swapped maximum and minimum before as well, therefore it worked. Weird!

Sorry for the small update here:

Let's check it again: we have `[1,2] + [5,9] == 6`, which should result in `[1]+[5]`, so:

We have to move the rhs to the left: `[1,2] + [5,9] - 6 == 0`. Then we compute the full minimum and the full maximum:
```
full_min = 1+5-6 # = 0
full_max = 2+9-6 # = 5
```

Now we remove our current minimum (the minimum of the first variable) from the overall minimum and get `0-1 = -1`. Then we check whether this is feasible, and it is, because using our range `[1,2]` we can raise it to `0` or higher. The important part is that we can raise it to 0!

The next check is: what is the maximum we can reach without the current variable? For the second variable that is \(5-9 = -4\), which makes sense, as the maximum of the first variable `2` minus the rhs `6` is \(-4\).

If this value plus the variable's own maximum is lower than 0, we can't reach the sum even if we use all our max values.

That's the feasibility check I've done so far, but we don't have to do it every time, as it is actually just a check whether the full max is \(\geq 0\) and the full min is \(\leq 0\). Therefore we only need to compute it once. I really didn't think that through.

Okay, to the actual thing: we want to prune values...

We have `c_min = 0-1 = -1` and `c_max = 5-2 = 3` for the first variable. The `c_min` gives us information about the maximum value we can use for the current variable, which is just `-(-1) = 1`, because if we go higher than that, even using all minimal values afterwards we still can't get down to our goal value 0.

This means we can remove all values above 1.

For the second part we look at variable `#2`:

`c_min = 0-5 = -5` and `c_max = 5-9 = -4`

`c_max` means that the maximum we can achieve without using this variable is \(-4\), so the minimum of the variable is \(4\), but we already know that it is at least \(5\). On the other hand, the `c_min` gives us that the new maximum is \(5\), which helps.

This fixes everything to `1+5 == 6`.

Now it seems to make sense. Sorry for that.
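To summarize the pruning reasoning above in code, here is a small standalone sketch using plain arrays instead of the solver's types:

```
# bound computation for [1,2] + [5,9] == 6
mins = [1, 5]
maxs = [2, 9]
rhs  = 6

# move the rhs to the left: x1 + x2 - 6 == 0
full_min = sum(mins) - rhs   # 1+5-6 = 0
full_max = sum(maxs) - rhs   # 2+9-6 = 5

bounds = Tuple{Int,Int}[]
for i in eachindex(mins)
    c_min = full_min - mins[i]   # lowest total achievable without variable i
    c_max = full_max - maxs[i]   # highest total achievable without variable i
    # to still reach 0, variable i must lie in [-c_max, -c_min],
    # intersected with its current bounds
    push!(bounds, (max(mins[i], -c_max), min(maxs[i], -c_min)))
end
# bounds == [(1, 1), (5, 5)], i.e. the constraint fixes 1 + 5 == 6
```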

### Coefficients

The simple idea is to work with the multiplied values first, so multiply the coefficient into the max and min value sums and divide it out later.

Whenever we have something like `7*x + y == 50`, where `y` is between 1 and 5 for example, we would get `y = 1` and `7*x = 49`, which means that we now have to divide 49 by 7 to get `x`. The same is true for `remove_below!` and `remove_above!`, and it also needs to be done in the feasibility check of `eq_sum`.
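Extending the earlier bound sketch with coefficients might look like this (a sketch, not the solver code); the only new part is dividing by the coefficient at the end, rounding inwards with `cld`/`fld` since the variables are integers:

```
# bounds for 7x + y == 50 with x in [1,10] and y in [1,5]
coeffs = [7, 1]
mins   = [1, 1]
maxs   = [10, 5]
rhs    = 50

# work with the multiplied values coeff*var first
full_min = sum(coeffs .* mins) - rhs   # 7+1-50 = -42
full_max = sum(coeffs .* maxs) - rhs   # 70+5-50 = 25

bounds = Tuple{Int,Int}[]
for i in eachindex(coeffs)
    c_min = full_min - coeffs[i]*mins[i]
    c_max = full_max - coeffs[i]*maxs[i]
    # coeffs[i]*x_i must lie in [-c_max, -c_min]; divide by the
    # coefficient at the end to get bounds on x_i itself
    lo = max(mins[i], cld(-c_max, coeffs[i]))
    hi = min(maxs[i], fld(-c_min, coeffs[i]))
    push!(bounds, (lo, hi))
end
# bounds == [(7, 7), (1, 5)]: one pass fixes x to 7; running the same
# pass again with the tightened bounds then fixes y to 1
```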

We also have to check in our `simplify` function that the coefficients are 1, because otherwise we can't combine our all different constraint with the sum constraint.

Wait a second, we need to talk a bit about negative coefficients ;)

So if a coefficient is negative we need to compute min and max the other way around, and we need to swap `remove_below!` with `remove_above!`.
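A sketch of that swap for `x - y == 1` with both variables in `[1,9]` (again standalone arrays, not the solver's code):

```
# negative coefficient example: x - y == 1 with x, y both in [1, 9]
coeffs = [1, -1]
mins   = [1, 1]
maxs   = [9, 9]
rhs    = 1

# with a negative coefficient the min/max contributions swap
term_min(i) = coeffs[i] > 0 ? coeffs[i]*mins[i] : coeffs[i]*maxs[i]
term_max(i) = coeffs[i] > 0 ? coeffs[i]*maxs[i] : coeffs[i]*mins[i]

full_min = sum(term_min(i) for i in 1:2) - rhs   # 1 - 9 - 1 = -9
full_max = sum(term_max(i) for i in 1:2) - rhs   # 9 - 1 - 1 = 7

bounds = Tuple{Int,Int}[]
for i in 1:2
    c_min = full_min - term_min(i)
    c_max = full_max - term_max(i)
    # coeffs[i]*x_i must lie in [-c_max, -c_min]; dividing by a negative
    # coefficient flips the interval, which is exactly why remove_below!
    # and remove_above! have to be swapped
    if coeffs[i] > 0
        lo, hi = cld(-c_max, coeffs[i]), fld(-c_min, coeffs[i])
    else
        lo, hi = cld(-c_min, coeffs[i]), fld(-c_max, coeffs[i])
    end
    push!(bounds, (max(mins[i], lo), min(maxs[i], hi)))
end
# bounds == [(2, 9), (1, 8)]: x must be at least 2 and y at most 8
```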

I've added some tests for this and also tests for using negative variables to be sure that it works as well.

### What else?

It's October, which means you can win a Hacktoberfest T-shirt by DigitalOcean here.

All you have to do is create four pull requests on GitHub. I would of course appreciate it if you help me with this project, whether you found a bug or want to add a feature, but you can of course contribute to other open source projects as well ;)

I created a Patreon page a while ago and I'm very happy to have TWO patrons already!

It costs $1 per month for early access to my new posts; currently that is two days before they go public. If you enjoy this series and want to stay up to date, consider subscribing on Patreon. If you think it's not worth it, you can drop out before the end of this month and you don't pay anything.

### Next:

- If we write `x+2x` we should combine it to `3x`
- If there is a variable or a whole linear combination on the rhs, move it to the left such that the constraint is always in standard notation

Next post: First recap

I hope that the Kaggle Santa challenge will start soon. My current idea is to share it with you while I try to win :D Last time my team and I achieved quite a good score, but in the end it was hard to actually write down what we did and why, and since I really like the current series I might share it as I go. The problem is that I won't be able to get a good score if I publish my code, as others can simply copy and improve it, but it still sounds good to me, as I really enjoy reading other people's code when they share it, and quite a few do (maybe not their best version).

Another problem is that I can't share it with my patrons early, as Kaggle's rules, which obviously make sense, only allow me to share it publicly or with my team.

If I'm able to write about this solver during that time, I think it's okay; otherwise I'll pause my Patreon billing cycle for that time frame.

Stay tuned ;)

And as always:

**Thanks for reading and special thanks to my first three patrons!**

List of patrons paying more than $2 per month:

- Site Wang

If you want to have early access to the next posts as well and enjoy the blog in general please consider a donation via Patreon to keep this project running.

If you have any questions, comments or improvements on the way I code or write, please comment here and/or write me a mail at o.kroeger@opensourc.es, and for code feel free to open a PR on GitHub at ConstraintSolver.jl.

Additionally if you want to practice code reviews or something like that I would be more than happy to send you what I have before I publish. ;)

I'll keep you updated if there is any news on this on Twitter: OpenSourcES, as well as on my more personal account: Wikunia_de.

Become a Patron!