Boolean constraints

Creation date: 2021-02-14

Tags: julia, constraint-programming, jump

I'm excited to announce that there are some new constraint types for the ConstraintSolver package.

If you're new to this blog: Welcome! This is my series about the ConstraintSolver package. If you want to start at the beginning, please have a look at post #1. This post is mostly about extending JuMP, so feel free to just continue reading with this post and check out the rest in the sidebar on the left.

In this post I would like to show you the new constraint types that are supported in v0.6.4 as well as the changes behind the scenes.


Possible new constraints

In certain cases it was a bit of a pain to write some formulations, and for some it still is, but it's getting better 😉 .

As an example, in the four islands problem, which is part of a list of problems modelled by Håkan Kjellerstrand with JuMP and solved with ConstraintSolver, one needed to write things like:

A,B,C,D = 1:4
@variable(m, A <= Pwana <= D, Int)
@variable(m, A <= KoalaPreserve <= D, Int)
@variable(m, b1[1:3], Bin)
@constraint(m, b1[1] := {Pwana == A})
@constraint(m, b1[2] := {KoalaPreserve == C})
@constraint(m, b1[3] := {b1[1] + b1[2] == 2})

@variable(m, b2[1:3], Bin)
@constraint(m, b2[1] := {Pwana == B})
@constraint(m, b2[2] := {KoalaPreserve == D})
@constraint(m, b2[3] := {b2[1] + b2[2] == 2})

@variable(m, b, Bin)
@constraint(m, b := {b1[3] + b2[3] == 1})
@constraint(m, b == 1)

for something as simple as:

@constraint(m, (Pwana == A && KoalaPreserve == C) ||
                (Pwana == B && KoalaPreserve == D))

In other words, one can now write it as one normally would instead of having to think about how to encode it 😄. Additionally, one doesn't need seven extra variables, which slow down the search.
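For completeness, here is a minimal self-contained model using the new syntax; this is a sketch assuming JuMP and ConstraintSolver v0.6.4 or newer are installed:

```julia
# Minimal sketch of the four islands snippet with the new && / || syntax
using JuMP, ConstraintSolver
const CS = ConstraintSolver

m = Model(CS.Optimizer)
A, B, C, D = 1:4
@variable(m, A <= Pwana <= D, Int)
@variable(m, A <= KoalaPreserve <= D, Int)
# either Pwana is island A and KoalaPreserve is island C,
# or Pwana is island B and KoalaPreserve is island D
@constraint(m, (Pwana == A && KoalaPreserve == C) ||
               (Pwana == B && KoalaPreserve == D))
optimize!(m)
```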

Internal changes

There are a couple of internal changes that needed to be made for this to be supported.

First of all, the modeling layer JuMP needs to support && and ||, which it normally doesn't, as it's designed more for linear and nonlinear solvers.

JuMP changes

The first hint on what one needs to implement is basically given when one just tries out the constraint and looks at which function throws the error.

In our case:

julia> @constraint(m, Pwana == A && KoalaPreserve == C)
ERROR: LoadError: At REPL[6]:1: `@constraint(m, Pwana == A && KoalaPreserve == C)`: Constraints must be in one of the following forms:
       expr1 <= expr2
       expr1 >= expr2
       expr1 == expr2
       lb <= expr <= ub
 [1] error(::String, ::String)
   @ Base ./error.jl:42
 [2] _macro_error(macroname::Symbol, args::Vector{Any}, source::LineNumberNode, str::String)
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:924
 [3] (::JuMP.var"#_error#73"{Symbol, LineNumberNode})(str::String)
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:390
 [4] _unknown_constraint_expr(_error::JuMP.var"#_error#73"{Symbol, LineNumberNode})
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:246
 [5] parse_constraint_head(::Function, ::Val{:&&}, ::Expr, ::Expr)
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:252
 [6] parse_constraint_expr
   @ ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:196 [inlined]
 [7] _constraint_macro(args::Tuple{Symbol, Expr}, macro_name::Symbol, parsefun::typeof(parse_constraint_expr), source::LineNumberNode)
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:431
 [8] var"@constraint"(__source__::LineNumberNode, __module__::Module, args::Vararg{Any, N} where N)
   @ JuMP ~/.julia/packages/JuMP/e0Uc2/src/macros.jl:514
in expression starting at REPL[6]:1

The important line here is:

[5] parse_constraint_head(::Function, ::Val{:&&}, ::Expr, ::Expr)

You might be interested in my reified constraint post, where I extend JuMP such that reified constraints of the form b := { ... } are supported, which was more or less copied from the indicator constraint that JuMP supports natively.

This time it's relatively similar: we just want to transform x == 1 && y == 1 into the form [x, y] in AndSet(EqualTo(1), EqualTo(1)) or something like that.

One problem that I didn't have before is that we need to know how many variables belong to the first part and how many to the second part, to distinguish:

x + y == 1 && z == 1

and

x == 1 && y + z == 1

as an example. Here JuMP helps out a bit, as it has a VectorAffineFunction which can be split into two ScalarAffineFunctions: x + y and z, or x and y + z respectively.
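A hedged sketch of what this looks like at the MOI level (the function and type names follow MathOptInterface; the exact pieces the solver stores may differ):

```julia
using MathOptInterface
const MOI = MathOptInterface
const MOIU = MOI.Utilities

x, y, z = MOI.VariableIndex.(1:3)

# x + y as one ScalarAffineFunction, z as another
xy = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 1.0], [x, y]), 0.0)
zf = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, z)], 0.0)

# stack them into a single VectorAffineFunction ...
vaf = MOIU.operate(vcat, Float64, xy, zf)

# ... and split it back into its scalar parts by dimension
parts = MOIU.eachscalar(vaf)
lhs = parts[1]  # corresponds to x + y
rhs = parts[2]  # corresponds to z
```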

Then one just needs to additionally save the dimension of each of the constraints: [Pwana, KoalaPreserve] in CS.AllDifferentSet() has dimension two, whereas a ScalarAffineFunction has dimension one.

If you're interested you can follow the implementation of the first function to parse the && here:

function JuMP.parse_constraint_head(_error::Function, ::Val{:(&&)}, lhs, rhs)

The more interesting part in my opinion is the function it calls:

function _build_and_constraint(_error::Function, lhs, rhs)
    lhs_set = JuMP.moi_set(lhs)
    rhs_set = JuMP.moi_set(rhs)

    lhs_jump_func = JuMP.jump_function(lhs)
    rhs_jump_func = JuMP.jump_function(rhs)

    lhs_func = JuMP.moi_function(lhs)
    rhs_func = JuMP.moi_function(rhs)

    func = [lhs_jump_func..., rhs_jump_func...]
    return JuMP.VectorConstraint(
        func, AndSet{typeof(lhs_func), typeof(rhs_func)}(lhs_set, rhs_set)
    )
end
It takes the lhs and rhs expressions and extracts the function as well as the set from each of them.

Each constraint is of the form variables in Set, where variables can be a simple vector of variables, as in x in CS.AllDifferentSet(), or a scalar function, as in x + y <= 2, and the set is CS.AllDifferentSet() or LessThan(2.0) respectively.

One more thing is important to note here: the AndSet stores only the sets and not the functions, even though it would be quite natural to write:

AndSet(lhs_func, rhs_func, lhs_set, rhs_set)

The problem is that every variable has an id, and MOI (the layer below JuMP) has its own order of variables and potentially reorders them when telling the solver about the model. MOI doesn't change the set though, so the order there would be left untouched, which would result in saving the wrong information in our AndSet, which we of course want to avoid.

In the constructor of the AndSet I take the dimension of each set and save it in the struct, such that we can later get the correct function part out of the potential mess that can be created in this line: [lhs_jump_func..., rhs_jump_func...].

Before we continue with implementing the constraint itself, beyond just parsing it, we probably want to add support for || as well. As both of these constraints are quite similar, I thought it made sense to have a BoolSet abstract type:

abstract type BoolSet{F1,F2,F1dim,F2dim,S1,S2} <: MOI.AbstractVectorSet end

and then:

struct AndSet{F1,F2,F1dim,F2dim,S1,S2} <: BoolSet{F1,F2,F1dim,F2dim,S1,S2}
    lhs_set::S1
    rhs_set::S2
    lhs_dimension::Int
    rhs_dimension::Int
end

and the same for OrSet.

I'll clarify the need for all the parametric types in a later section 😉. For now you can just read it as AndSet <: BoolSet, where the AndSet struct saves the types of the fields it contains as type parameters. Much more on that later.

There is another big step that we need to add later, which transforms constraints like:

@constraint(m, x >= 1 && y > 1)

into

@constraint(m, -x <= -1 && -y < -1)

but that was quite a hassle, so let's continue with an easier part first. I'll talk about it in another post if you're interested. Please let me know in the comment section!

Implementing the Constraint Functions

There are a couple of functions that need to be added when adding a new constraint, and I should one day list them in the ConstraintSolver documentation such that YOU can easily add a new constraint to the solver.

For now let me mention the steps taken here but maybe in a more structured order than usual.

  1. Let JuMP know that we want to support a new constraint type

  2. Add the constraint to the solver

  3. Implement all necessary functions that are needed for interacting with the constraint

Support a new constraint type

To tell JuMP / MOI that we support a new constraint, we need to implement the MOI.supports_constraint method with the function and set that we want to support:

function MOI.supports_constraint(
    optimizer::Optimizer,
    ::Type{VAF{T}},
    ::Type{BS},
) where {T,F1,F2,F1dim,F2dim,S1,S2,BS<:CS.BoolSet{F1,F2,F1dim,F2dim,S1,S2}}
    return MOI.supports_constraint(optimizer, F1, S1) &&
           MOI.supports_constraint(optimizer, F2, S2)
end

The function returns true if we support that constraint and false otherwise. In our case it depends on the inner constraints: we only support X && Y when we support both X and Y. We can therefore call MOI.supports_constraint again on the inner constraints.

This is one reason why we need the parametric types: the constraint itself isn't part of the input of this function, only its type is, as can be seen by the Type{...} in the parameters.

Once that is done we need to add the constraint itself to the inner solver model.

Add the constraint to the solver

function MOI.add_constraint(
    model::Optimizer,
    func::VAF{T},
    set::BS,
) where {T,BS<:BoolSet}
    com = model.inner
    internals = create_interals(com, func, set)
    constraint = init_constraint_struct(set, internals)
    add_constraint!(model, constraint)

    return MOI.ConstraintIndex{VAF{T},typeof(set)}(length(com.constraints))
end

Here we get the actual function and set.

The init_constraint_struct function is the next thing we need to implement, but we're now done with the part we needed to change in MOI_wrapper/constraints.jl. We can now continue with the stuff that's only relevant for the ConstraintSolver and not part of the JuMP / MOI layer.

Before we do that, let's quickly go over the few lines of code above: we get the inner ConstraintSolverModel, create the internals of the constraint from the function and the set, build the concrete constraint struct from them, add it to the model and finally return the index of the new constraint.

Great, that's about it. Now let's create a file in src/constraints called and.jl or something that describes our new constraint.

Implement all the constraint functions

We now have to initialize the new constraint type (yeah we also have to add the struct...).

In this special case I've created a boolset.jl as well, which handles the stuff that's the same for And and Or, but let's not get too distracted 😄

function init_constraint_struct(set::BoolSet{F1,F2}, internals) where {F1,F2}
    f = MOIU.eachscalar(internals.fct)

    lhs_fct = f[1:set.lhs_dimension]
    rhs_fct = f[(end - set.rhs_dimension + 1):end]

    if F1 <: MOI.ScalarAffineFunction
        lhs_fct = get_saf(lhs_fct)
    end
    if F2 <: MOI.ScalarAffineFunction
        rhs_fct = get_saf(rhs_fct)
    end

    if F1 <: MOI.VectorOfVariables
        lhs_fct = get_vov(lhs_fct)
    end
    if F2 <: MOI.VectorOfVariables
        rhs_fct = get_vov(rhs_fct)
    end

    lhs = get_constraint(lhs_fct, set.lhs_set)
    rhs = get_constraint(rhs_fct, set.rhs_set)

    return bool_constraint(set, internals, lhs, rhs)
end

This might look a bit strange at first, and maybe also later, but let's unpack it. One important thing to notice is that we are now in ConstraintSolver land, so we can do whatever we want and don't need to communicate with JuMP and MOI for a while. At least we don't have to care about mixing up variable indices or anything like that anymore.

At this stage we want to fill the AndConstraint type with some information:

abstract type BoolConstraint{C1<:Constraint,C2<:Constraint} <: Constraint end

struct AndConstraint{C1,C2} <: BoolConstraint{C1,C2}
    std::ConstraintInternals
    lhs::C1
    rhs::C2
end

I know, we have been working with parametric types for a while now. Here they are quite simple though, so it might be good to explain them a bit if you aren't familiar with them yet. First of all: I'm glad you continued reading!

The abstract type in front of BoolConstraint means that this is a kind of type that doesn't contain information itself but helps to group structs together. As an example, Constraint is the abstract type which is the supertype of all of my constraints, such that I can dispatch on ::Constraint when I want to, or use it as a type annotation to catch dispatching errors early on.

Now BoolConstraint is a constraint itself, and it saves the types of the two inner constraints in its own type. This means one can have:

BoolConstraint{AllDifferentConstraint, LinearConstraint{Float64}}

which means that the left part of the && is an AllDifferentConstraint and the right part a normal linear constraint with coefficients of type Float64.
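A small self-contained sketch of the idea (DummyA and DummyB are hypothetical stand-ins for real constraints like AllDifferentConstraint):

```julia
abstract type Constraint end
abstract type BoolConstraint{C1<:Constraint,C2<:Constraint} <: Constraint end

struct DummyA <: Constraint end
struct DummyB <: Constraint end

struct AndConstraint{C1,C2} <: BoolConstraint{C1,C2}
    lhs::C1
    rhs::C2
end

c = AndConstraint(DummyA(), DummyB())
typeof(c)  # AndConstraint{DummyA, DummyB}
```

Both inner types are visible in the type of `c`, so the compiler can specialize methods for each combination of inner constraints.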

It is a good idea to use concrete types for the fields for performance reasons, so in general

struct XYZ
    x::Number
end

is slower than

struct XYZ{T<:Number}
    x::T
end

because a Number doesn't specify how many bytes it needs (for example Float32 vs Float64), but with parametric types this issue can be resolved.
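A quick way to see the difference (the struct names are just for this sketch):

```julia
struct SlowXYZ
    x::Number           # abstract field type -> boxed, type-unstable access
end

struct FastXYZ{T<:Number}
    x::T                # concrete once T is known -> stored inline
end

isconcretetype(fieldtype(SlowXYZ, :x))          # false
isconcretetype(fieldtype(FastXYZ{Float64}, :x)) # true
```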

Alright, that's about it for the AndConstraint struct. In general one just needs to have std::ConstraintInternals as a field and can do whatever one wants with the rest.

Now I think we want to have a look at this again 😉

function init_constraint_struct(set::BoolSet{F1,F2}, internals) where {F1,F2}
    f = MOIU.eachscalar(internals.fct)

    lhs_fct = f[1:set.lhs_dimension]
    rhs_fct = f[(end - set.rhs_dimension + 1):end]

    if F1 <: MOI.ScalarAffineFunction
        lhs_fct = get_saf(lhs_fct)
    end
    if F2 <: MOI.ScalarAffineFunction
        rhs_fct = get_saf(rhs_fct)
    end

    if F1 <: MOI.VectorOfVariables
        lhs_fct = get_vov(lhs_fct)
    end
    if F2 <: MOI.VectorOfVariables
        rhs_fct = get_vov(rhs_fct)
    end

    lhs = get_constraint(lhs_fct, set.lhs_set)
    rhs = get_constraint(rhs_fct, set.rhs_set)

    return bool_constraint(set, internals, lhs, rhs)
end

The first line splits up the function that we previously saved in the ConstraintInternals into its pieces. This breaks up the VectorAffineFunction into ScalarAffineFunctions, but doesn't create a vector of ScalarAffineFunctions directly. It is a bit more complicated internally, such that range-indexing the new struct results in a VectorAffineFunction again. The indexing itself is done with the dimension information that we saved in the set.

The next couple of lines are needed to convert the VectorAffineFunction into the function that is expected by the solver. You might remember that we saved this in the parametric type information. This way we can check F1 <: MOI.ScalarAffineFunction and then extract the scalar affine function from it.

You can have a look at my implementation of all those minor functions like get_saf if interested.

We do the same for the rhs, then get the constraints of the lhs and rhs, and return the AndConstraint or OrConstraint depending on the set we started with.

As an example I have:

function bool_constraint(::AndSet, internals, lhs, rhs)

This is the last thing we needed to do before we can actually handle the logic behind it.

Which functions are needed and which are optional?

There are certain functions that need to be implemented by every new constraint and some that aren't necessary for all of them.

Every one of those functions takes at least the arguments constraint, fct and set, which are used for dispatching. Some of them take additional arguments such as the model or others. They will be discussed for each method individually.

We start with init_constraint!. This function is called once at the beginning to initialize the constraint itself, and here one might want to set some fields of the constraint type. This function doesn't need to be implemented, but in our case we have a kind of meta constraint, so it should call the init_constraint! function of the underlying constraints if they implement this method.

function init_constraint!(
    com,
    constraint::BoolConstraint,
    fct,
    set;
    active = true,
)
    set_impl_functions!(com, constraint.lhs)
    set_impl_functions!(com, constraint.rhs)
    if constraint.lhs.impl.init
        !init_constraint!(com, constraint.lhs, constraint.lhs.fct, constraint.lhs.set; active = active) && return false
    end
    if constraint.rhs.impl.init
        !init_constraint!(com, constraint.rhs, constraint.rhs.fct, constraint.rhs.set; active = active) && return false
    end
    return true
end

In this case we additionally have the argument com, which is the ConstraintSolverModel, and the keyword argument active. The latter is quite special, as the init_constraint! function returns whether the constraint is still feasible after initialization. Infeasibility at this stage can occur for constraints like

@constraint(m, 0 >= 1)

which were somehow created by things like x - x >= 1 or something like that. That constraint will never be called besides being initialized, so we want to return infeasibility as soon as possible.

The problem here is that the constraint can be inside an indicator or reified constraint, like:

@constraint(m, b := {x-x >= 1})

In this case we still want to return infeasibility from the inner constraint, to set b = 0 as a consequence. Nevertheless, constraints like

@variable(m, 0 <= x[1:2] <= 10, Int)
@constraint(m, b := { x in TableSet([
    1 2;
    3 4;
]) })

shall not remove the impossible values for x, as it would outside of the reified constraint, even though it's still feasible. This is the reason for the active keyword argument.

In our case the init_constraint! function sets the implemented functions for the inner constraints, answering questions such as: does the inner constraint itself have an init_constraint! function?

If that is the case, it calls the inner init_constraint! function and returns whether both of them are feasible.

For the next functions I only want to give an overview of what they do in general, and you can check them out if interested. They should be quite easy to understand.

function is_constraint_solved(

This function checks whether the constraint is fulfilled based on the given values.

function is_constraint_violated(

checks whether the constraint is already violated.

function still_feasible(

Checks whether setting the variable with id vidx to value violates the constraint or whether it is still feasible afterwards.

And normally the most important one:

function prune_constraint!(
    logs = true,

This function prunes the search space and is called whenever a variable which is part of this constraint changed its possible values.

All of these methods need to be implemented, even though some of them might not be called for simple models and are only needed when the constraint is inside an indicator or reified constraint.

There are additionally some more functions which are only needed if one needs to revert changes that are specific to the constraint. I have discussed some of those in the TableConstraint post.


Very similar functions needed to be implemented for ||, and I might add support for xor in the future. As we allow any kind of inner constraint, it's possible to combine boolean constraints easily, as shown at the beginning of this post, without changing any of the underlying code.
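To illustrate, a hedged sketch of such a combination; whether a particular nesting works in v0.6.4 depends on the inner constraints being supported, as described above:

```julia
using JuMP, ConstraintSolver
const CS = ConstraintSolver

m = Model(CS.Optimizer)
@variable(m, 1 <= x[1:3] <= 5, Int)
# either x[1] is 1 and all entries of x differ, or the sum constraint holds
@constraint(m, (x[1] == 1 && x in CS.AllDifferentSet()) || x[2] + x[3] == 4)
optimize!(m)
```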

Next up

I'll continue working on element constraints next and might add some more syntactic sugar. If you enjoyed this post and the solver, please consider giving the project a GitHub star and maybe watch the repo.

There is also a ConstraintSolver stream in the JuliaLang zulip.

Thanks for reading and consider subscribing to support this blog and my projects!

I recently added the option to subscribe to my mailing list. Would love to see you here more often! Do you want to be kept up to date about twice a month?

Powered by Buttondown.

Special thanks to my 11 patrons!

Special special thanks to my >4$ patrons. The ones I thought couldn't be found 😄

For a donation of a single dollar per month you get early access to these posts. Your support will increase the time I can spend on working on this blog.

There is also a special tier if you want to get some help for your own project. You can checkout my mentoring post if you're interested in that and feel free to write me an E-mail if you have questions: o.kroeger <at>

I'll keep you updated on Twitter OpenSourcES.

