Creation date: 2020-01-19
This is kind of part 16 of the series: How to build a Constraint Programming solver? Even if you are not deep into constraint programming, this post tries to show you what kind of problems occur when you deal with floating point numbers. If you are new to this blog and are interested in constraint programming, there are several posts already waiting for you:
I'm a student and write this blog in my free time, but I would like to spend more and more time programming this project as well as blogging about it. If you're having a very generous day you might want to go over the top and finance a bit of my dream :) You can do so by becoming a patron. If you do, I will create a better/faster blog experience and you will get posts like this two days in advance! Everything will be free forever, but if you have some cash to spare -> this might be an option ;)
Special limited offer, only for X days... I either just lost your attention or I have it now :D I decided to offer a 1:1 (binary) talk where you can ask me about my work and just chat with me if you're willing to join the Patron 4$ club ;) This offer runs until the 10th of February.
If you want to just keep reading -> Here you go ;)
After merging the JuMP part, Mathieu Besançon had a deeper look into the codebase and found out that I don't support constraints with real coefficients like:
\[ 0.2x+0.1y = 0.3z \]
At first I thought: oh yeah, I just have to change the type of some structs to support it. Well, I forgot about the problems of using floating point numbers...
You might have seen it before
\[ 0.2+0.1 \neq 0.3 \]
Yes, we are living in a new decade, but we still have to deal with the fact that computers can't calculate exactly, even though calculating is the one thing everyone thinks they are much better at than us humans...
There are way better posts and explanations out there on why computers don't sum up \(0.2+0.1\) to \(0.3\) but instead to something like \(0.30000000000000004\).
Have a look at StackOverflow for an answer on this question or search for "How floating point numbers work" on Google -> LMGTFY.
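If you want to see it yourself, a quick check in the Julia REPL is enough:
julia> 0.2 + 0.1 == 0.3
false

julia> 0.2 + 0.1
0.30000000000000004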
It might be obvious how this applies to the problem in this constraint solver project but maybe a short look back on how these equality constraints are tackled is helpful:
\[ 0.2x+0.1y = 0.3z \]
We have \(1 \leq x,y,z \leq 3\) and, as always, they are integers. First we write that as:
\[ 0.2x+0.1y-0.3z = 0 \]
Let's assume we have set \(x = y = 1\) during our backtracking process. Then we would like to get \(z=1\), but instead we get: "The problem is infeasible", because
\[ 0.2+0.1-0.3 = 5.551115123125783 \cdot 10^{-17} \neq 0 \]
If you want to read the full post on how the whole constraint works: Part 6: Sum constraint first part
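A quick REPL check of that left-hand side with \(x = y = z = 1\):
julia> coeffs = [0.2, 0.1, -0.3];

julia> sum(coeffs .* [1, 1, 1])
5.551115123125783e-17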
Everything works without any problems for integers, as computers are actually very good at adding discrete numbers. Wow, good job computer!
Now there are different options to solve the problem above. We could either say:
Well, we just don't allow real coefficients. Which I did before :D
We can convert everything to integers and be happy.
Okay, the result is close, so it's probably just the case that we miscalculated -> maybe if it doesn't add up to exactly 0 it's still okay.
The first one is not really a solution so we should have a look at option 2. The above example can be easily converted to:
\[ 2x+1y = 3z \]
If I have \(0.0000002x+0.0000001y=0.0000003z\), we can convert it to the same thing, but how about \(0.0000002x+0.1000001y=0.1000003z\)? Clearly the solution stays \(x=y=z=1\), but that is not obvious to a computer. We can multiply everything by 10 million to get \(2x+1000001y = 1000003z\). The numbers can get quite big this way, and because we sum over them, we eventually have to deal with overflow errors, or we use something bigger than Int64, for example BigInt.
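Here is a minimal sketch of how such a conversion could look in Julia, using rationalize to recover exact fractions from the float coefficients (the variable names are just for illustration, not solver code):
# coefficients of the "= 0" form of the constraint
coeffs = [0.0000002, 0.1000001, -0.1000003]
rats = rationalize.(coeffs)            # exact fractions, e.g. 1//5000000
m = lcm(denominator.(rats)...)         # common denominator: 10000000
int_coeffs = Int.(m .* rats)           # [2, 1000001, -1000003]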
Using BigInt makes everything slower, so this is an interesting thing to do, but it might not be the best choice and it is certainly not the one used the most. Which brings us to option 3:
Let us just check whether it is close enough or not. Julia is of course prepared, as I'm not the first one who encountered this problem. Therefore there are functions like isapprox:
help?> isapprox
search: isapprox
isapprox(x, y; rtol::Real=atol>0 ? 0 : √eps, atol::Real=0, nans::Bool=false, norm::Function)
Inexact equality comparison: true if norm(x-y) <= max(atol, rtol*max(norm(x), norm(y))). The default atol is zero and the default rtol depends on the types of x and y. The
keyword argument nans determines whether or not NaN values are considered equal (defaults to false).
For real or complex floating-point values, if an atol > 0 is not specified, rtol defaults to the square root of eps of the type of x or y, whichever is bigger (least precise).
This corresponds to requiring equality of about half of the significand digits. Otherwise, e.g. for integer arguments or if an atol > 0 is supplied, rtol defaults to zero.
...
Examples
≡≡≡≡≡≡≡≡≡≡
julia> 0.1 ≈ (0.1 - 1e-10)
true
The example also shows an interesting unicode feature: you can type the mathematical approx sign \(\approx\) as \approx in your editor and use it for such comparisons. It is a short notation for isapprox with its default values. Solvers normally offer options to define atol and rtol, so these are now options for the constraint solver as well.
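Note the subtlety the docstring hints at: x ≈ 0.0 can only be true if x is exactly zero, because the default relative tolerance scales with the magnitudes of the compared values. That's why an absolute tolerance is needed when checking against zero:
julia> 0.2 + 0.1 - 0.3 ≈ 0.0
false

julia> isapprox(0.2 + 0.1 - 0.3, 0.0; atol=1e-10)
true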
Let's have a look at how this can be applied to our problem.
We normally remove values from the search space in the following way. Our slightly changed example:
\[ 0.2x+0.1y - 0.15z = 0 \]
and we still have \(1 \leq x,y \leq 3\) but \(1 \leq z \leq 2\). We check which value is the highest we can reach, so we use the maximum value for variables with positive coefficients and the minimum value for those with negative coefficients:
\[ 0.2\cdot 3 + 0.1 \cdot 3 - 0.15 \cdot 1 = 0.7500000000000001 \]
I used Julia to compute this, which shows that we have some floating point issues again. We get a maximum value of \(0.7500000000000001\) and a minimum value of \(5.551115123125783 \cdot 10^{-17}\), which means of course that we can't achieve our goal of exactly \(0.0\): the minimum is already greater than zero.
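As a small sketch (not the actual solver code), these two bounds can be computed like this, using the same full_max and full_min names that appear in the check below:
coeffs = [0.2, 0.1, -0.15]
lb = [1, 1, 1]; ub = [3, 3, 2]
# positive coefficient -> upper bound for the max, lower bound for the min
full_max = sum(c > 0 ? c * u : c * l for (c, l, u) in zip(coeffs, lb, ub))  # 0.7500000000000001
full_min = sum(c > 0 ? c * l : c * u for (c, l, u) in zip(coeffs, lb, ub))  # 5.551115123125783e-17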
Normally this gets removed immediately, as there seems to be no way to make it feasible. Therefore we need to change the point where we return false in these cases a bit by using our absolute tolerance atol:
# if the maximum is smaller than 0 (and not even near zero)
# or if the minimum is bigger than 0 (and not even near zero)
# the equation can't sum to 0 => infeasible
if full_max < -com.options.atol || full_min > +com.options.atol
    com.bt_infeasible[indices] .+= 1
    return false
end
Okay, what are the next steps? We remove one value at a time from the equation, compute it again, and gain more information about the bound of the current variable. This is explained in more detail in previous posts.
If we want to change the bounds of a variable we need to be careful again. We had this before (see on GitHub):
if maxs[i] < pre_maxs[i]
    # divide the new bound by the coefficient; the direction of the
    # removal depends on the sign of the coefficient
    if constraint.coeffs[i] > 0
        still_feasible = remove_above!(com, search_space[idx], fld(maxs[i], constraint.coeffs[i]))
    else
        still_feasible = remove_below!(com, search_space[idx], fld(maxs[i], constraint.coeffs[i]))
    end
    if !still_feasible
        return false
    end
end
or more specifically: remove_above!(com, search_space[idx], fld(maxs[i], constraint.coeffs[i])), which means we found a new bound (including the coefficient), so we divide by the coefficient and round down. This can result in something like:
julia> 0.3/0.1
2.9999999999999996
This means we would compute fld(0.3, 0.1) = 2.0 and would remove the value 3, even though 3 actually is a possible value. Therefore we need to compute a safe upper bound: if the result is close to the next higher integer, we probably don't want to round down but round up instead.
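A rough sketch of such a safe rounding step, as a hypothetical helper (safe_fld is a name I made up for illustration; the real code in the solver differs):
# hypothetical helper: floor division that respects an absolute tolerance
function safe_fld(x, d; atol=1e-10)
    q = x / d
    # if q is nearly an integer, trust the rounded value instead of flooring
    return isapprox(q, round(q); atol=atol) ? round(q) : fld(x, d)
end

safe_fld(0.3, 0.1)  # returns 3.0 instead of fld(0.3, 0.1) = 2.0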
This needs to be done in several steps throughout the constraint. I hope the idea behind this is clear.
That's all for this slightly smaller post. There are two PRs currently in the review process with different things which will get their own posts as well:
Better dispatch on functions: Currently each constraint is not perfectly typed, as the constraint function is of type ::Function, which might lead to worse performance. Additionally, there is a smoother way by using what is used for the MathOptInterface anyway.
Support for linear objectives: Currently we only support @objective(m, Min, x) but not @objective(m, Min, 2x+4.1y).
After all changes and reviews I'll blog about those two and make a final post on the Kaggle Santa Challenge.
The next post on linear objectives is now published. Have a look: Support Linear Objectives
If you enjoy the blog in general and at least some of the constraint solver posts, I would be very flattered if you consider supporting this blog. There are several ways you can do that:
Star the project on GitHub
Follow me on Twitter
Spread the blog on Twitter and mention me ;)
Or if you want to support it financially please consider a donation via Patreon, GitHub or old school PayPal
If you subscribe before the 10th of February 2020 and spend about 4$ per month: we will have a 1:1 call where you can ask me anything about the blog!
Thanks for reading and special thanks to my five patrons!
List of patrons paying more than 2$ per month:
Site Wang
Gurvesh Sanghera
Currently I get 10$ per month via Patreon and PayPal. When I reach my next goal of 50$ per month, I'll create a faster blog which will also have a better layout:
Code which I refer to on the side
Code diffs for changes
Easier navigation with series like Simplex and Constraint Programming instead of having X constraint programming posts on the home page
And more
If you have ideas => Please contact me!
For a donation of a single dollar per month you get early access to the posts. Try it out at the start of a month and if you don't enjoy it just cancel your subscription before pay day (end of month). Check out Patreon!
Any questions, comments or improvements? Please comment here and/or write me an e-mail o.kroeger@opensourc.es and for code feel free to open a PR/issue on GitHub ConstraintSolver.jl
I'll keep you updated on Twitter OpenSourcES as well as my more personal one: Twitter Wikunia_de