Flow cover cuts #2518
Conversation
Codecov Report
❌ Patch coverage is
@@ Coverage Diff @@
## latest #2518 +/- ##
==========================================
+ Coverage 81.20% 81.35% +0.14%
==========================================
Files 349 349
Lines 85465 86440 +975
==========================================
+ Hits 69406 70327 +921
- Misses 16059 16113 +54
fwesselm left a comment:
@Opt-Mucca, thank you for adding the flow cover cuts.
One general question (I have not thought this through): should new cut separators like flow cover cuts live in the HighsCutGeneration class, or should each have its own (derived) class?
highs/mip/HighsCutGeneration.cpp
Outdated
// col is in N- \ C-
if (snfr.flowCoverStatus[i] == -1 && snfr.coef[i] == -1) {
  assert(snfr.vubCoef[i] >= 0);
  if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
Maybe add a const variable to store the hard-coded 1e-8?
I've added const double vubEpsilon = 1e-8;
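To illustrate the suggestion in isolation: naming the tolerance once means every comparison against lambda uses the same value, and the magic number appears in exactly one place. `vubEpsilon` matches the constant the author says was added; the helper function and its name are purely illustrative, not the actual HiGHS code.

```cpp
#include <cassert>

// Hypothetical sketch: the named tolerance replaces the hard-coded 1e-8.
constexpr double vubEpsilon = 1e-8;

// Returns true if the variable upper bound coefficient strictly exceeds
// lambda by more than the tolerance.
bool exceedsLambda(double vubCoef, double lambda) {
  return vubCoef > lambda + vubEpsilon;
}
```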
if (snfr.flowCoverStatus[i] == -1 && snfr.coef[i] == -1) {
  assert(snfr.vubCoef[i] >= 0);
  if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
    ld.m[ld.r] = snfr.vubCoef[i];
Would it be worth adding a mini lambda to add elements to ld.m?
As in turn the entire loop into a lambda?
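For concreteness, the "mini lambda" suggestion probably means something like the closure below: one small helper that writes to ld.m and advances the counter ld.r, so each case in the loop body shrinks to a single call. The LiftingData layout here is a guess for illustration only, not the actual HiGHS struct.

```cpp
#include <vector>

// Illustrative stand-in for the lifting data used in the quoted loop.
struct LiftingData {
  std::vector<double> m;
  int r = 0;
};

int demo() {
  LiftingData ld;
  ld.m.resize(4);
  // The mini lambda: the only place that writes ld.m and bumps ld.r.
  auto pushM = [&ld](double value) { ld.m[ld.r++] = value; };
  pushM(2.5);  // e.g. snfr.vubCoef[i] in one branch
  pushM(3.0);  // e.g. another coefficient in a later branch
  return ld.r;  // number of stored multipliers
}
```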
if (vubcoef <= ld.M[i] - 1e-8) {
  assert(ld.M[i] < vubcoefpluslambda);
  alpha = 1;
  beta = -i * HighsCDouble(snfr.lambda) + ld.M[i];
It would be great to use static_cast<HighsCDouble>(...) instead of C-style casting.
Is this C-style casting? My IDE at least doesn't complain as it normally does for other conversions. It's rather a call to the constructor
HighsCDouble(double val) : hi(val), lo(0.0) {} in HighsCDouble.h.
I'm happy to change it if this still counts as bad practice.
The C style of cast, even (type)foo rather than type(foo), is everywhere in HiGHS and has never caused an issue.
Hence static_cast (which is excessively verbose) is not necessary. We have greater priorities.
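A minimal sketch of the three spellings being discussed, using a stand-in type rather than the real HighsCDouble: a C-style cast, a functional-style cast (which for a class type is exactly a constructor call, as with HighsCDouble(val)), and static_cast. For a conversion like this all three invoke the same converting constructor; the difference is stylistic, plus static_cast ruling out the more dangerous conversions a C-style cast can silently perform.

```cpp
// Stand-in for a double-double type like HighsCDouble (illustrative only).
struct Wide {
  double hi;
  double lo;
  Wide(double val) : hi(val), lo(0.0) {}
};

double roundTrip() {
  double x = 3.0;
  Wide a = (Wide)x;               // C-style cast
  Wide b = Wide(x);               // functional cast, i.e. constructor call
  Wide c = static_cast<Wide>(x);  // static_cast: same conversion, restricted
  return a.hi + b.hi + c.hi;
}
```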
highs/mip/HighsCutGeneration.cpp
Outdated
  liftedcoef -= ld.M[i];
  return static_cast<double>(liftedcoef);
}
if (i < ld.r) {
Is this actually an else if?
Yes, but since all the cases always return something, there's no functional difference.
I've added the else to help with readability.
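The equivalence claimed here is easy to demonstrate with a toy pair of helpers (names illustrative): when every branch returns, a chain of plain ifs and an if/else-if chain compute the same result, and the else only signals mutual exclusion to the reader.

```cpp
// Plain ifs: each branch returns, so later branches are unreachable anyway.
int classifyPlain(int i, int r) {
  if (i == r) return 0;
  if (i < r) return -1;
  return 1;
}

// Same logic with an explicit else-if, as added in the PR for readability.
int classifyElse(int i, int r) {
  if (i == r) {
    return 0;
  } else if (i < r) {
    return -1;
  }
  return 1;
}
```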
highs/mip/HighsCutGeneration.cpp
Outdated
if (snfr.vubCoef[i] > snfr.lambda + 1e-8) {
  if (snfr.origBinCols[i] != -1) {
    // col is in L-
    vals[rowlen] = -snfr.lambda;
Suggestion: Add another mini-lambda to add to vals, inds, update rowlen and tmpRhs. Just wondering if this may shorten this loop...
It would definitely shorten the loop. I should have added this earlier. Good suggestion!
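In the same spirit as the earlier mini lambda, the suggested helper could look roughly like this: one closure that appends a coefficient and column index, advances rowlen, and accumulates any right-hand-side shift. The names vals, inds, rowlen and tmpRhs mirror those in the quoted loop, but the surrounding scaffolding is hypothetical.

```cpp
#include <vector>

double demoRow() {
  std::vector<double> vals(8);
  std::vector<int> inds(8);
  int rowlen = 0;
  double tmpRhs = 10.0;

  // The suggested mini lambda: one place that appends a cut term and
  // updates rowlen and tmpRhs together.
  auto addTerm = [&](int col, double coef, double rhsShift) {
    vals[rowlen] = coef;
    inds[rowlen] = col;
    ++rowlen;
    tmpRhs += rhsShift;
  };

  addTerm(3, -2.0, -1.5);  // e.g. a column whose bound shift hits the rhs
  addTerm(7, 1.0, 0.0);
  return tmpRhs + rowlen;  // 8.5 + 2
}
```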
@fwesselm For the current design they shouldn't. Anything that is going to call
I can't comment on the technical details as they're way beyond me. However, it's the sort of stuff you understand well, and @fwesselm has been with you in the process.
My only observation is that
struct SNFRelaxation {
HighsInt numNnzs; // |N-| + |N+|
std::vector coef; // (+-1) coefficient of col in SNFR
std::vector vubCoef; // u_j in y'_j <= u_j x_j in SNFR
std::vector binSolval; // lp[x_j], y'_j <= u_j x_j in SNFR
std::vector origBinCols; // orig x_i, y'_j <= u_j x_j in SNFR
std::vector origContCols; // orig y_i used to make y'_j in SNFR
std::vector aggrBinCoef; // coef of x_i in y'_j aggregation
std::vector aggrContCoef; // coef of y_i in y'_j aggregation
std::vector aggrConstant; // constant shift in y'_j aggregation
std::vector complementation;
with the HiGHS naming convention would be
struct SnfRelaxation {
HighsInt num_n_nz; // |N-| + |N+|
std::vector coef; // (+-1) coefficient of col in SNFR
std::vector vub_coef; // u_j in y'_j <= u_j x_j in SNFR
std::vector bin_sol_val; // lp[x_j], y'_j <= u_j x_j in SNFR
std::vector orig_bin_cols; // orig x_i, y'_j <= u_j x_j in SNFR
std::vector orig_cont_cols; // orig y_i used to make y'_j in SNFR
std::vector aggr_bin_coef; // coef of x_i in y'_j aggregation
std::vector aggr_cont_coef; // coef of y_i in y'_j aggregation
std::vector aggr_constant; // constant shift in y'_j aggregation
std::vector complementation;
Don't feel you need to change it, though. I also acknowledge that Leona wasn't great at following the naming convention...
@jajhall Currently the big hurdle is getting this to be performance-improving on @fwesselm's cluster. (Last time it was measured, it was performance neutral.)
Hmm... Can you easily switch off the flow cover cuts by default? That way the code is merged, possibly avoiding conflicts that might occur if it were merged later. Probably a useless comment, but I have a memory of Philip Christophel saying that flow cover cuts weren't so useful until he revisited them...
I can add a parameter to turn them off before they get merged. In the long term I'd like to add some high-level cut parameters anyway (something like an emphasis setting), because it's clear some users want to test settings for their application. Currently I'm just trying to make the behaviour when they're turned off identical to before.
This PR adds flow cover cuts to HiGHS. All logic related to the cut can be followed from the
HighsCutGeneration::tryGenerateFlowCoverCut function. Currently the cuts are only generated based on rows in the LP, i.e., "aggregations" of size 1. The performance results are extremely finicky, but seem promising, especially for some network-based energy problems I've been testing on. This shouldn't be merged before additional computational experiments are done.
@galabovaa I changed the two tests because (1) the instance now solves at the root node and therefore has the optimal solution but has status interrupted, and (2) the primal heuristics now jump to the optimal solution more quickly, so I had to raise the objective limit to still catch the event.