Improvements to reliability branching #979
Conversation
…the cutoff when no incumbent was found.
… improve-reliable-branching # Conflicts: # cpp/src/branch_and_bound/pseudo_costs.cpp
…change estimate via dual simplex single pivot (NVIDIA#963).
…ange instead of the objective. fixed candidate ranking in reliability branching.
/ok to test 94c3dc3
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the following commands to manage reviews, or the checkboxes below for quick actions:
📝 Walkthrough

Added single-pivot (dual simplex) objective-change estimation and LP transpose storage; integrated these into strong-branching and reliability-branching flows; introduced dual-simplex phase2 helper templates; and widened solver integer parameter ranges for branching modes.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 293-295: dual_phase2() can return dual::status_t::CUTOFF when
child_settings.cut_off is set, but the current helper treats any non-success as
failure and leaves obj as NaN; change the failure handling so that when
dual_phase2 returns CUTOFF you set obj to the applied cutoff
(child_settings.cut_off, i.e., upper_bound + settings.dual_tol) and treat it as
a valid result for seeding pseudo-costs instead of the generic failure path;
apply the same change to the other occurrence handling lines covering the same
logic (the block that currently inspects dual::status_t results and assigns
obj).
- Around line 898-906: Replace the std::cout/std::format usage in the block that
checks unreliable_list.size() > max_num_candidates with the existing logger: call
log.printf() and pass a formatted message including
strong_branching_lp_iter.load(), branch_and_bound_lp_iters,
unreliable_list.size(), num_tasks, and reliable_threshold; remove the
std::format/include dependency and ensure the message format matches other
log.printf() calls in this file so logging is consistent with the parallel
solver pattern.
- Around line 219-245: The code currently passes basic_map[j] directly into
single_pivot_objective_change_estimate for each j in fractional; add a defensive
check on basic_map[j] before that call (e.g., if basic_map[j] < 0 then skip the
dual-pivot estimate for this j or assert/fail fast), so only valid basic indices
are forwarded; update both occurrences where fractional is iterated (the loop
using basic_map and the second occurrence mentioned) to guard around the call to
single_pivot_objective_change_estimate and avoid pushing an invalid basic index
into structures like e_k.i.
In `@cpp/src/dual_simplex/phase2.cpp`:
- Around line 175-206: The CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update but references out-of-scope symbols (lp,
basic_list, nonbasic_list); to fix, thread the required context into
compute_delta_z by updating its signature (and all callers) to accept the LP and
basis lists (or pass appropriate local equivalents available in this translation
unit) and then forward those parameters into
phase2::compute_reduced_cost_update, or alternatively construct/obtain the
needed lp, basic_list and nonbasic_list inside compute_delta_z before the debug
call so the debug path compiles; ensure you update function names
compute_delta_z and the call site(s) consistently.
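The CUTOFF handling suggested in the first pseudo_costs.cpp comment above can be sketched as follows. This is a minimal, hypothetical mirror of the flow, not the actual cuOpt API (the enum values, function name, and signature are assumptions): when the child LP stops at the applied cutoff, that cutoff is still a usable objective bound for seeding pseudo-costs, so only other failure modes should yield NaN.

```cpp
#include <cassert>
#include <cmath>
#include <limits>

// Hypothetical stand-in for the solver's dual::status_t (names assumed).
enum class status_t { OPTIMAL, CUTOFF, ITERATION_LIMIT, NUMERICAL };

// Sketch of the suggested fix: when dual_phase2 stops at the cutoff, the
// applied cutoff (upper_bound + dual_tol) is itself a valid objective bound
// for seeding pseudo-costs, so return it instead of falling through to NaN.
double objective_for_pseudo_cost(status_t status,
                                 double solved_objective,
                                 double applied_cutoff)
{
  switch (status) {
    case status_t::OPTIMAL: return solved_objective;
    case status_t::CUTOFF:  return applied_cutoff;  // child is at least this bad
    default:                return std::numeric_limits<double>::quiet_NaN();
  }
}
```

The same mapping would apply at both call sites that inspect the dual simplex status before assigning `obj`.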
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: eb905199-9239-4fa9-a7c5-55a6b8c381b6
📒 Files selected for processing (6)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/branch_and_bound/pseudo_costs.cpp
- cpp/src/branch_and_bound/pseudo_costs.hpp
- cpp/src/dual_simplex/phase2.cpp
- cpp/src/dual_simplex/phase2.hpp
- cpp/src/math_optimization/solver_settings.cu
```cpp
#ifdef CHECK_CHANGE_IN_REDUCED_COST
  const i_t m = A_transpose.n;
  const i_t n = A_transpose.m;
  std::vector<f_t> delta_y_dense(m);
  delta_y.to_dense(delta_y_dense);
  std::vector<f_t> delta_z_check(n);
  std::vector<i_t> delta_z_mark_check(n, 0);
  std::vector<i_t> delta_z_indices_check;
  phase2::compute_reduced_cost_update(lp,
                                      basic_list,
                                      nonbasic_list,
                                      delta_y_dense,
                                      leaving_index,
                                      direction,
                                      delta_z_mark_check,
                                      delta_z_indices_check,
                                      delta_z_check,
                                      work_estimate);
  f_t error_check = 0.0;
  for (i_t k = 0; k < n; ++k) {
    const f_t diff = std::abs(delta_z[k] - delta_z_check[k]);
    if (diff > 1e-6) {
      printf("delta_z error %d transpose %e no transpose %e diff %e\n",
             k,
             delta_z[k],
             delta_z_check[k],
             diff);
    }
    error_check = std::max(error_check, diff);
  }
  if (error_check > 1e-6) { printf("delta_z error %e\n", error_check); }
#endif
```
CHECK_CHANGE_IN_REDUCED_COST path is currently uncompilable after refactor.
At Line 183, the debug path uses phase2::compute_reduced_cost_update(...) and references lp, basic_list, and nonbasic_list, but those symbols are not in scope in this function. Enabling the macro will fail compilation.
Suggested fix (thread required context into compute_delta_z)
```diff
-template <typename i_t, typename f_t>
-void compute_delta_z(const csc_matrix_t<i_t, f_t>& A_transpose,
+template <typename i_t, typename f_t>
+void compute_delta_z(const lp_problem_t<i_t, f_t>& lp,
+                     const std::vector<i_t>& basic_list,
+                     const std::vector<i_t>& nonbasic_list,
+                     const csc_matrix_t<i_t, f_t>& A_transpose,
                      const sparse_vector_t<i_t, f_t>& delta_y,
                      i_t leaving_index,
                      i_t direction,
@@
-  phase2::compute_reduced_cost_update(lp,
-                                      basic_list,
-                                      nonbasic_list,
-                                      delta_y_dense,
-                                      leaving_index,
-                                      direction,
-                                      delta_z_mark_check,
-                                      delta_z_indices_check,
-                                      delta_z_check,
-                                      work_estimate);
+  compute_reduced_cost_update(lp,
+                              basic_list,
+                              nonbasic_list,
+                              delta_y_dense,
+                              leaving_index,
+                              direction,
+                              delta_z_mark_check,
+                              delta_z_indices_check,
+                              delta_z_check,
+                              work_estimate);
@@
-  compute_delta_z(A_transpose,
+  compute_delta_z(lp,
+                  basic_list,
+                  nonbasic_list,
+                  A_transpose,
                   delta_y_sparse,
                   leaving_index,
                   direction,
                   nonbasic_mark,
                   delta_z_mark,
                   delta_z_indices,
                   delta_z,
                   phase2_work_estimate);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cpp/src/dual_simplex/phase2.cpp` around lines 175 - 206, The
CHECK_CHANGE_IN_REDUCED_COST debug block calls
phase2::compute_reduced_cost_update but references out-of-scope symbols (lp,
basic_list, nonbasic_list); to fix, thread the required context into
compute_delta_z by updating its signature (and all callers) to accept the LP and
basis lists (or pass appropriate local equivalents available in this translation
unit) and then forward those parameters into
phase2::compute_reduced_cost_update, or alternatively construct/obtain the
needed lp, basic_list and nonbasic_list inside compute_delta_z before the debug
call so the debug path compiles; ensure you update function names
compute_delta_z and the call site(s) consistently.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cpp/src/branch_and_bound/pseudo_costs.cpp`:
- Around line 420-431: The cutoff sum in the else branch of pseudo_costs.cpp may
multiply objective coefficients by infinite bounds; update the computation in
the block that sets child_settings.cut_off to check
std::isfinite(child_problem.upper[i]) and std::isfinite(child_problem.lower[i])
before doing the multiplication with original_lp.objective[i]; skip terms with
non-finite bounds, and if a skipped term has a non-zero objective coefficient
treat the cutoff as unbounded (set child_settings.cut_off to +/-INFINITY based
on the sign of original_lp.objective[i]) or otherwise keep the finite
accumulated sum, ensuring no multiplication by non-finite values occurs (refer
to child_settings.cut_off, original_lp.objective, and
child_problem.upper/lower).
- Around line 140-161: The debug block wrongly accesses
delta_y.lp_solution.x[k]; delta_y is a sparse_vector_t with arrays .i and .x, so
replace delta_y.lp_solution.x[k] with delta_y.x[k] (keeping the loop over k <
delta_y.i.size()) so the index/value pairs are read correctly; leave the rest of
the CHECK_DELTA_OBJ computation (uses delta_z_indices, vstatus, delta_z,
variable_j, delta_obj_check, step_length) unchanged so the validity check
compiles and compares delta_obj_check to delta_obj as intended.
- Around line 121-131: The debug block references an undefined variable y
causing a compile error; change the code to use the solver's dual vector (e.g.
lp_solution.y) or declare a local alias before use. Specifically, in the
CHECK_DUAL_FEASIBILITY block update the call to matrix_transpose_vector_multiply
to pass lp_solution.y (or create vector<f_t> y = lp_solution.y) instead of the
undefined y so dual_residual, matrix_transpose_vector_multiply, vector_norm_inf
and settings.log.printf work correctly.
- Around line 296-308: The fallback cutoff calculation when upper_bound is not
finite can produce +/-inf by multiplying objective coefficients with infinite
variable bounds; update the block that sets child_settings.cut_off (the else
branch using original_lp.objective, child_problem.upper and child_problem.lower)
to ignore terms where the corresponding child_problem.upper or
child_problem.lower is not finite (or only compute the fallback when all
contributing bounds are finite), summing only finite products and leaving
child_settings.cut_off at a safe default (or signaling no cutoff) if any
required bound is infinite; ensure the logic preserves the existing behavior
when bounds are finite and still adds settings.dual_tol when applicable.
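The guarded fallback-cutoff computation described in the comments above might look like the following sketch. Function and variable names are illustrative, not the solver's actual API: the idea is to sum only finite coefficient-times-bound products, and to report "no cutoff" (+inf) as soon as a non-zero objective coefficient meets an infinite relevant bound.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Sketch of the guarded fallback cutoff (names hypothetical). The worst-case
// objective contribution of each variable uses its upper bound when the
// coefficient is positive and its lower bound when it is negative. If that
// bound is infinite for a non-zero coefficient, no finite cutoff exists.
double fallback_cutoff(const std::vector<double>& objective,
                       const std::vector<double>& lower,
                       const std::vector<double>& upper,
                       double dual_tol)
{
  const double inf = std::numeric_limits<double>::infinity();
  double sum = 0.0;
  for (std::size_t i = 0; i < objective.size(); ++i) {
    const double c = objective[i];
    if (c == 0.0) continue;  // zero coefficients contribute nothing
    const double bound = c > 0.0 ? upper[i] : lower[i];
    if (!std::isfinite(bound)) return inf;  // signal "no cutoff available"
    sum += c * bound;
  }
  return sum + dual_tol;  // keep the existing dual_tol slack when finite
}
```

Callers would treat an infinite return value as "leave `child_settings.cut_off` at its safe default", so no multiplication by a non-finite value ever occurs.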
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7905b856-7959-4fe1-bae4-678afac404e8
📒 Files selected for processing (1)
cpp/src/branch_and_bound/pseudo_costs.cpp
/ok to test 703f30c
🧹 Nitpick comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)
1992-1998: Consider documenting the magic value -2 for reliability branching mode.

The value -2 for `settings_.reliability_branching` enables dual-simplex objective estimation, but this isn't documented in the code. Other code paths check `reliability_branching != 0` for general reliability branching (line 858), making the semantics of different values unclear. Consider adding a comment or using a named constant to clarify the different modes:
- 0: disabled
- -2: enabled with dual pivot estimation
- Other non-zero values: enabled without dual pivot estimation (?)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 1992 - 1998, The branch sets behavior when settings_.reliability_branching == -2 but that magic value is undocumented; replace the literal with a named constant or enum (e.g., RELIABILITY_BRANCHING_DUAL_PIVOT) and add a short comment describing the modes (0 = disabled, -2 = dual-pivot objective estimation, other non-zero = reliability branching without dual pivot). Update the check in this block that sets pc_.reliability_branching_settings.rank_candidates_with_dual_pivot and any other checks that compare settings_.reliability_branching (e.g., places using "reliability_branching != 0") to use the new named constant or enum variants so the semantics are clear throughout the codebase.
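A minimal sketch of the named-constant replacement suggested in the nitpick above. The mode values (0, -2, other non-zero) come from the review comment; the constant and helper names are hypothetical, not the PR's actual identifiers:

```cpp
#include <cassert>

// Illustrative named constants for the reliability-branching setting
// (0 = disabled, -2 = dual-pivot estimation, other non-zero = enabled
// without the dual-pivot estimate). Names are assumptions.
enum reliability_branching_mode : int {
  RELIABILITY_BRANCHING_DISABLED   = 0,
  RELIABILITY_BRANCHING_DUAL_PIVOT = -2,
};

inline bool reliability_branching_enabled(int setting)
{
  return setting != RELIABILITY_BRANCHING_DISABLED;
}

inline bool rank_candidates_with_dual_pivot(int setting)
{
  return setting == RELIABILITY_BRANCHING_DUAL_PIVOT;
}
```

Checks like the one at line 858 (`reliability_branching != 0`) would then read as `reliability_branching_enabled(...)`, making the mode semantics self-documenting.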
792-810: Debug output should use logger or be removed before merge.

This `std::cout` with `std::format` bypasses the existing logging infrastructure (`settings_.log`) which handles log prefixes, file redirection, and verbosity control. Given the PR is marked "[WIP]", this appears to be temporary diagnostic code. Consider:
- Removing this output before merging, or
- Converting to use `settings_.log.printf()` if the output is intended to be permanent, or
- Wrapping in a `#ifdef` debug macro if needed for development only

♻️ Suggested conversion to logger (if keeping):

```diff
-  std::cout << std::format(
-                "{}: user_obj={:.3g}, solver_obj={:.3g}, user_lower={:.3g}, "
-                "solver_lower={:.3g}, user_gap={:.3g}, "
-                "solver_gap={:.3g}, tol={:.3g}",
-                feasible_solution_symbol(thread_type),
-                user_upper,
-                upper,
-                user_lower,
-                lower,
-                std::abs(user_upper - user_lower),
-                std::abs(upper - lower),
-                settings_.absolute_mip_gap_tol)
-            << std::endl;
+  settings_.log.debug(
+    "%c: user_obj=%.3g, solver_obj=%.3g, user_lower=%.3g, "
+    "solver_lower=%.3g, user_gap=%.3g, solver_gap=%.3g, tol=%.3g\n",
+    feasible_solution_symbol(thread_type),
+    user_upper,
+    upper,
+    user_lower,
+    lower,
+    std::abs(user_upper - user_lower),
+    std::abs(upper - lower),
+    settings_.absolute_mip_gap_tol);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 792 - 810, The debug std::cout/std::format block should be removed or converted to use the project's logging API; replace the print in branch_and_bound.cpp (the block that reads upper_bound_, get_lower_bound(), compute_user_objective(original_lp_, ...), feasible_solution_symbol(...), settings_.absolute_mip_gap_tol) with a call to settings_.log.printf(...) (or remove entirely) so logs respect prefixes, redirection and verbosity; if this is for temporary debugging only, wrap it in a debug macro instead of using std::cout.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1992-1998: The branch sets behavior when
settings_.reliability_branching == -2 but that magic value is undocumented;
replace the literal with a named constant or enum (e.g.,
RELIABILITY_BRANCHING_DUAL_PIVOT) and add a short comment describing the modes
(0 = disabled, -2 = dual-pivot objective estimation, other non-zero =
reliability branching without dual pivot). Update the check in this block that
sets pc_.reliability_branching_settings.rank_candidates_with_dual_pivot and any
other checks that compare settings_.reliability_branching (e.g., places using
"reliability_branching != 0") to use the new named constant or enum variants so
the semantics are clear throughout the codebase.
- Around line 792-810: The debug std::cout/std::format block should be removed
or converted to use the project's logging API; replace the print in
branch_and_bound.cpp (the block that reads upper_bound_, get_lower_bound(),
compute_user_objective(original_lp_, ...), feasible_solution_symbol(...),
settings_.absolute_mip_gap_tol) with a call to settings_.log.printf(...) (or
remove entirely) so logs respect prefixes, redirection and verbosity; if this is
for temporary debugging only, wrap it in a debug macro instead of using
std::cout.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 91d703fa-65d3-4ee5-8b05-99dbef2b49dd
📒 Files selected for processing (2)
- cpp/src/branch_and_bound/branch_and_bound.cpp
- cpp/src/dual_simplex/phase2.cpp
🚧 Files skipped from review as they are similar to previous changes (1)
- cpp/src/dual_simplex/phase2.cpp
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test 9c9d019

/ok to test 5e16269
chris-maes left a comment:
Thanks for the PR Nicolas!
Please double-check the upper bound on the objective; it looks incorrect.
Also, let's introduce a new setting so we don't conflict with @Kh4ster 's PR.
I think it's also possible to factor out some of the duplicate code here.
…g branching as a setting Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test c4f246e
…n in strong branching Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test 9ea1248
…istic_test Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
/ok to test cd852b5
```cpp
#define CUOPT_MIP_CUT_CHANGE_THRESHOLD "mip_cut_change_threshold"
#define CUOPT_MIP_CUT_MIN_ORTHOGONALITY "mip_cut_min_orthogonality"
#define CUOPT_MIP_BATCH_PDLP_STRONG_BRANCHING "mip_batch_pdlp_strong_branching"
#define CUOPT_MIP_STRONG_BRANCHING_SIMPLEX_ITER_LIMIT "mip_strong_branching_simplex_iter_limit"
```
This is minor, but it becomes part of the API, so it's probably worth getting correct. All other parameters use full words rather than abbreviations.
So we should probably call this mip_strong_branching_simplex_iteration_limit
```cpp
i_t simplex_iteration_limit = settings.strong_branching_simplex_iteration_limit;
// ...
if (simplex_iteration_limit < 1) {
```
If I understand correctly, you've set the default of strong_branching_simplex_iteration_limit = -1. Does that mean by default we will use initialize_psuedo_costs_with_estimates? Or maybe somewhere else in the code you catch this?
Ah got it. You set it to 200 in solver.cu if it was set to -1.
```cpp
  context.settings.mip_batch_pdlp_strong_branching;
// ...
branch_and_bound_settings.strong_branching_simplex_iteration_limit =
  context.settings.strong_branching_simplex_iteration_limit < 0
```
Probably this should be <= 0. Since if the user sets it to zero now, we still need to initialize the pseudocosts for the code to work.
If the user sets it to 0 here, we use the objective estimate via dual simplex pivot.
chris-maes left a comment:
Thanks so much Nicolas.
I'm removing 'request changes', but please make the following changes before merging
- CUOPT_MIP_STRONG_BRANCHING_SIMPLEX_ITER_LIMIT -> CUOPT_MIP_STRONG_BRANCHING_SIMPLEX_ITERATION_LIMIT
- Set iteration limit to 200 if user sets value <= 0. So -1 and 0 both default to 200.
Otherwise LGTM. Excited to see the improvement in reliability branching.
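The requested clamping of the iteration limit can be sketched as below. The helper name is hypothetical; the -1 "unset" sentinel and the 200-iteration default come from the discussion above.

```cpp
#include <cassert>

// Sketch of the requested default handling: any non-positive user value,
// including the -1 "unset" sentinel and 0, falls back to the 200-iteration
// default, so the pseudo-costs are always initialized with a usable limit.
inline int resolve_simplex_iteration_limit(int user_value)
{
  constexpr int kDefaultIterationLimit = 200;
  return user_value <= 0 ? kDefaultIterationLimit : user_value;
}
```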
We can use a single dual simplex pivot to estimate the objective change after branching up or down on a variable (#963), which can later be used for ranking the unreliable variables in reliability branching. To minimize the cost of reliability branching, we only apply trial branching to the 100 best variables; other variables are not considered. Previously, the 100 unreliable variables were selected at random, which may not lead to the best branching decisions. In this PR, we also apply a cutoff based on the objective coefficients when no incumbent is available.
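The selection change described above (ranking unreliable candidates by the single-pivot objective-change estimate instead of sampling them at random) can be sketched as follows; the struct and function names are illustrative, not the PR's actual code:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical candidate record: the estimate comes from one dual simplex
// pivot on the branching LP, as described in the PR summary.
struct candidate_t {
  int variable;
  double estimated_obj_change;
};

// Rank unreliable candidates by estimated objective change (largest first,
// since a bigger predicted bound improvement is more promising) and keep
// only the best max_num_candidates for trial strong branching.
std::vector<candidate_t> select_best(std::vector<candidate_t> unreliable,
                                     std::size_t max_num_candidates)
{
  std::sort(unreliable.begin(), unreliable.end(),
            [](const candidate_t& a, const candidate_t& b) {
              return a.estimated_obj_change > b.estimated_obj_change;
            });
  if (unreliable.size() > max_num_candidates) { unreliable.resize(max_num_candidates); }
  return unreliable;
}
```

With this ranking, the 100 trial-branched variables are the ones the single-pivot estimate considers most promising, rather than a random subset.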
MIPLIB Benchmark (10 minute runs) on a GH200 system:
Does not include `graphdraw-domain` and `cbscta`.

Checklist