
Reduced cost strenghtening #956

Open
nguidotti wants to merge 29 commits into NVIDIA:release/26.04 from nguidotti:reduced-cost-fixing

Conversation

@nguidotti
Contributor

@nguidotti nguidotti commented Mar 13, 2026

In this PR, we apply reduced cost strengthening to the root node every time the incumbent is updated. This may tighten variable bounds and fix variables to their bounds globally.

Additionally,

  • Renamed the files related to the B&B workers to better fit the new file structure.
  • Absolute gap is now calculated in the user space instead of the solver space.

Pending: MIPLIB benchmark and testing

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

@nguidotti nguidotti added this to the 26.04 milestone Mar 13, 2026
@nguidotti nguidotti self-assigned this Mar 13, 2026
@nguidotti nguidotti requested a review from a team as a code owner March 13, 2026 14:10
@nguidotti nguidotti added the non-breaking (Introduces a non-breaking change) label Mar 13, 2026
@nguidotti nguidotti requested review from Kh4ster and aliceb-nv March 13, 2026 14:10
@nguidotti nguidotti added the do not merge (Do not merge if this flag is set), improvement (Improves an existing functionality), and mip labels Mar 13, 2026
@copy-pr-bot

copy-pr-bot bot commented Mar 13, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@nguidotti
Contributor Author

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Mar 13, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai

coderabbitai bot commented Mar 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Moved reduced-cost fixing into a root-centric flow (root_reduced_cost_fixing + presolve), added per-worker worker and worker-pool types for start-bound coordination and broadcasting root bound updates, and changed node APIs to accept start bounds and return feasibility while updating call sites accordingly.

Changes

Cohort / File(s) Summary
Branch-and-bound core
cpp/src/branch_and_bound/branch_and_bound.cpp, cpp/src/branch_and_bound/branch_and_bound.hpp
Removed find_reduced_cost_fixings; added root_reduced_cost_fixing(f_t) and switched root tightening to call reduced-cost fixing + presolve under mutex_original_lp_. Updated solution paths to invoke root fixing and mark numerical infeasibility when applicable.
Worker pool & workers
cpp/src/branch_and_bound/worker_pool.hpp, cpp/src/branch_and_bound/worker.hpp
Added branch_and_bound_worker_pool_t and branch_and_bound_worker_t with per-worker state (start_lower, start_upper, start_bounds_updated, strategy flags). Implemented init/diving APIs, idle-worker management, lower-bound aggregation, and root-bound broadcast.
Node bounds & API
cpp/src/branch_and_bound/mip_node.hpp, cpp/src/branch_and_bound/mip_node.cpp
Reworked node bound APIs: get_variable_bounds now takes start_lower/start_upper and returns bool feasibility; update_branched_variable_bounds returns bool; added is_infeasible/check_variable_bounds. Removed standalone inactive_status impl from cpp file (now inline).
Deterministic/diving workers & callers
cpp/src/branch_and_bound/deterministic_workers.hpp, cpp/src/branch_and_bound/...
Call sites updated to pass original_lp start bounds into node bound queries, added feasibility checks in enqueue/init paths, added start_bounds_updated handling in solve_node_lp and diving loops, and adjusted worker init synchronization.
Reduced-cost fixing helper
cpp/src/branch_and_bound/reduced_cost_fixing.hpp
New reduced_cost_fixing(...) implementation computing tightened integer bounds from reduced costs, updating bounds_changed, and returning (num_fixed, num_improved).
Diving heuristics & solver settings
cpp/src/mip_heuristics/..., cpp/src/dual_simplex/simplex_solver_settings.hpp, cpp/src/mip_heuristics/solver.cu
Set explicit reduced_cost_strengthening values for sub-MIP/RINS and normalized -1 sentinel to stronger default (3). Expanded comments for reduced_cost_strengthening.
Includes & build
cpp/src/branch_and_bound/pseudo_costs.hpp, cpp/src/branch_and_bound/..., cpp/src/branch_and_bound/CMakeLists.txt
Adjusted includes to use new worker.hpp/worker_pool.hpp, added reduced_cost_fixing.hpp/diving headers, removed unused includes, and removed mip_node.cpp from CMake sources.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 23.26%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed — The title 'Reduced cost strenghtening' directly matches the main objective of the PR: applying reduced-cost strengthening to improve variable bounds and fix variables globally.
  • Description check ✅ Passed — The description is related to the changeset, explaining the core feature (reduced-cost strengthening at the root node), additional refactoring (file renames), and gap calculation changes, aligning with the actual changes in the PR.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
cpp/src/mip_heuristics/solver.cu (1)

226-236: ⚠️ Potential issue | 🟠 Major

Use one normalization for reduced_cost_strengthening.

Lines 233-236 overwrite the fallback from Lines 226-228, so -1 still resolves to 2 and values < -1 pass through unchanged. If automatic mode is meant to pick the new incumbent-update behavior, it never will; if it is meant to stay at 2, the new assignment is dead code.

As per coding guidelines, "Validate correct initialization of variable bounds, constraint coefficients, and algorithm state before solving; ensure reset when transitioning between algorithm phases (presolve, simplex, diving, crossover)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/mip_heuristics/solver.cu` around lines 226 - 236, The two assignments
to branch_and_bound_settings.reduced_cost_strengthening conflict; normalize
context.settings.reduced_cost_strengthening once and assign that result. Compute
a local rc = context.settings.reduced_cost_strengthening and normalize like: if
rc == -1 set rc_normalized = 2; else if rc >= 0 set rc_normalized = rc; else set
rc_normalized = 3; then set branch_and_bound_settings.reduced_cost_strengthening
= rc_normalized (replace both existing assignments). This uses the unique
symbols branch_and_bound_settings.reduced_cost_strengthening and
context.settings.reduced_cost_strengthening so the logic is clear and
deterministic.
🧹 Nitpick comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)

449-496: Consider clarifying the snapshot-based bounds update pattern.

The code copies original_lp_.lower/upper under mutex_original_lp_, then performs reduced cost fixing on this snapshot, and finally applies updates via update_root_bounds. If bounds are modified by another thread between the copy and the update, those intermediate changes would be overwritten.

This appears intentional for correctness (reduced cost fixing based on consistent snapshot), but a brief comment explaining this design would help future maintainers understand the rationale.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 449 - 496, The
code takes a snapshot of original_lp_.lower and original_lp_.upper under
mutex_original_lp_, uses that snapshot for find_reduced_cost_fixings, and then
calls update_root_bounds to apply the snapshot-derived changes; add a short
explanatory comment near the block that copies original_lp_.lower/upper (around
mutex_original_lp_.lock/unlock) clarifying that the snapshot is intentional to
ensure reduced-cost fixings are computed from a consistent view and that
update_root_bounds may overwrite any intermediate bound changes made by other
threads to preserve correctness; reference mutex_original_lp_,
original_lp_.lower/upper, find_reduced_cost_fixings, bounds_changed, and
update_root_bounds in the comment so future maintainers understand the
rationale.
cpp/src/branch_and_bound/worker.hpp (1)

144-149: Non-atomic check-and-clear pattern for start_bounds_updated.

The check-and-clear of start_bounds_updated (lines 146-148) is not atomic. If broadcast_root_bounds_change() sets this flag between the check and the reset, the update notification could be lost for this iteration.

Given the use case (triggering bounds recomputation), missing one update is likely acceptable since it will be caught on the next iteration. However, if stronger guarantees are needed, consider using compare_exchange or document this as intentional.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/worker.hpp` around lines 144 - 149, The current
non-atomic check-and-clear of start_bounds_updated before setting
recompute_bounds can race with broadcast_root_bounds_change() and lose an
update; make the intent explicit by either converting start_bounds_updated to an
std::atomic<bool> and replacing the check-and-clear with an atomic
test-and-clear (e.g., compare_exchange or exchange) so the flag is consumed
atomically, or if the relaxed “lose-one-update-is-acceptable” behavior is
intended, add a clear comment next to start_bounds_updated and this block
documenting that choice; refer to start_bounds_updated, recompute_bounds, and
broadcast_root_bounds_change() when applying the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 493-494: Update the typo in the log message string "Bound
strenghtening failed when updating the bounds at the root node!" to "Bound
strengthening failed when updating the bounds at the root node!" wherever it
appears; locate the occurrences by searching for the exact message passed to
settings_.log.printf (and similar printf calls) in branch_and_bound.cpp and
replace "strenghtening" with "strengthening" (also update any identical messages
in other branches that set solver_status_ = mip_status_t::NUMERICAL so all
instances are consistent).
- Around line 1756-1765: The call to start_node.value()->check_variable_bounds
reads original_lp_.lower and original_lp_.upper without synchronization, causing
a potential data race with update_root_bounds; fix by acquiring
mutex_original_lp_ (or otherwise synchronizing) before accessing
original_lp_.lower/upper or by copying original_lp_.lower and original_lp_.upper
into local vectors under mutex_original_lp_ and then calling
check_variable_bounds with those copies; update the code around the check in
branch_and_bound.cpp (the block that calls check_variable_bounds on start_node)
to use mutex_original_lp_ or the captured copies to ensure safe concurrent
access.
- Around line 3166-3181: In solve_node_deterministic, reads of
original_lp_.lower and original_lp_.upper (used in calls to
node_ptr->get_variable_bounds and node_ptr->update_branched_variable_bounds) are
racy with update_root_bounds/set_new_solution; protect these accesses by
acquiring mutex_original_lp_ (the same mutex used in update_root_bounds) around
the region that reads original_lp_.lower/upper, or take a local snapshot of
original_lp_.lower and original_lp_.upper under mutex at the start of
solve_node_deterministic and use the snapshot for the subsequent calls so
node_ptr->get_variable_bounds / node_ptr->update_branched_variable_bounds and
assignments to worker.leaf_problem.lower/upper are fed consistent, thread-safe
bounds.

In `@cpp/src/branch_and_bound/deterministic_workers.hpp`:
- Around line 316-326: enqueue_dive_node currently ignores the boolean result of
get_variable_bounds and always detaches and queues the node, which can enqueue
stale/invalid dive nodes; change enqueue_dive_node to check the return value of
get_variable_bounds (called on node with original_lp and filling
entry.resolved_lower/upper) and if it returns false, do not call
node->detach_copy() or push entry to dive_queue (free or leave node unchanged),
otherwise proceed to set entry.node = node->detach_copy() and
dive_queue.push_back(std::move(entry)); ensure you reference dive_queue_entry_t,
detach_copy, and dive_queue when making the check.

In `@cpp/src/branch_and_bound/mip_node.hpp`:
- Around line 104-120: The check_variable_bounds method in mip_node_t
incorrectly rejects ancestor bounds for variables that have been re-branched
deeper in the tree; update check_variable_bounds(branch_var, branch_var_lower,
branch_var_upper) to maintain a seen-variable set (e.g., a
std::unordered_set<int> seen_vars or a bitset keyed by branch_var) while walking
from this node up through parent pointers so that when you encounter an ancestor
with the same branch_var you skip that ancestor's bounds (i.e., ignore outdated
looser bounds) and only validate the first occurrence for each variable; apply
the same seen-variable logic to the analogous ancestor-check block referenced
around lines 165-179 to prevent false pruning after root tightening.

In `@cpp/src/branch_and_bound/worker_pool.hpp`:
- Around line 83-91: broadcast_root_bounds_change() currently only updates
start_bounds_updated for workers with search_strategy == BEST_FIRST; change it
to mark start_bounds_updated = true for any active worker so diving strategies
(PSEUDOCOST_DIVING, LINE_SEARCH_DIVING, GUIDED_DIVING, COEFFICIENT_DIVING) also
receive incumbent-driven root-bound updates. In practice, inside
broadcast_root_bounds_change() keep the is_initialized guard, iterate workers_
and for each worker where workers_[i]->is_active set
workers_[i]->start_bounds_updated = true (remove the BEST_FIRST-only condition);
this ensures all strategies returned by
get_search_strategies()/get_max_workers() get the update.

---

Outside diff comments:
In `@cpp/src/mip_heuristics/solver.cu`:
- Around line 226-236: The two assignments to
branch_and_bound_settings.reduced_cost_strengthening conflict; normalize
context.settings.reduced_cost_strengthening once and assign that result. Compute
a local rc = context.settings.reduced_cost_strengthening and normalize like: if
rc == -1 set rc_normalized = 2; else if rc >= 0 set rc_normalized = rc; else set
rc_normalized = 3; then set branch_and_bound_settings.reduced_cost_strengthening
= rc_normalized (replace both existing assignments). This uses the unique
symbols branch_and_bound_settings.reduced_cost_strengthening and
context.settings.reduced_cost_strengthening so the logic is clear and
deterministic.

---

Nitpick comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 449-496: The code takes a snapshot of original_lp_.lower and
original_lp_.upper under mutex_original_lp_, uses that snapshot for
find_reduced_cost_fixings, and then calls update_root_bounds to apply the
snapshot-derived changes; add a short explanatory comment near the block that
copies original_lp_.lower/upper (around mutex_original_lp_.lock/unlock)
clarifying that the snapshot is intentional to ensure reduced-cost fixings are
computed from a consistent view and that update_root_bounds may overwrite any
intermediate bound changes made by other threads to preserve correctness;
reference mutex_original_lp_, original_lp_.lower/upper,
find_reduced_cost_fixings, bounds_changed, and update_root_bounds in the comment
so future maintainers understand the rationale.

In `@cpp/src/branch_and_bound/worker.hpp`:
- Around line 144-149: The current non-atomic check-and-clear of
start_bounds_updated before setting recompute_bounds can race with
broadcast_root_bounds_change() and lose an update; make the intent explicit by
either converting start_bounds_updated to an std::atomic<bool> and replacing the
check-and-clear with an atomic test-and-clear (e.g., compare_exchange or
exchange) so the flag is consumed atomically, or if the relaxed
“lose-one-update-is-acceptable” behavior is intended, add a clear comment next
to start_bounds_updated and this block documenting that choice; refer to
start_bounds_updated, recompute_bounds, and broadcast_root_bounds_change() when
applying the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 13a663fd-c164-4372-9e91-020f8604d80c

📥 Commits

Reviewing files that changed from the base of the PR and between d531ad1 and 23b9bb9.

📒 Files selected for processing (10)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/branch_and_bound.hpp
  • cpp/src/branch_and_bound/deterministic_workers.hpp
  • cpp/src/branch_and_bound/mip_node.hpp
  • cpp/src/branch_and_bound/pseudo_costs.hpp
  • cpp/src/branch_and_bound/reduced_cost_fixing.hpp
  • cpp/src/branch_and_bound/worker.hpp
  • cpp/src/branch_and_bound/worker_pool.hpp
  • cpp/src/dual_simplex/simplex_solver_settings.hpp
  • cpp/src/mip_heuristics/solver.cu

@nguidotti
Contributor Author

/ok to test 1862e36


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
cpp/src/branch_and_bound/reduced_cost_fixing.hpp (1)

32-33: Potential narrowing conversion from size_t to i_t.

reduced_costs.size() returns size_t, which is assigned to i_t. If i_t is int, this could truncate on very large problems. Consider using static_cast<i_t> to make the conversion explicit, or validate that the size fits within the range of i_t.

🔧 Proposed fix
-  i_t num_cols_to_check = reduced_costs.size();  // Reduced costs will be smaller than the original
-                                                 // problem because we have added slacks for cuts
+  i_t num_cols_to_check = static_cast<i_t>(reduced_costs.size());  // Reduced costs will be smaller than the original
+                                                                   // problem because we have added slacks for cuts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/reduced_cost_fixing.hpp` around lines 32 - 33, The
assignment to i_t num_cols_to_check = reduced_costs.size() risks a narrowing
conversion from size_t to i_t; change this to an explicit conversion and guard:
compute size_t rc_size = reduced_costs.size(), assert or check rc_size fits in
the range of i_t, then assign using num_cols_to_check =
static_cast<i_t>(rc_size). Update references to num_cols_to_check accordingly
and/or handle the overflow case (throw or clamp) so the conversion is explicit
and safe.
cpp/src/branch_and_bound/worker.hpp (1)

114-128: Consider making original_lp parameter const.

The function only reads original_lp.lower and original_lp.upper to copy them into start_lower and start_upper. A const reference would clarify intent and prevent accidental modifications.

🔧 Proposed fix
-  bool init_best_first(mip_node_t<i_t, f_t>* node, lp_problem_t<i_t, f_t>& original_lp)
+  bool init_best_first(mip_node_t<i_t, f_t>* node, const lp_problem_t<i_t, f_t>& original_lp)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/worker.hpp` around lines 114 - 128, The
init_best_first function should take original_lp as a const reference since it
only reads original_lp.lower and original_lp.upper; change the signature to
init_best_first(mip_node_t<i_t, f_t>* node, const lp_problem_t<i_t, f_t>&
original_lp) and update any callers accordingly (ensure
mip_node_t::check_variable_bounds still accepts const lower/upper or pass
originals unchanged); this clarifies intent and prevents accidental modification
of original_lp inside init_best_first.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 1773-1781: Call the same mutex around reads of original_lp_ before
invoking init_best_first to avoid the race: wrap access to original_lp_ with the
same mutex used in update_root_bounds (mutex_original_lp_) — mirroring the
init_diving pattern — then call worker->init_best_first(start_node.value(),
original_lp_) while holding the lock and release it immediately after; ensure
the lock scope is minimal and only protects reading original_lp_ to match the
concurrency pattern used elsewhere.

---

Nitpick comments:
In `@cpp/src/branch_and_bound/reduced_cost_fixing.hpp`:
- Around line 32-33: The assignment to i_t num_cols_to_check =
reduced_costs.size() risks a narrowing conversion from size_t to i_t; change
this to an explicit conversion and guard: compute size_t rc_size =
reduced_costs.size(), assert or check rc_size fits in the range of i_t, then
assign using num_cols_to_check = static_cast<i_t>(rc_size). Update references to
num_cols_to_check accordingly and/or handle the overflow case (throw or clamp)
so the conversion is explicit and safe.

In `@cpp/src/branch_and_bound/worker.hpp`:
- Around line 114-128: The init_best_first function should take original_lp as a
const reference since it only reads original_lp.lower and original_lp.upper;
change the signature to init_best_first(mip_node_t<i_t, f_t>* node, const
lp_problem_t<i_t, f_t>& original_lp) and update any callers accordingly (ensure
mip_node_t::check_variable_bounds still accepts const lower/upper or pass
originals unchanged); this clarifies intent and prevents accidental modification
of original_lp inside init_best_first.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: df393de4-b16d-4e5a-af7a-d5c4777fbe32

📥 Commits

Reviewing files that changed from the base of the PR and between 23b9bb9 and 1862e36.

📒 Files selected for processing (5)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/reduced_cost_fixing.hpp
  • cpp/src/branch_and_bound/worker.hpp
  • cpp/src/branch_and_bound/worker_pool.hpp
  • cpp/src/dual_simplex/simplex_solver_settings.hpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/dual_simplex/simplex_solver_settings.hpp

@nguidotti
Contributor Author

/ok to test bd1ad23


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
cpp/src/branch_and_bound/branch_and_bound.cpp (1)

3137-3156: ⚠️ Potential issue | 🔴 Critical

Protect deterministic root-bound reads with mutex_original_lp_.

The FIXME is still real: Line 3146 and Line 3152 read original_lp_.lower/upper without synchronization, while other threads update those vectors under mutex_original_lp_. This is a data race in deterministic solve.

🔧 Minimal fix (snapshot bounds under mutex, then use snapshot)
+  std::vector<f_t> start_lower;
+  std::vector<f_t> start_upper;
+  mutex_original_lp_.lock();
+  start_lower = original_lp_.lower;
+  start_upper = original_lp_.upper;
+  mutex_original_lp_.unlock();
+
   if (worker.recompute_bounds_and_basis) {
-    feasible = node_ptr->get_variable_bounds(original_lp_.lower,
-                                             original_lp_.upper,
+    feasible = node_ptr->get_variable_bounds(start_lower,
+                                             start_upper,
                                              worker.leaf_problem.lower,
                                              worker.leaf_problem.upper,
                                              worker.bounds_changed);
   } else {
-    feasible = node_ptr->update_branched_variable_bounds(original_lp_.lower,
-                                                         original_lp_.upper,
+    feasible = node_ptr->update_branched_variable_bounds(start_lower,
+                                                         start_upper,
                                                          worker.leaf_problem.lower,
                                                          worker.leaf_problem.upper,
                                                          worker.bounds_changed);
   }
As per coding guidelines, "Ensure race conditions are absent in multi-GPU code and multi-threaded server implementations; verify proper synchronization of shared state."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 3137 - 3156, The
code in solve_node_deterministic currently reads original_lp_.lower and
original_lp_.upper without synchronization causing a data race with updates
performed under mutex_original_lp_ (e.g., in update_root_bounds called from
set_new_solution); to fix, inside solve_node_deterministic acquire
mutex_original_lp_ (or better: take a local snapshot) before any access to
original_lp_.lower/upper, copy them into local vectors (e.g.,
snapshot_lower/snapshot_upper), release the mutex, and then use
snapshot_lower/snapshot_upper in the calls to node_ptr->get_variable_bounds and
node_ptr->update_branched_variable_bounds instead of original_lp_.lower/upper so
no unsynchronized reads occur.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Line 2650: Deterministic mode currently skips reduced-cost fixing (comment at
branch_and_bound.cpp) so new incumbents there don't call
root_reduced_cost_fixing(...); modify the incumbent-update path used in
deterministic runs to invoke root_reduced_cost_fixing(...) with the same
parameters as the non-deterministic branch (ensure you call it from the
deterministic incumbent-update routine that updates the incumbent and root
bounds), preserve any guard checks or flags around reduced-cost fixing, and
mirror any logging or error handling present in the existing non-deterministic
invocation to keep behavior consistent.
- Around line 3762-3772: The calls to node_ptr->get_variable_bounds and
node_ptr->update_branched_variable_bounds ignore their boolean feasibility
return values; change the deterministic-dive logic to check those returns and
abort the dive when they indicate infeasibility. Specifically, after calling
node_ptr->get_variable_bounds(...) and
node_ptr->update_branched_variable_bounds(...), test the returned bool and if
false stop the current dive flow (e.g., set the dive/leaf infeasible flag,
undo/clean any partial state for worker.leaf_problem and worker.bounds_changed,
and return or jump to the existing infeasible-handling path used elsewhere in
this routine). Ensure you reference the same variables (worker.dive_lower,
worker.dive_upper, worker.leaf_problem.lower/upper, worker.bounds_changed) so
the existing cleanup and logging are used consistently.

---

Duplicate comments:
In `@cpp/src/branch_and_bound/branch_and_bound.cpp`:
- Around line 3137-3156: The code in solve_node_deterministic currently reads
original_lp_.lower and original_lp_.upper without synchronization causing a data
race with updates performed under mutex_original_lp_ (e.g., in
update_root_bounds called from set_new_solution); to fix, inside
solve_node_deterministic acquire mutex_original_lp_ (or better: take a local
snapshot) before any access to original_lp_.lower/upper, copy them into local
vectors (e.g., snapshot_lower/snapshot_upper), release the mutex, and then use
snapshot_lower/snapshot_upper in the calls to node_ptr->get_variable_bounds and
node_ptr->update_branched_variable_bounds instead of original_lp_.lower/upper so
no unsynchronized reads occur.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c0a41f46-5668-44fa-b9d1-229adfafd6f4

📥 Commits

Reviewing files that changed from the base of the PR and between bd1ad23 and b52277f.

📒 Files selected for processing (3)
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/branch_and_bound.hpp
  • cpp/src/branch_and_bound/deterministic_workers.hpp

@nguidotti nguidotti force-pushed the reduced-cost-fixing branch from 6114f7d to e5e7d26 on March 16, 2026 12:44
@nguidotti nguidotti removed the do not merge Do not merge if this flag is set label Mar 25, 2026
@nguidotti nguidotti changed the title from "[WIP] Reduced cost strenghtening after incumbent update" to "Reduced cost strenghtening" Mar 25, 2026
@nguidotti
Contributor Author

/ok to test 14a8712

@chris-maes chris-maes added the P2 label Mar 25, 2026
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test da6b8fa

…unction).

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti nguidotti requested review from a team as code owners March 30, 2026 15:46
@nguidotti nguidotti requested a review from tmckayus March 30, 2026 15:46
@nguidotti
Contributor Author

/ok to test d1ca630

@nguidotti nguidotti marked this pull request as draft March 30, 2026 15:48
@nguidotti nguidotti marked this pull request as ready for review March 31, 2026 08:36
Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test fe672d3

…fixing

# Conflicts:
#	cpp/src/branch_and_bound/branch_and_bound.cpp
@nguidotti
Contributor Author

/ok to test 18d6474

Signed-off-by: Nicolas Guidotti <224634272+nguidotti@users.noreply.github.com>
@nguidotti
Contributor Author

/ok to test c785f08

@nguidotti
Contributor Author

@CodeRabbit review

@coderabbitai

coderabbitai bot commented Apr 1, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (2)
cpp/src/branch_and_bound/branch_and_bound.cpp (2)

3182-3201: ⚠️ Potential issue | 🔴 Critical

Resolve unsynchronized original_lp_ bounds reads in deterministic node solve.

Lines 3191-3201 read original_lp_.lower/upper without synchronization while other paths update them under mutex_original_lp_.

🔒 Proposed fix
-  if (worker.recompute_bounds_and_basis) {
-    feasible = node_ptr->get_variable_bounds(original_lp_.lower,
-                                             original_lp_.upper,
+  std::vector<f_t> root_lower;
+  std::vector<f_t> root_upper;
+  {
+    std::lock_guard<std::mutex> lock(mutex_original_lp_);
+    root_lower = original_lp_.lower;
+    root_upper = original_lp_.upper;
+  }
+
+  if (worker.recompute_bounds_and_basis) {
+    feasible = node_ptr->get_variable_bounds(root_lower,
+                                             root_upper,
                                              worker.leaf_problem.lower,
                                              worker.leaf_problem.upper,
                                              worker.bounds_changed);
   } else {
-    feasible = node_ptr->update_branched_variable_bounds(original_lp_.lower,
-                                                         original_lp_.upper,
+    feasible = node_ptr->update_branched_variable_bounds(root_lower,
+                                                         root_upper,
                                                          worker.leaf_problem.lower,
                                                          worker.leaf_problem.upper,
                                                          worker.bounds_changed);
   }

As per coding guidelines, "Ensure race conditions are absent in multi-threaded server implementations; verify proper synchronization of shared state."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 3182 - 3201,
solve_node_deterministic currently reads original_lp_.lower and
original_lp_.upper unsafely when calling node_ptr->get_variable_bounds or
node_ptr->update_branched_variable_bounds; protect those accesses by acquiring
mutex_original_lp_ (the same mutex used by update_root_bounds) before reading
the bounds or, better, under mutex_original_lp_ take a thread-local snapshot
(copy original_lp_.lower and original_lp_.upper into local vectors of the
correct size) at the start of solve_node_deterministic and then pass those local
copies to node_ptr->get_variable_bounds /
node_ptr->update_branched_variable_bounds (so worker.recompute_bounds_and_basis
logic uses the snapshot), ensuring no repeated locking and preserving
deterministic behavior for reduced-cost strengthening.

2695-2695: ⚠️ Potential issue | 🟠 Major

Deterministic mode still bypasses incumbent-triggered root reduced-cost fixing.

Line 2695 confirms this gap, and deterministic incumbent updates at Lines 3327-3332 and 3490-3494 only update upper_bound_/incumbent_ without calling root_reduced_cost_fixing(...).

As per coding guidelines, "Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results."

Also applies to: 3327-3332, 3490-3494

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` at line 2695,
Deterministic-mode incumbent updates are currently only setting upper_bound_ and
incumbent_ without invoking root_reduced_cost_fixing(...), so add a call to
root_reduced_cost_fixing(...) immediately after the deterministic-path updates
to upper_bound_/incumbent_ (the same call used in the non-deterministic path) to
enable incumbent-triggered root reduced-cost fixing; ensure you use the same
parameters/guards (and any mutex/flag checks) as the existing non-deterministic
invocation to preserve thread-safety and behavior, updating the sites that set
upper_bound_/incumbent_ in the deterministic flow (the blocks corresponding to
the incumbent updates around the two deterministic branches) to include that
call.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 360f9cd0-28f2-4dba-baae-1949326fd4c0

📥 Commits

Reviewing files that changed from the base of the PR and between b52277f and c785f08.

📒 Files selected for processing (10)
  • cpp/src/branch_and_bound/CMakeLists.txt
  • cpp/src/branch_and_bound/branch_and_bound.cpp
  • cpp/src/branch_and_bound/branch_and_bound.hpp
  • cpp/src/branch_and_bound/mip_node.cpp
  • cpp/src/branch_and_bound/mip_node.hpp
  • cpp/src/branch_and_bound/reduced_cost_fixing.hpp
  • cpp/src/branch_and_bound/worker.hpp
  • cpp/src/mip_heuristics/diversity/lns/rins.cu
  • cpp/src/mip_heuristics/diversity/recombiners/sub_mip.cuh
  • cpp/src/mip_heuristics/solver.cu
💤 Files with no reviewable changes (2)
  • cpp/src/branch_and_bound/CMakeLists.txt
  • cpp/src/branch_and_bound/mip_node.cpp
✅ Files skipped from review due to trivial changes (2)
  • cpp/src/mip_heuristics/solver.cu
  • cpp/src/branch_and_bound/branch_and_bound.hpp
🚧 Files skipped from review as they are similar to previous changes (1)
  • cpp/src/branch_and_bound/mip_node.hpp

Comment on lines +420 to +426

    worker_pool_.broadcast_root_bounds_change();

    if (!feasible) {
      settings_.log.printf(
        "Bound strengthening failed when updating the bounds at the root node!\n");
      solver_status_ = mip_status_t::NUMERICAL;
    }

⚠️ Potential issue | 🟠 Major

Broadcast root-bound updates only after feasibility succeeds.

Line 420 broadcasts changes before Lines 423-425 can detect an invalid strengthening result. That can push invalid root bounds to active workers.

🔧 Proposed fix
-    worker_pool_.broadcast_root_bounds_change();
-
     if (!feasible) {
       settings_.log.printf(
         "Bound strengthening failed when updating the bounds at the root node!\n");
       solver_status_ = mip_status_t::NUMERICAL;
+    } else {
+      worker_pool_.broadcast_root_bounds_change();
     }

As per coding guidelines, "Validate algorithm correctness in optimization logic: simplex pivots, branch-and-bound decisions, routing heuristics, and constraint/objective handling must produce correct results."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/branch_and_bound.cpp` around lines 420 - 426, The
broadcast of root-bound updates happens before verifying the strengthening
result; move the call to worker_pool_.broadcast_root_bounds_change() so it only
executes when feasible is true. Specifically, after computing feasible, keep the
failure handling that sets solver_status_ = mip_status_t::NUMERICAL and logs via
settings_.log.printf, and call worker_pool_.broadcast_root_bounds_change() only
in the success path (when feasible is true) to avoid pushing invalid root bounds
to active workers.

Comment on lines +33 to +43
i_t num_cols_to_check = reduced_costs.size(); // Reduced costs will be smaller than the original
// problem because we have added slacks for cuts

bounds_changed.assign(lower_bounds.size(), false);

for (i_t j = 0; j < num_cols_to_check; j++) {
if (std::isfinite(reduced_costs[j]) && std::abs(reduced_costs[j]) > threshold) {
const f_t lower_j = lower_bounds[j];
const f_t upper_j = upper_bounds[j];
const bool is_integer =
var_types[j] == variable_type_t::INTEGER || var_types[j] == variable_type_t::BINARY;

⚠️ Potential issue | 🟡 Minor

Add precondition checks for vector size consistency before indexed access.

Line 38 iterates by reduced_costs.size() but directly indexes var_types, lower_bounds, and upper_bounds. A mismatched caller can trigger out-of-bounds reads/writes.

🔧 Proposed fix
   i_t num_cols_to_check = reduced_costs.size();  // Reduced costs will be smaller than the original
                                                  // problem because we have added slacks for cuts
+
+  cuopt_assert(var_types.size() >= static_cast<size_t>(num_cols_to_check),
+               "var_types must be at least reduced_costs.size()");
+  cuopt_assert(lower_bounds.size() >= static_cast<size_t>(num_cols_to_check),
+               "lower_bounds must be at least reduced_costs.size()");
+  cuopt_assert(upper_bounds.size() >= static_cast<size_t>(num_cols_to_check),
+               "upper_bounds must be at least reduced_costs.size()");
+  cuopt_assert(lower_bounds.size() == upper_bounds.size(),
+               "lower_bounds and upper_bounds size mismatch");

As per coding guidelines, "Ensure variables and constraints are accessed from the correct problem context ... verify index mapping consistency across problem transformations."

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  i_t num_cols_to_check = reduced_costs.size();  // Reduced costs will be smaller than the original
-                                                 // problem because we have added slacks for cuts
-
-  bounds_changed.assign(lower_bounds.size(), false);
-
-  for (i_t j = 0; j < num_cols_to_check; j++) {
-    if (std::isfinite(reduced_costs[j]) && std::abs(reduced_costs[j]) > threshold) {
-      const f_t lower_j = lower_bounds[j];
-      const f_t upper_j = upper_bounds[j];
-      const bool is_integer =
-        var_types[j] == variable_type_t::INTEGER || var_types[j] == variable_type_t::BINARY;
+  i_t num_cols_to_check = reduced_costs.size();  // Reduced costs will be smaller than the original
+                                                 // problem because we have added slacks for cuts
+
+  cuopt_assert(var_types.size() >= static_cast<size_t>(num_cols_to_check),
+               "var_types must be at least reduced_costs.size()");
+  cuopt_assert(lower_bounds.size() >= static_cast<size_t>(num_cols_to_check),
+               "lower_bounds must be at least reduced_costs.size()");
+  cuopt_assert(upper_bounds.size() >= static_cast<size_t>(num_cols_to_check),
+               "upper_bounds must be at least reduced_costs.size()");
+  cuopt_assert(lower_bounds.size() == upper_bounds.size(),
+               "lower_bounds and upper_bounds size mismatch");
+
+  bounds_changed.assign(lower_bounds.size(), false);
+
+  for (i_t j = 0; j < num_cols_to_check; j++) {
+    if (std::isfinite(reduced_costs[j]) && std::abs(reduced_costs[j]) > threshold) {
+      const f_t lower_j = lower_bounds[j];
+      const f_t upper_j = upper_bounds[j];
+      const bool is_integer =
+        var_types[j] == variable_type_t::INTEGER || var_types[j] == variable_type_t::BINARY;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cpp/src/branch_and_bound/reduced_cost_fixing.hpp` around lines 33 - 43, The
loop uses reduced_costs.size() to drive indexing but directly accesses
var_types, lower_bounds, upper_bounds and may read out-of-bounds; add
precondition checks before the loop to verify reduced_costs.size() <=
var_types.size(), lower_bounds.size(), upper_bounds.size() (and that
bounds_changed was sized appropriately), and if not, either throw/log a clear
error or clamp num_cols_to_check = std::min({reduced_costs.size(),
var_types.size(), lower_bounds.size(), upper_bounds.size()}); then use that
num_cols_to_check in the for loop (symbols: reduced_costs, num_cols_to_check,
var_types, lower_bounds, upper_bounds, bounds_changed, threshold,
variable_type_t) so all indexed accesses are safe.

@chris-maes chris-maes modified the milestones: 26.04, 26.06 Apr 2, 2026

Labels

improvement (Improves an existing functionality) · mip · non-breaking (Introduces a non-breaking change) · P2

Projects

None yet

Participants: 3