@@ -203,6 +203,6 @@ For more complete examples, read these files:
| Basic MILP (curl) | `docs/cuopt/source/cuopt-server/examples/milp/examples/basic_milp_example.sh` | MILP shell script |
| Incumbent Callback | `docs/cuopt/source/cuopt-server/examples/milp/examples/incumbent_callback_example.py` | MIP progress tracking |
| Abort Job | `docs/cuopt/source/cuopt-server/examples/milp/examples/abort_job_example.py` | Canceling requests |
| Batch Mode | `docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh` | Multiple problems |
| Multiple LPs | `docs/cuopt/source/cuopt-server/examples/lp/examples/batch_mode_example.sh` | Multiple problems (sequential; batch deprecated) |

These examples are tested by CI (`ci/test_doc_examples.sh`) and represent canonical usage.
4 changes: 2 additions & 2 deletions ci/test_self_hosted_service.sh
@@ -56,8 +56,8 @@ export CUOPT_DATA_DIR
export CUOPT_RESULT_DIR

trap 'rm -rf "$CUOPT_DATA_DIR" "$CUOPT_RESULT_DIR"' EXIT
# cuopt_problem_data and other small problems should be less than 1k
export CUOPT_MAX_RESULT=1
# cuopt_problem_data and other small problems should be less than 2k (batch LP returns 2 results)
export CUOPT_MAX_RESULT=2
CERT_FOLDER=$(pwd)/python/cuopt_self_hosted/cuopt_sh_client/tests/utils/certs
export CUOPT_SSL_CERTFILE=${CERT_FOLDER}/server.crt
export CUOPT_SSL_KEYFILE=${CERT_FOLDER}/server.key
15 changes: 10 additions & 5 deletions docs/cuopt/source/cuopt-server/examples/lp-examples.rst
@@ -2,7 +2,7 @@
LP Python Examples
===============================

The following example showcases how to use the ``CuOptServiceSelfHostClient`` to solve a simple LP problem in normal mode and batch mode (where multiple problems are solved at once).
The following example showcases how to use the ``CuOptServiceSelfHostClient`` to solve a simple LP problem in normal mode and to submit multiple problems, which are solved sequentially (batch mode is deprecated).

The OpenAPI specification for the server is available in :doc:`open-api spec <../../open-api>`. The example data is structured as per the OpenAPI specification for the server; please refer to :doc:`LPData under "POST /cuopt/request" <../../open-api>` in the schema section. LP and MILP share the same spec.

@@ -15,10 +15,10 @@ If you want to run server locally, please run the following command in a termina
export port=5000
python -m cuopt_server.cuopt_service --ip $ip --port $port

.. _generic-example-with-normal-and-batch-mode:
.. _generic-example-with-normal-and-multiple-lps:

Genric Example With Normal Mode and Batch Mode
------------------------------------------------
Generic Example With Normal Mode and Multiple LPs (Batch Deprecated)
---------------------------------------------------------------------

:download:`basic_lp_example.py <lp/examples/basic_lp_example.py>`

@@ -402,7 +402,7 @@ In case the user needs to update solver settings through CLI, the option ``-ss``
export port=5000
cuopt_sh data.json -t LP -i $ip -p $port -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'

In the case of batch mode, you can send a bunch of ``mps`` files at once, and acquire results. The batch mode works only for ``mps`` in the case of CLI:
To solve multiple ``mps`` files, you can send them together; they will be
solved sequentially. This multi-file mode is available only for ``mps`` files
when using the CLI.

.. note::
LP batch mode is deprecated. Multiple problems are now solved sequentially.

.. note::
Batch mode is not available for MILP problems.
@@ -1,13 +1,14 @@
#!/bin/bash
# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-FileCopyrightText: Copyright (c) 2025-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# LP Batch Mode CLI Example
# Sequential LP Solve CLI Example
#
# This example demonstrates how to solve multiple LP problems in batch mode
# using MPS files with the cuopt_sh CLI tool.
# This example demonstrates how to solve multiple LP problems using MPS files
# with the cuopt_sh CLI tool. Multiple problems are solved sequentially.
#
# Note: Batch mode works only with MPS files in CLI and is not available for MILP.
# Note: LP batch mode is deprecated. Multiple problems are now solved
# sequentially rather than in parallel.
#
# Requirements:
# - cuOpt server running on localhost:5000
@@ -45,11 +46,11 @@ RHS
ENDATA
EOF

echo "=== Solving Multiple MPS Files in Batch Mode ==="
echo "=== Solving Multiple MPS Files (Sequential LP Solve) ==="
# Submit multiple MPS files at once
# -t LP: Problem type
# -ss: Solver settings (JSON format)
cuopt_sh "$mps_file" "$mps_file" "$mps_file" -t LP -i $ip -p $port -ss '{"tolerances": {"optimality": 0.0001}, "time_limit": 5}'

echo ""
echo "Note: Batch mode is only available for LP with MPS files, not for MILP."
echo "Note: Multiple LPs are solved sequentially."
7 changes: 5 additions & 2 deletions docs/cuopt/source/faq.rst
@@ -329,7 +329,9 @@ Linear Programming FAQs

.. dropdown:: How small and how many problems can I give when using the batch mode?

The batch mode allows solving many LPs in parallel to try to fully utilize the GPU when LP problems are too small. Using H100 SXM, the problem should be of at least 1K elements, and giving more than 100 LPs will usually not increase performance.
LP batch mode is deprecated. Multiple problems are now solved sequentially.
If you need parallelism, wrap individual ``Solve`` calls in your own
executor (e.g. ``concurrent.futures``).

.. dropdown:: Can the solver run on dense problems?

@@ -349,7 +351,8 @@
- Hardware: If using self-hosted, you should use a recent server-grade GPU. We recommend H100 SXM (not the PCIE version).
- Tolerance: The set tolerance usually has a massive impact on performance. Try the lowest possible value using ``set_optimality_tolerance`` until you have reached your lowest possible acceptable accuracy.
- PDLP Solver mode: PDLP solver mode will change the way PDLP internally optimizes the problem. The mode choice can drastically impact how fast a specific problem will be solved. You should test the different modes to see which one fits your problem best.
- Batch mode: In case you know upfront that you need to solve multiple LP problems, instead of solving them sequentially, you should use the batch mode which can solve multiple LPs in parallel.
- Multiple LPs: LP batch mode is deprecated. Solve multiple problems with
sequential ``Solve`` calls, or implement your own parallelism.
- Presolve: Presolve can reduce problem size and improve solve time.
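The "sequential ``Solve`` calls, or your own parallelism" advice above can be sketched as a pattern. Note this is a minimal sketch: ``solve_one`` is a hypothetical stand-in for ``solver.Solve(data_model, solver_settings)``, and whether threads actually speed things up depends on how much work the real solver does outside the Python GIL.

```python
# Pattern sketch for replacing deprecated BatchSolve with user-managed
# scheduling. solve_one is a hypothetical stand-in for
# solver.Solve(data_model, solver_settings); it does no real solving.
from concurrent.futures import ThreadPoolExecutor


def solve_one(problem):
    # Placeholder for a real LP solve; returns one result per problem.
    return {"problem": problem, "status": "Optimal"}


problems = ["p1.mps", "p2.mps", "p3.mps"]

# Baseline: sequential Solve calls, as the deprecation notice recommends.
sequential = [solve_one(p) for p in problems]

# Optional: user-managed parallelism over the same calls.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(solve_one, problems))

# pool.map preserves input order, so both approaches line up one-to-one.
assert parallel == sequential
```

Whether the threaded variant pays off for real solves should be measured on your own workload.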

.. dropdown:: What solver mode should I choose?
4 changes: 2 additions & 2 deletions docs/cuopt/source/lp-qp-features.rst
@@ -131,7 +131,7 @@ Logging Callback in the Service

In the cuOpt service API, the ``log_file`` value in ``solver_configs`` is ignored.

If however you set the ``solver_logs`` flag on the ``/cuopt/request`` REST API call, users can fetch the log file content from the webserver at ``/cuopt/logs/{id}``. Using the logging callback feature through the cuOpt client is shown in :ref:`Examples <generic-example-with-normal-and-batch-mode>` on the self-hosted page.
If however you set the ``solver_logs`` flag on the ``/cuopt/request`` REST API call, users can fetch the log file content from the webserver at ``/cuopt/logs/{id}``. Using the logging callback feature through the cuOpt client is shown in :ref:`Examples <generic-example-with-normal-and-multiple-lps>` on the self-hosted page.


Infeasibility Detection
@@ -155,7 +155,7 @@ The user may specify a time limit to the solver. By default the solver runs unti
Batch Mode
----------

Users can submit a set of problems which will be solved in a batch. Problems will be solved at the same time in parallel to fully utilize the GPU. Checkout :ref:`self-hosted client <generic-example-with-normal-and-batch-mode>` example in thin client.
Users can submit a set of problems in one request. The problems are now solved sequentially (batch mode is deprecated). Check out the :ref:`self-hosted client <generic-example-with-normal-and-multiple-lps>` example in the thin client.

Multi-GPU Mode
--------------
17 changes: 16 additions & 1 deletion python/cuopt/cuopt/linear_programming/solver/solver.py
@@ -1,8 +1,9 @@
# SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-FileCopyrightText: Copyright (c) 2023-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0

import os
import time
import warnings

from cuopt.linear_programming.solver import solver_wrapper
from cuopt.linear_programming.solver_settings import SolverSettings
@@ -111,6 +112,13 @@ def BatchSolve(data_model_list, solver_settings=None):
Solve the list of Linear Programs passed as input and returns the solutions
and total solve time.

.. deprecated::
LP BatchSolve is deprecated and will be removed in a future release.
It runs concurrent LPs in multiple C++ threads, which can be done
independently in user code. Use sequential :func:`Solve` calls instead,
e.g. ``[Solve(dm, solver_settings) for dm in data_model_list]``, or
implement your own parallelism (e.g. ``concurrent.futures``).

Data Model objects can be constructed through setters
(see linear_programming.DataModel class) or through an MPS file
(see cuopt_mps_parser.ParseMps function)
@@ -179,6 +187,13 @@ def BatchSolve(data_model_list, solver_settings=None):
>>> # Print the value of one specific variable
>>> print(solution.get_vars()["var_name"])
"""
warnings.warn(
"LP BatchSolve is deprecated and will be removed in a future release. "
"Use sequential Solve() calls or implement your own parallelism "
"(e.g. concurrent.futures).",
DeprecationWarning,
stacklevel=2,
)
if solver_settings is None:
solver_settings = SolverSettings()

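The ``warnings.warn(..., DeprecationWarning, stacklevel=2)`` call added above follows the standard Python deprecation pattern. A hedged sketch of how callers interact with such a warning — ``batch_solve`` here is a stand-in that warns the same way, not the real API:

```python
# Sketch of the deprecation pattern used above. batch_solve is a stand-in
# that emits the same style of warning the real BatchSolve now does.
import warnings


def batch_solve(data_model_list):
    warnings.warn(
        "LP BatchSolve is deprecated and will be removed in a future release.",
        DeprecationWarning,
        stacklevel=2,  # attributes the warning to the caller's line, not here
    )
    return [None for _ in data_model_list]


# DeprecationWarning is hidden by default in most contexts, so capture it
# explicitly when you want to verify or log it.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    batch_solve([1, 2])

assert any(issubclass(w.category, DeprecationWarning) for w in caught)

# During a migration window, callers can silence it instead.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", DeprecationWarning)
    batch_solve([1, 2])
```

``stacklevel=2`` is what makes the warning point at the deprecated call site in user code, which is the behavior the diff opts into.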
16 changes: 13 additions & 3 deletions python/cuopt/cuopt/tests/linear_programming/test_lp_solver.py
@@ -477,11 +477,11 @@ def test_parser_and_batch_solver():
settings.set_parameter(CUOPT_METHOD, SolverMethod.PDLP)
settings.set_optimality_tolerance(1e-4)

# Call BatchSolve
# Call BatchSolve (deprecated; use sequential Solve instead)
batch_solution, solve_time = solver.BatchSolve(data_model_list, settings)

# Call Solve on each individual data model object
individual_solutions = [] * nb_solves
individual_solutions = []
for i in range(nb_solves):
individual_solution = solver.Solve(
cuopt_mps_parser.ParseMps(file_path), settings
@@ -494,6 +494,16 @@
batch_solution[i].get_termination_status()
== individual_solutions[i].get_termination_status()
)
assert batch_solution[i].get_primal_objective() == pytest.approx(
individual_solutions[i].get_primal_objective(), rel=1e-6, abs=1e-8
)
assert np.array(
batch_solution[i].get_primal_solution()
) == pytest.approx(
np.array(individual_solutions[i].get_primal_solution()),
rel=1e-5,
abs=1e-7,
)
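The new assertions lean on ``pytest.approx`` with both ``rel`` and ``abs``. Its documented rule is that two values match when their difference is within ``max(rel * abs(expected), abs)``; a stdlib-only sketch of that rule, for intuition about why both tolerances appear:

```python
# Stdlib sketch of the tolerance rule behind pytest.approx(x, rel=..., abs=...):
# values match when |actual - expected| <= max(rel * |expected|, abs_tol).
def approx_equal(actual, expected, rel=1e-6, abs_tol=1e-8):
    return abs(actual - expected) <= max(rel * abs(expected), abs_tol)


# Large objective values: the relative tolerance dominates.
assert approx_equal(1000.0001, 1000.0)

# Values near zero: the absolute tolerance dominates, since rel * 0 would
# be an impossibly tight bound.
assert approx_equal(5e-9, 0.0)

# Element-wise over a solution vector, mirroring the primal-solution check.
assert all(approx_equal(a, e) for a, e in zip([1.0, 2.0], [1.0 + 1e-7, 2.0]))
```

This is why the primal-objective and primal-solution comparisons each pass both tolerances: objectives can be large while individual solution entries may sit at or near zero.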


def test_warm_start():
@@ -570,7 +580,7 @@ def test_batch_solver_warm_start():

settings.set_pdlp_warm_start_data(solution.get_pdlp_warm_start_data())

# Should raise an exception
# Should raise an exception (BatchSolve does not support warmstart)
with pytest.raises(Exception):
solver.BatchSolve(data_model_list, settings)

2 changes: 1 addition & 1 deletion python/cuopt_self_hosted/README.md
@@ -30,7 +30,7 @@ Check the help with 'cuopt_sh -h' for more detailed information.

data: cuOpt problem data file or a request id to repoll. If the -f option is used, this indicates the path of a file accessible to the server.
-id: space separated list of reqIds to use as initial solutions for VRP problems. The list is terminated by the next option flag or the end of line.
-wid: reqId of a solution to use as a warmstart for a single LP problem. Not enabled for batch LP problems.
-wid: reqId of a solution to use as a warmstart for a single LP problem. Not enabled when multiple LP problems are passed.
-ca: caches a problem on the server so that it may be run multiple times by reqId. Problem is not solved, only cached.
-f: Indicates that the DATA argument is the relative path of a cuOpt data file under the server's data directory.
-d: Deletes a cached problem or aborts a running or queued solution.
@@ -721,18 +721,15 @@ def get_LP_solve(
Parameters
----------
cuopt_data_models :
Note - Batch mode is only supported in LP and not in MILP

File path to mps or json/dict/DataModel returned by
cuopt_mps_parser/list[mps file paths]/list[dict]/list[DataModel].

For a single problem, the input should be a path to an mps/json file,
a DataModel returned by cuopt_mps_parser, or a dictionary.

For batch problem, input should be either a list of paths to mps
files/ a list of DataModel returned by cuopt_mps_parser/ a
list of dictionaries.
For multiple problems, a list of paths/dicts/DataModels may be
passed; they are solved sequentially (LP batch mode is deprecated).

To use a cached cuopt problem data, input should be a uuid
identifying the reqId of the cached data.
17 changes: 12 additions & 5 deletions python/cuopt_self_hosted/cuopt_sh_client/cuopt_sh.py
@@ -1,6 +1,6 @@
#! /usr/bin/python3

# SPDX-FileCopyrightText: Copyright (c) 2023-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-FileCopyrightText: Copyright (c) 2023-2026, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0


@@ -231,6 +231,14 @@ def read_input_data(i_file):
elif args.type == "LP":
if args.init_ids:
raise Exception("Initial ids are not supported for LP")
if (
isinstance(cuopt_problem_data, list)
and len(cuopt_problem_data) > 1
and args.warmstart_id
):
raise Exception(
"Warmstart id is only supported for a single LP problem"
)

def log_callback(name):
def print_log(log):
@@ -351,9 +359,8 @@ def main():
" "
"For LP: "
"A single problem file in mps/json format or file_name."
"Batch mode is supported in case of mps files only for LP and"
"not for MILP, where a list of mps"
"files can be shared to be solved in parallel.",
"Multiple mps files may be passed for LP; they are solved "
"sequentially (batch mode is deprecated).",
)
parser.add_argument(
"-id",
@@ -373,7 +380,7 @@
default=None,
help="reqId of a solution to use as a warmstart data for a "
"single LP problem. This allows to restart PDLP with a "
"previous solution context. Not enabled for Batch LP problem",
"previous solution context. Not enabled when multiple LP problems are passed.",
)
parser.add_argument(
"-ca",
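The argument check added above can be read as a small pure function. ``validate_lp_args`` below is a hypothetical helper for illustration only; the real check lives inline in ``cuopt_sh.py``.

```python
# Sketch of the new CLI guard: warmstart (-wid) is only valid when exactly
# one LP problem is submitted, and initial ids are never valid for LP.
def validate_lp_args(problem_data, warmstart_id=None, init_ids=None):
    if init_ids:
        raise ValueError("Initial ids are not supported for LP")
    if isinstance(problem_data, list) and len(problem_data) > 1 and warmstart_id:
        raise ValueError("Warmstart id is only supported for a single LP problem")


# Single problem with warmstart: allowed.
validate_lp_args("model.mps", warmstart_id="req-123")

# Multiple problems with warmstart: rejected.
try:
    validate_lp_args(["a.mps", "b.mps"], warmstart_id="req-123")
    raised = False
except ValueError:
    raised = True
assert raised
```

A single-element list with a warmstart id still passes the guard, matching the ``len(cuopt_problem_data) > 1`` condition in the diff.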
@@ -337,20 +337,29 @@ def create_solution(sol):
sol = None
total_solve_time = None
if type(LP_data) is list:
if len(LP_data) == 0:
raise HTTPException(
status_code=400, detail="LP_data list cannot be empty"
)
is_batch = True
data_model_list = []
warnings = []
warnings = [
"LP batch mode is deprecated. Multiple problems are now solved "
"sequentially. Implement your own parallelism if needed."
]
sol = []
total_solve_time = 0.0
for i_data in LP_data:
i_warnings, data_model = create_data_model(i_data)
data_model_list.append(data_model)
warnings.extend(i_warnings)
cswarnings, solver_settings = create_solver(
LP_data[0], warmstart_data
)
warnings.extend(cswarnings)
sol, total_solve_time = linear_programming.BatchSolve(
data_model_list, solver_settings
)
cswarnings, solver_settings = create_solver(
i_data, warmstart_data
)
warnings.extend(cswarnings)
i_sol = linear_programming.Solve(
data_model, solver_settings=solver_settings
)
total_solve_time += i_sol.get_solve_time()
sol.append(i_sol)
else:
warnings, data_model = create_data_model(LP_data)
cswarnings, solver_settings = create_solver(
@@ -382,7 +391,7 @@ def create_solution(sol):
if i_sol.get_error_status() != ErrorStatus.Success:
res.append(
{
"status": i_sol.get_error_status(),
"status": i_sol.get_error_status().name,
"solution": i_sol.get_error_message(),
}
)
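The switch from ``i_sol.get_error_status()`` to ``.get_error_status().name`` matters for the JSON response: a plain ``Enum`` member is not JSON-serializable, while ``.name`` is an ordinary string. A sketch with a stand-in enum — whether the real ``ErrorStatus`` mixes in ``int`` is not shown in this diff, but ``.name`` yields a stable, readable string either way:

```python
# Sketch: why the error payload now uses .name. ErrorStatus here is a
# stand-in for the solver's enum, not the real class.
import enum
import json


class ErrorStatus(enum.Enum):
    Success = 0
    ValidationError = 1


status = ErrorStatus.ValidationError

# json.dumps on the raw member raises TypeError...
try:
    json.dumps({"status": status})
    serializable = True
except TypeError:
    serializable = False
assert not serializable

# ...while .name serializes cleanly as a human-readable string.
assert json.dumps({"status": status.name}) == '{"status": "ValidationError"}'
```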
2 changes: 1 addition & 1 deletion python/cuopt_server/cuopt_server/webserver.py
@@ -931,7 +931,7 @@ async def postrequest(
default=None,
description="If set, the warmstart data in solution identified by id "
"will be used by the solver as warmstart data for this request. "
"Enabled for single LP problem, not enabled for Batch LP",
"Enabled for single LP problem. Batch LP is deprecated.",
),
validation_only: Optional[bool] = Query(
default=False,