Performance: Solver speed up + Parallel processing support #4881

Open
leehangyue opened this issue Feb 27, 2025 · 3 comments
@leehangyue

Description

  1. Solver speed-up (with test): shorter solve time with the fastest solver (in my case: a 2D pouch-cell DFN with IDAKLUSolver).
  2. Parallel processing support: a performant interface for generating a solution_list from an inputs_list, i.e. a combination of:
     - parameter_values_list
     - initial_condition_list (e.g. initial_soc_list)
     - operating_condition_list (experiment_list / current_function_list / other means of specifying operating conditions)

I'm expecting the newer versions to have a solver as fast as the IDAKLUSolver in pybamm==24.1, and I'm curious why the IDAKLUSolver in pybamm==25.1.1 is slower than the one in pybamm==24.1.

In my case, I'm simulating a lithium-ion battery with pybamm.lithium_ion.DFN, with the options "dimensionality": 2 and "thermal": "x-lumped" (resolving the y and z spatial coordinates). During this, I noticed that the IDAKLUSolver in pybamm==25.1.1 is slower than in pybamm==24.1.

With identical parameter values and operating conditions, I created simulations and solved them with the IDAKLUSolver and the CasadiSolver (fast mode) under pybamm versions 24.1 and 25.1.1 (2×2 = 4 cases in total).

The results are in the table below:

| Solver | pybamm 24.1 | pybamm 25.1.1 |
| --- | --- | --- |
| IDAKLU | 1.369 s | 4.964 s |
| CasadiSolver (fast) | 10.996 s | 9.691 s |

For the test code and outputs, see additional context for details.
The tests were run in virtual environments.

Motivation

I'm trying to solve multiple cases for parameter identification. Solver speed is critical to the overall time consumption.

With a faster solver and built-in parallel processing, the community could benefit from faster parameterization, among other workflows.

Possible Implementation

For a faster solver:

- Figure out why the IDAKLUSolver in pybamm==24.1 is faster than in pybamm==25.1.1 for the tested case.
- Revert / merge the relevant parts of the pybamm==24.1 code, or implement a new, fast solution if possible.

I've done a few tests for solver speed; see additional context for details.

For parallel processing:

- Create an interface that accepts a case_list (a collection of parameter_values, initial conditions, and operating conditions), converts the cases to native variables like parameter_values, and simulates them in parallel.
- The interface returns a solution_list corresponding to the case_list.
- If the geometry / discretization differs among the cases, discretize and initialize in each process / Actor; otherwise, discretize and initialize once, then change only the inputs between cases.
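To make the proposal concrete, here is a minimal sketch of such an interface using only Python's standard library. The `solve_case` body is a hypothetical placeholder: a real implementation would build a `pybamm.Simulation` from the case's parameter values, initial SOC, and operating conditions and return `sim.solve()`. A thread pool is used for brevity; a `ProcessPoolExecutor` or Ray Actors could be swapped in when each case needs its own discretization.

```python
from concurrent.futures import ThreadPoolExecutor

def solve_case(case):
    # Hypothetical placeholder "solve": a real implementation would
    # construct and solve a pybamm.Simulation from this case's
    # parameter values, initial conditions, and operating conditions.
    return {"case": case, "result": case["current"] * case["initial_soc"]}

def solve_all(case_list, max_workers=4):
    # Map each case to a solution; solution_list order matches case_list.
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(solve_case, case_list))

case_list = [{"current": c, "initial_soc": 0.5} for c in (1.0, 2.0, 3.0)]
solution_list = solve_all(case_list)
print([s["result"] for s in solution_list])  # [0.5, 1.0, 1.5]
```

The ordering guarantee of `executor.map` is what keeps `solution_list` aligned with `case_list`, which matters for parameter identification loops.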

Additional context

The test case implementation:

import numpy as np
import pybamm

model = pybamm.lithium_ion.DFN(options={
    "surface form": "differential",
    "intercalation kinetics": "asymmetric Butler-Volmer",
    "dimensionality": 2,
    "cell geometry": "pouch",
    "thermal": "x-lumped",
})
params = model.default_parameter_values
params.update({
    'Negative electrode Butler-Volmer transfer coefficient': 0.7,
    'Positive electrode Butler-Volmer transfer coefficient': 0.3,
}, check_already_exists=False)
var_pts = {
    "x_n": 5,
    "x_s": 3,
    "x_p": 5,
    "r_n": 5,
    "r_p": 5,
    "z": 5,
    "y": 3,
}
t_in = np.linspace(0, 1000, 201)
C_in = 1 + (t_in > 300) * 2. - (t_in > 600) * 4.
initial_soc = 0.7
nom_cap = params.evaluate(params["Nominal cell capacity [A.h]"])
I_in = C_in * nom_cap
params["Current function [A]"] = pybamm.Interpolant(t_in, I_in, children=pybamm.t)
print("Creating simulation 1 (IDAKLU)...")
sim1 = pybamm.Simulation(model=model, parameter_values=params, var_pts=var_pts, solver=pybamm.IDAKLUSolver())
print("Solving 1 (IDAKLU)...")
sol1 = sim1.solve(t_eval=t_in, initial_soc=initial_soc)
print(f"Solved 1 (IDAKLU) in {sol1.solve_time}.")
print("Creating simulation 2 (CASADI fast)...")
sim2 = pybamm.Simulation(model=model, parameter_values=params, var_pts=var_pts, solver=pybamm.CasadiSolver('fast'))
print("Solving 2 (CASADI fast)...")
sol2 = sim2.solve(t_eval=t_in, initial_soc=initial_soc)
print(f"Solved 2 (CASADI fast) in {sol2.solve_time}.")

The outputs of the test case:

with pybamm==24.1

Creating simulation 1 (IDAKLU)...
Solving 1 (IDAKLU)...
.../site-packages/pybamm/models/full_battery_models/lithium_ion/electrode_soh.py:560: UserWarning: Q_Li=2.0793 Ah is greater than Q_p=1.9464 Ah.
  warnings.warn(f"Q_Li={Q_Li:.4f} Ah is greater than Q_p={Q_p:.4f} Ah.")
Solved 1 (IDAKLU) in 1.369 s.
Creating simulation 2 (CASADI fast)...
Solving 2 (CASADI fast)...
Solved 2 (CASADI fast) in 10.996 s.

Process finished with exit code 0

with pybamm==25.1.1

Creating simulation 1 (IDAKLU)...
Solving 1 (IDAKLU)...
Solved 1 (IDAKLU) in 4.964 s.
Creating simulation 2 (CASADI fast)...
Solving 2 (CASADI fast)...
Solved 2 (CASADI fast) in 9.691 s.

Process finished with exit code 0
@martinjrobins
Contributor

hi @leehangyue. re 1., can you please post your code for the test case and I can investigate the slowdown you are experiencing in 25.1.1. For 2. you can pass in a list of input parameters to the solver and set the num_threads option (https://docs.pybamm.org/en/stable/source/api/solvers/idaklu_solver.html) to solve the simulations in parallel using openmp. For the case of input parameters that affect the discretisation this will not work however. There has been some recent work to address this (#4665)

@leehangyue
Author

> hi @leehangyue. re 1., can you please post your code for the test case and I can investigate the slowdown you are experiencing in 25.1.1. For 2. you can pass in a list of input parameters to the solver and set the num_threads option (https://docs.pybamm.org/en/stable/source/api/solvers/idaklu_solver.html) to solve the simulations in parallel using openmp. For the case of input parameters that affect the discretisation this will not work however. There has been some recent work to address this (#4665)

Thanks for your reply! I'll try your recommendation for 2. The code for the test case in 1 is in the Additional Context section, and I copied it below:

import numpy as np
import pybamm

model = pybamm.lithium_ion.DFN(options={
    "surface form": "differential",
    "intercalation kinetics": "asymmetric Butler-Volmer",
    "dimensionality": 2,
    "cell geometry": "pouch",
    "thermal": "x-lumped",
})
params = model.default_parameter_values
params.update({
    'Negative electrode Butler-Volmer transfer coefficient': 0.7,
    'Positive electrode Butler-Volmer transfer coefficient': 0.3,
}, check_already_exists=False)
var_pts = {
    "x_n": 5,
    "x_s": 3,
    "x_p": 5,
    "r_n": 5,
    "r_p": 5,
    "z": 5,
    "y": 3,
}
t_in = np.linspace(0, 1000, 201)
C_in = 1 + (t_in > 300) * 2. - (t_in > 600) * 4.
initial_soc = 0.7
nom_cap = params.evaluate(params["Nominal cell capacity [A.h]"])
I_in = C_in * nom_cap
params["Current function [A]"] = pybamm.Interpolant(t_in, I_in, children=pybamm.t)
print("Creating simulation 1 (IDAKLU)...")
sim1 = pybamm.Simulation(model=model, parameter_values=params, var_pts=var_pts, solver=pybamm.IDAKLUSolver())
print("Solving 1 (IDAKLU)...")
sol1 = sim1.solve(t_eval=t_in, initial_soc=initial_soc)
print(f"Solved 1 (IDAKLU) in {sol1.solve_time}.")
print("Creating simulation 2 (CASADI fast)...")
sim2 = pybamm.Simulation(model=model, parameter_values=params, var_pts=var_pts, solver=pybamm.CasadiSolver('fast'))
print("Solving 2 (CASADI fast)...")
sol2 = sim2.solve(t_eval=t_in, initial_soc=initial_soc)
print(f"Solved 2 (CASADI fast) in {sol2.solve_time}.")

@martinjrobins
Contributor

Your issue is that you are using far too many points in your interpolant. If you want to model a discontinuous step change in C_in, you can do this much more cheaply:

t_in = np.array([0, 300, 300.01, 600, 600.01, 900])

The solver needs to stop at each data point in the interpolant, so you want as few points as possible.
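As a sanity check on this suggestion, the sketch below (assuming only numpy, and keeping the 0–1000 s window from the original test case) compares the 201-point profile with a 6-breakpoint version of the same step change. Linear interpolation of the compact table reproduces the dense profile at every sample:

```python
import numpy as np

# Dense version from the issue: 201 samples of a piecewise-constant C-rate.
t_dense = np.linspace(0, 1000, 201)
c_dense = 1 + (t_dense > 300) * 2.0 - (t_dense > 600) * 4.0

# Compact version: one breakpoint pair per step change, so the solver
# only has to stop at 6 data points instead of 201.
t_sparse = np.array([0, 300, 300.01, 600, 600.01, 1000])
c_sparse = 1 + (t_sparse > 300) * 2.0 - (t_sparse > 600) * 4.0

# Linear interpolation of the sparse table recovers the dense profile
# (the 0.01 s ramps fall between the 5 s-spaced dense samples).
c_interp = np.interp(t_dense, t_sparse, c_sparse)
print(np.allclose(c_interp, c_dense))  # True
```

The 0.01 s offsets turn each discontinuity into a very steep but continuous ramp, which is what a linear interpolant needs to represent a step.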
