
Implement a minimizer for INLA #513


Draft · wants to merge 12 commits into main
Conversation

@Michal-Novomestsky (Contributor) commented Jun 10, 2025

Addresses #342.

This PR should add:

  • find_mode

Contingent on pymc-devs/pytensor#1182, as it uses pytensor.tensor.optimize.minimize to find the mode (and the Hessian at that point).
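For context, here is a minimal sketch of finding a mode symbolically with pytensor.tensor.optimize.minimize, assuming the API introduced in pymc-devs/pytensor#1182 (the toy objective and variable names are illustrative, not code from this PR):

import pytensor
import pytensor.tensor as pt
from pytensor.tensor.optimize import minimize

x = pt.scalar("x")
mu = pt.scalar("mu")
neg_logp = 0.5 * (x - mu) ** 2  # toy objective: unit-variance Gaussian negative log-density

# minimize returns a symbolic solution plus a success flag; at runtime the
# value supplied for x is used as the optimizer's initial point.
x_mode, success = minimize(neg_logp, x, method="BFGS")

fn = pytensor.function([x, mu], [x_mode, success])
mode_val, ok = fn(0.0, 3.0)  # the Gaussian's mode sits at x = mu = 3.0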

@Michal-Novomestsky (Author) commented Jun 17, 2025

There are currently a few outstanding TODOs. These are just issues with getting quality-of-life features to work with pytensor; the algorithm itself works fine. The TODOs are listed as comments in the code; use test_find_mode as a reference (you can copy-paste its contents straight into a Jupyter notebook if you want to dig into the variables and play around).

use_jac: bool = True,
use_hess: bool = False,  # TODO: We could probably just remove this arg and always pass True to the minimizer, but then it would emit a warning whenever the Hessian isn't needed for a particular optimisation routine.
optimizer_kwargs: dict | None = None,
) -> list[TensorLike]:
@ricardoV94 (Member) commented Jun 17, 2025
The signature is wrong; this function returns numpy arrays.

But why are you compiling a pytensor function and evaluating it? This seems to just be doing find_MAP? I imagine you wanted a symbolic mode, not the numerical (evaluated) one?

@Michal-Novomestsky (Author):

I felt that for the purposes of INLA we only ever needed the numerical values of the mode and Hessian. In truth, though, simply returning the compiled function probably makes this more versatile, as well as eliminating the need for x0 and args (although these will likely need to be obtained somewhere later). I'll refactor it to return the compiled function.
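As an illustration, a hypothetical sketch of the refactor described above: build the symbolic mode and Hessian, substitute the mode into the Hessian graph, and return the compiled function so the caller supplies x0. The helper name and graph-manipulation details are assumptions, not this PR's actual code:

import pytensor
import pytensor.tensor as pt
from pytensor.gradient import hessian
from pytensor.graph.replace import graph_replace
from pytensor.tensor.optimize import minimize

def find_mode_sketch(x: pt.TensorVariable, neg_logp: pt.TensorVariable, method: str = "BFGS"):
    """x: symbolic vector; neg_logp: scalar graph to minimize with respect to x."""
    x_mode, _ = minimize(neg_logp, x, method=method)
    # Build the Hessian as a graph in x, then substitute x -> x_mode so it
    # is evaluated at the mode rather than at the initial point.
    hess_at_mode = graph_replace(hessian(neg_logp, x), {x: x_mode})
    # Return the compiled function; the caller passes x0 when calling it.
    return pytensor.function([x], [x_mode, hess_at_mode])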

@Michal-Novomestsky (Author):

I've refactored it to only return the compiled function.

model: pm.Model | None = None,
method: minimize_method = "BFGS",
use_jac: bool = True,
use_hess: bool = False,  # TODO: We could probably just remove this arg and always pass True to the minimizer, but then it would emit a warning whenever the Hessian isn't needed for a particular optimisation routine.
Contributor:

I'm not really sure why these are options here. Presumably the minimization method itself knows what it needs, so it's redundant to specify use_jac or use_hess here at all.

sigma_mu = rng.random()

coords = {"city": ["A", "B", "C"], "obs_idx": np.arange(n)}
with pm.Model(coords=coords) as model:
Contributor:
I would try to make this test in pytensor directly if possible.
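For reference, a rough sketch of a pytensor-only version of such a test, building an equivalent objective from raw graphs instead of a pm.Model (the data and objective are illustrative assumptions):

import numpy as np
import pytensor
import pytensor.tensor as pt
from pytensor.tensor.optimize import minimize

rng = np.random.default_rng(42)
obs = rng.normal(loc=3.0, scale=1.5, size=100)

mu = pt.dvector("mu")  # one free parameter, kept as a length-1 vector
# Gaussian negative log-likelihood, up to an additive constant.
neg_logp = 0.5 * pt.sum((obs - mu[0]) ** 2 / 1.5**2)

mu_mode, success = minimize(neg_logp, mu, method="BFGS")
fn = pytensor.function([mu], [mu_mode, success])
mode_val, ok = fn(np.zeros(1))  # mode should land close to obs.mean()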

@Michal-Novomestsky (Author) commented:
@ricardoV94 @jessegrabowski The unit tests currently seem to be failing because the current release of pytensor doesn't include optimize yet. Would it be possible to make a point release so we can merge this?
