
Conversation

@SteveBronder
Collaborator

Summary

This PR makes the following changes for the laplace approximation:

  1. Adds a wolfe line search to the Newton solver used in the laplace approximation to improve convergence.
  • The example code provided in Laplace Bug when passing Eigen::Map in tuple of functor arguments #3205 fails on develop. The issue was that the initial value of 0 for theta started the model in the tail of the distribution, and the quick line search we did, which only tested half of a Newton step, was not robust enough for this model to reach convergence. The full Wolfe line search added here handles such cases.
    The graphic below shows the difference in estimates of the log likelihood for laplace relative to integrate_1d on the roach test data, plotted against the mu and sigma estimates. There is still a bias relative to integrate_1d as mu becomes negative and sigma becomes larger, but it is much nicer than before.
image
  • The main loop for laplace_marginal_density_est is expensive because it requires calculating either a diagonal Hessian or a block-diagonal Hessian with 2nd-order autodiff. The Wolfe line search only requires the gradients of the likelihood with respect to theta, so with that in mind it tries pretty aggressively to get the best step size. If our initial step size is successful, we keep doubling it until we hit a step size where the strong Wolfe conditions fail, then return the information for the step right before that failure. If our initial step size does not satisfy strong Wolfe, we do a bracketed zoom with cubic interpolation until we find a step size that satisfies the strong Wolfe conditions.
    Tests for the wolfe line search are added to test/unit/math/laplace/wolfe_line_search.hpp.
  1. Fixes bugs in the laplace approximation
  • Fix iteration mismatch between values when line search succeeds
    In the last iteration of the laplace approximation we were returning the negative block-diagonal Hessian and derived matrices from the previous search. This is fine if the line search in that last step failed, but if the line search succeeds then we need to go back and recalculate the negative block-diagonal Hessian and its derived quantities.
  • Breakup diagonal and block hessian functions
    Previously we had one block_hessian function that calculated either the block Hessian or the diagonal Hessian at runtime. But this function is only used in places where we know at compile time whether we want a block or diagonal Hessian, so I split it into two functions to avoid unnecessary runtime branching.
  • barzilai_borwein_step_size
    Before each line search we use the Barzilai-Borwein method to get an initial step size estimate.
  • Adjoints of ll args only calculated once
    Previously we calculated them eagerly in each laplace iteration. But they are not needed within the inner loop, so we wait until the inner search finishes and then calculate their adjoints once afterwards.
  • Calculate covariance once at the start and reuse throughout.
    We were calculating the covariance matrix from inside of laplace_density_est, but this required us to then return it from that function and imo looked weird. So I pulled it out and now laplace_marginal_density_est is passed the covariance matrix.
  1. Fixes numerical stability in laplace distributions
    There were a few places where we could use log_sum_exp etc. so I made those changes.
  2. Fixes "bug" in finite difference step size calculation
  • Changed from cube root of epsilon to epsilon^(1/7) for 6th order
    The finite difference method in Stan was previously using a step size optimized for a 2nd-order method, but the code is a 6th-order method. I modified finite_diff_stepsize to use epsilon^(1/7) instead of cbrt(epsilon). With this change all of the laplace tests pass with a much tighter precision tolerance.

Tests

All the AD tests now have a tighter tolerance for the laplace approximation.
There are also tests for the wolfe line search in test/unit/math/laplace/wolfe_line_search.hpp.

./runTests.py test/unit/math/laplace

Release notes

Improve the Laplace approximation with a Wolfe line search and bug fixes.

Checklist

  • Copyright holder: Steve Bronder

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass, (make test-headers)
    • dependencies checks pass, (make test-math-dependencies)
    • docs build, (make doxygen)
    • code passes the built in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@SteveBronder SteveBronder changed the title Fix/wolfe zoom1 Add Wolfe line search to Laplace approximation Oct 24, 2025
@charlesm93 charlesm93 self-assigned this Oct 28, 2025
@charlesm93
Contributor

charlesm93 commented Oct 28, 2025

I'll have a stab at reviewing the code but from the description this looks like an amazing PR! A few initial questions:

  • The Barzilai Borwein method is used only if line_search = TRUE, right?
  • Before this PR, was the covariance matrix being computed over and over?
  • What's the finite difference step size calculation? Is this for unit tests?
  • Do we have some performance tests for other problems to check how runtimes are affected by these changes?

@SteveBronder
Collaborator Author

SteveBronder commented Oct 28, 2025

The Barzilai Borwein method is used only if line_search = TRUE, right?

So currently line search is always on. If we have line search off, should we always just take one full Newton step each iteration? My thought process was that we should always have it on, since it only requires calculating the gradient with respect to theta. Sometimes we can actually get away with taking 2x a Newton step. And for some functions, like Aki's example, we need a very small step size at first but can then go back to 1x steps after.

Before this PR, was the covariance matrix being computed over and over?

No, before it was just computed inside laplace_marginal_density_est, which is pretty low level. I think it looks nicer to compute it before we go into laplace_marginal_density_est.

What's the finite difference step size calculation? Is this for unit tests?

That was a change for the unit tests. Though it was breaking other tests, so I'm going to revert it. I think that needs to be looked at in another PR.

Do we have some performance tests for other problems to check how runtimes are affected by these changes?

No, we do not. If I have time I'd like to add some logging and compare the previous and current implementations in terms of time spent. I have mostly just been looking at the unit tests and ballparking whether they seem to go slower or faster when I run them. But yes, actual performance tests would be nice.

…y values have small values that lose accuracy with finite diff
@SteveBronder
Collaborator Author

@charlesm93 another Q. If we fail to converge after N iterations should we throw a hard error or return back what we have so far with a warning? I'm thinking about how for wolfe we just return with a warning instead of throwing away the results.

@charlesm93
Copy link
Contributor

So currently line search is always on. If we have line search off should we always just be taking one full newton step each iteration?

Yes, with no linesearch you take a full Newton step. As to whether linesearch should always be on, it depends on how it affects performance in cases where no linesearch is needed. Most of the unit tests we have don't require it, and so in those cases we don't want to increase runtime. So I'm in favor of keeping it optional, unless the increase in runtime is marginal.

I guess that speaks to my last point about having performance tests.

If we fail to converge after N iterations should we throw a hard error or return back what we have so far with a warning?

We should reject the current metropolis proposal.

Contributor

@charlesm93 charlesm93 left a comment


I made some comments. I'll need some help clarifying some of the code. We should find a time to go over the different strategies of the Wolfe linesearch and also the unit tests for the Wolfe linesearch.

(I didn't review the finite diff changes.)

}

/**
* Computes theta gradient `f` wrt `theta` and `args...`
Contributor


If the gradients are not only with respect to theta (the latent Gaussian variables), we might want to change the name theta gradients here to simply gradients of the log likelihood.

Collaborator Author


I'm going to add a note to clarify this. If a user passes in args that are reverse mode autodiff types then this would compute the gradients wrt both theta and args. But if the user passes in args that are not autodiff then we still compute the gradient wrt theta.

Collaborator Author


The main point of the function is to compute theta's gradients, which is why I make the name theta_grad. The args having their gradients calculated are more of a side effect.

}

/**
* Computes likelihood argument gradient of `f`
Contributor


so is this only differentiating wrt args but not theta?

Collaborator Author


Yes

}

/**
* Computes negative block diagonal Hessian of `f` wrt`theta` and `args...`
Contributor


Change the description slightly to distinguish from previous function and indicate that one handles the diagonal case and one doesn't.

/* iterations end when difference in objective function is less than tolerance
/**
* iterations end when difference in objective function is less than tolerance
* Default is sqrt(machine_epsilon)
Contributor


That's very interesting... Did you find that reducing the tolerance (from 1e-6 to sqrt(machine_epsilon)) and increasing max step size improves performance on difficult problems (i.e. the roach problem)?

Collaborator Author


I moved this down because some of the tests were failing when doing the finite difference checks comparing the gradients. Also boost's integrators use sqrt machine epsilon as the default tolerance and it seemed like a reasonable number.

* negative steps; descent is enforced by the line search.
* @warning The vectors must have identical size. Non-finite inputs yield the
* safe fallback.
*/
Contributor


Can you provide a reference? I'd like to discuss this method---this is not something I'm familiar with and I want to make sure I understand the details.

Collaborator Author


Yes, happy to chat about it. I found it via Google and a bit of ChatGPT, so I can't say I'm an expert. It's a heuristic for finding a decent starting step size for the next line search.

Contributor


ok. We should have a reference that we can point other developers to, maybe even a short doc we write ourselves. This doesn't seem like an overly complicated procedure and writing it up should be straightforward and help us review the implementation of the procedure.

*
* The line search delegates *all* numerical work to a user-supplied callable
* `update_fun` that evaluates the objective and its directional derivative at a
* trial step and prepares the candidate state for acceptance.
Contributor


I'm a bit confused here. So the Stan user needs to provide UpdateFun or is all this still happening internally?

Collaborator Author


This is all happening internally. We don't expose the wolfe line search to the user.

} // namespace internal

} // namespace stan::math
#endif
Contributor


It seems like there is a lot happening in this function. It would be worth having a design doc that outlines the steps and different scenarios. It's also unclear to me what the user will need (or have the option) to supply.

Collaborator Author


Nothing here will touch user space. This is only used within laplace

solver,
tolerance,
max_num_steps,
laplace_line_search_options{max_steps_line_search},
Contributor


If I read this correctly, the user interface has not changed? So the user has no control over the Wolfe linesearch and this is all handled internally?

Collaborator Author


The user has no control over the line search. I think adding all of those parameters would make the signature look crazy, and imo users don't need to worry about it as long as we have decent default settings.

expect_near_rel(msg, ll_laplace_val, std::log(piece), rel_tol,
"laplace_val", "integrated_val");
} catch (const std::domain_error& e) {
// NOTE: Failures for integration our fine since we are testing laplace.
Contributor


"our" --> "are"

}

// Tests stay focused on the strong-Wolfe variant because the API does not yet
// expose a weak-Wolfe configuration.
Contributor


Ok... I lost track of the difference between the strong-Wolfe and weak-Wolfe configurations. Can you give me a reminder?

Collaborator Author


See page 7 from here

image

You have to flip the signs since we are doing ascent instead of descent. Both variants use the same Armijo condition (i), but the curvature check differs. The absolute values in the strong Wolfe condition stop you from overshooting, whereas weak Wolfe only checks that the slope (for descent) is less steep than at the start, which means overshooting is allowed under weak Wolfe.

If we used weak Wolfe we would sometimes overstep by a lot and have to backtrack in the opposite direction, causing jumping around. In my head it feels like that would be more work / iterations, though I did not test that.
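For reference, in the standard descent formulation (flip the inequalities for the ascent case as noted above), with search direction $p_k$, step size $\alpha$, and constants $0 < c_1 < c_2 < 1$:

```latex
% Armijo / sufficient decrease condition, shared by weak and strong Wolfe:
f(x_k + \alpha p_k) \le f(x_k) + c_1 \alpha \, \nabla f(x_k)^\top p_k
% Weak Wolfe curvature condition (overshooting past the minimizer allowed):
\nabla f(x_k + \alpha p_k)^\top p_k \ge c_2 \, \nabla f(x_k)^\top p_k
% Strong Wolfe curvature condition (absolute values prevent overshooting):
\bigl| \nabla f(x_k + \alpha p_k)^\top p_k \bigr|
  \le c_2 \bigl| \nabla f(x_k)^\top p_k \bigr|
```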

@stan-buildbot
Contributor


| Name | Old Result | New Result | Ratio | Performance change (1 - new / old) |
|------|-----------:|-----------:|------:|------------------------------------|
| arma/arma.stan | 0.29 | 0.28 | 1.05 | 5.1% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.06 | 5.26% faster |
| gp_regr/gen_gp_data.stan | 0.02 | 0.02 | 1.06 | 5.53% faster |
| gp_regr/gp_regr.stan | 0.09 | 0.09 | 1.03 | 3.2% faster |
| sir/sir.stan | 69.4 | 65.99 | 1.05 | 4.91% faster |
| irt_2pl/irt_2pl.stan | 4.16 | 3.96 | 1.05 | 4.97% faster |
| eight_schools/eight_schools.stan | 0.06 | 0.05 | 1.06 | 5.47% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.25 | 0.24 | 1.05 | 4.65% faster |
| pkpd/one_comp_mm_elim_abs.stan | 19.57 | 18.65 | 1.05 | 4.7% faster |
| garch/garch.stan | 0.41 | 0.4 | 1.02 | 1.75% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.66 | 2.62 | 1.01 | 1.39% faster |
| arK/arK.stan | 1.74 | 1.72 | 1.01 | 1.44% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.94 | 2.74 | 1.07 | 6.76% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.82 | 8.36 | 1.06 | 5.29% faster |
| performance.compilation | 178.94 | 178.61 | 1.0 | 0.19% faster |
Mean result: 1.0424922541231578

Commit hash: 6c17a9d27ed815deba4bbbed2bb3f6f637fac2b0


Machine information: Ubuntu 20.04.3 LTS (focal)


G++:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Clang:
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

@SteveBronder
Collaborator Author

I guess that speaks to my last point about having performance tests.

Yes I'll throw in a little json logger on a side branch so we can get some performance numbers over all the tests. Aki's example will fail if we turn off the line search. So I think we should leave it on and allow it to try a few iterations. But yeah we can hop on a call and discuss the line search strategy.

@SteveBronder
Collaborator Author

If we fail to converge after N iterations should we throw a hard error or return back what we have so far with a warning?

We should reject the current metropolis proposal.

What is the intuition with that? My thought process is that if someone asks for 1e-12 and the optimizer gets down to 1e-10, should we really throw out the whole result? I feel like just telling users "Hey we were only able to get down to x tolerance when you asked for y tolerance, the algorithm may not have converged." would suffice. That would also let users do more debugging imo

@charlesm93
Contributor

Aki's example will fail if we turn off the line search. So I think we should leave it on and allow it to try a few iterations.

I think it's fine to have unit tests which fail with the default control parameters, as long as we get useful error messages, and we can get the unit tests to pass with non-default control parameters. The whole point of giving users control over tuning parameters is that sometimes the defaults don't cut it.

But yeah we can hop on a call and discuss the line search strategy.

Sure thing!

What is the intuition with that?

You raise a valid point and I'm open to the idea of issuing a warning message.

The argument for rejecting the proposal is that the user decides what an acceptable tolerance is for the solver---if the solver doesn't achieve that tolerance, then we might be concerned that the marginal likelihood is poorly approximated, say because the chain wandered into a pathological region of the parameter space, and then it is better to backtrack.

Still, I like the idea of a warning message. It's then up to the user to check the quality of the inference directly, rather than relying on the quality of the numerical methods.

(Note that issuing a warning message would be inconsistent with what we've done with other numerical methods, like the newton_solver.)

@SteveBronder
Collaborator Author

I wrote this branch that has a full json logger inside of it. There is a gist here with R code for making some graphs out of the json data (it's below the failure logs). NOTE: the test TEST(WriteArrayBodySimple, ExecutesBodyWithHardcodedData) can generate a json file that is 10+ GB in size. You can comment out the laplace stuff in that section if you want to.

Aki's example will fail if we turn off the line search. So I think we should leave it on and allow it to try a few iterations.

I think it's fine to have unit tests which fail with the default control parameters, as long as we get useful error messages, and we can get the unit tests to pass with non-default control parameters. The whole point of giving users control over tuning parameters is that sometimes the defaults don't cut it.

Sorry, by fail I mean the gradients and value that we return are completely wrong for the roach data without the Wolfe line search. For example, here is output showing this when the line search is set to zero.

./test/unit/math/expect_near_rel.hpp:36: Failure
The difference between x1 and x2 is 731.29171829436268, which exceeds tol_val, where
x1 evaluates to -741.32265363802378,
x2 evaluates to -10.03093534366114, and
tol_val evaluates to 187.83839724542122.
expect_near_rel_finite in: for (i) = (261), laplace and integrated results should be close for laplace_val vs integrated_val

./test/unit/math/expect_near_rel.hpp:36: Failure
The difference between x1 and x2 is 9.2560165465474817, which exceeds tol_val, where
x1 evaluates to -13.534972738479031,
x2 evaluates to -4.2789561919315489, and
tol_val evaluates to 4.4534822326026449.
expect_near_rel_finite in: for (i) = (262), laplace and integrated results should be close for laplace_val vs integrated_val

./test/unit/math/expect_near_rel.hpp:36: Failure
The difference between x1 and x2 is 6329.2815766619124, which exceeds tol_val, where
x1 evaluates to -7204.8762038114164,
x2 evaluates to -875.59462714950428, and
tol_val evaluates to 121.20706246441381.
expect_near_rel_finite in: sum laplace vs integrated sum for laplace_sum vs integrated_sum

I'm going to think for a little bit to see if we can't find a happy medium that does a little line search.

This graph shows the number of evaluations inside of the wolfe line search for the roach data over multiple runs of the data. Each line is a separate run.

image

So we need a bunch of evaluations in the beginning, middle, and end. Maybe I can just revert to trying a full Newton step first: if one full step passes the Wolfe conditions we keep going, and otherwise we fall back to the Wolfe line search.

I tried being too cutesy with this graph, but this is the initial stepsize and final stepsize for each of the laplace iterations over all of the runs of the roach data.

image

While there is too much going on here, you can see that we need to take a teeny tiny first step, but then after that we can get away with step sizes that are pretty big!

A few other graphs and notes

This shows the amount of time spent doing the wolfe line search for each of the tests in laplace_marginal_lpdf_test.cpp

image

This graph shows the runtime for each run of the tests given we either do a full newton step or use the line search.

image

So sometimes Wolfe is not that much worse. However, in the same graph below for the motorcycle data wolfe is way way slower!

image

But a full Newton step here also fails the AD test suite by about 3e-5. So I think there is something we can do where, if a full Newton step seems hairy, we fall back to Wolfe, but otherwise just accept the full Newton step and keep going. My intuition was that the gradient for theta would be way cheaper than the diagonal Hessian calculation, so in my mind it made sense to do more Wolfe steps and get a better step relative to running the outer loop. But that seems to not always be the case, so I need to rethink the search strategy.

This is what the arrow graph above is supposed to look like and I think it is more clear what is going on relative to the one above. At iteration 0 we start with a step size of 1 and end up at a step size of 2. Then at iteration 1 we jump back down from a stepsize of 2 to a stepsize of 1.

image

I need to go into the AD testing framework and add logging for whether a test passes or fails with Newton or Wolfe (and by how much). It requires diving into the code for the general AD test suite we use, so I didn't get around to it yet.

What is the intuition with that?

You raise a valid point and I'm open to the idea of issuing a warning message.

The argument for rejecting the proposal is that the user decides what an acceptable tolerance is for the solver---if the solver doesn't achieve that tolerance, then we might be concerned that the marginal likelihood is poorly approximated, say because the chain wandered into a pathological region of the parameter space, and then it is better to backtrack.

Still, I like the idea of a warning message. It's then up to the user to check the quality of the inference directly, rather than relying on the quality of the numerical methods.

(Note that issuing a warning message would be inconsistent with what we've done with other numerical methods, like the newton_solver.)

Agree it would be nonstandard relative to our other solvers. Though for LBFGS, if we go through the line search without hitting the tolerances we still report back values. If we can craft a nice and clear message for this then I think issuing a warning would be nice.
