fix: Guard Minuit optimizer against provided strategy of None #2278
Conversation
```diff
@@ -173,27 +173,35 @@ def test_minuit_strategy_do_grad(mocker, backend):
     assert spy.spy_return.minuit.strategy == 1


-@pytest.mark.parametrize('strategy', [0, 1])
+@pytest.mark.parametrize('strategy', [0, 1, 2])
```
Adding `2` as we can and I forgot it existed.
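As a small aside, here is a standalone check (iminuit only, no pyhf involved; the quadratic cost function is just a placeholder) that `0`, `1`, and `2` are all accepted values for `iminuit.Minuit.strategy`, which is why `2` joins the parametrization above:

```python
from iminuit import Minuit


def cost(x):
    # trivial placeholder cost function, only needed to construct a Minuit instance
    return (x - 1.0) ** 2


m = Minuit(cost, x=0.0)
for strategy in (0, 1, 2):
    m.strategy = strategy  # all three values are valid iminuit strategies
    assert m.strategy == strategy
```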
```rst
strategy (:obj:`int`): See :attr:`iminuit.Minuit.strategy`.
    Default is ``None``, which results in either
    :attr:`iminuit.Minuit.strategy` ``0`` or ``1`` from the evaluation of
    ``int(not pyhf.tensorlib.default_do_grad)``.
```
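As a rough illustration of this documented fallback (assuming the default NumPy backend, which cannot provide automatic gradients; autodiff backends such as JAX have `default_do_grad` set to `True` and would give `0` instead):

```python
import pyhf

pyhf.set_backend("numpy")
# NumPy cannot supply gradients, so default_do_grad is False here
assert not pyhf.tensorlib.default_do_grad
# ... and the documented fallback therefore resolves strategy=None to 1
assert int(not pyhf.tensorlib.default_do_grad) == 1
```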
Codecov Report
Patch coverage:

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main    #2278   +/-   ##
=======================================
  Coverage   98.30%   98.30%
=======================================
  Files          69       69
  Lines        4534     4536    +2
  Branches      801      802    +1
=======================================
+ Hits         4457     4459    +2
  Misses         45       45
  Partials       32       32
```
Once this is approved I'll make a patch release.
src/pyhf/optimize/opt_minuit.py (outdated)
```python
# Guard against None from either self.strategy defaulting to None or
# passing strategy=None as options kwarg
if strategy is None:
    strategy = int(not do_grad)
```
As `iminuit.Minuit.strategy` is an integer, do this conversion in code rather than requiring a human to work out the integer representation of the inverse of the truthiness of `do_grad` during debugging or inspection.
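To make the resolution order concrete, here is a rough standalone sketch (a hypothetical `resolve_strategy` helper, not the verbatim pyhf source) of the behavior described: prefer an explicit option, then the optimizer's own default, and only then fall back based on `do_grad`:

```python
def resolve_strategy(options, default_strategy, do_grad):
    """Hypothetical helper mirroring the resolution order described in this PR."""
    strategy = options.pop("strategy", default_strategy)
    # Guard against None from either the default or an explicit strategy=None
    if strategy is None:
        strategy = int(not do_grad)
    return strategy


assert resolve_strategy({}, None, do_grad=True) == 0                # gradients -> strategy 0
assert resolve_strategy({"strategy": None}, 2, do_grad=False) == 1  # explicit None falls back
assert resolve_strategy({"strategy": 2}, None, do_grad=True) == 2   # explicit value wins
```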
If you're aiming for maximum readability, I like `strategy = 0 if do_grad else 1` best.
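For what it's worth, a quick check that the two spellings agree for boolean `do_grad`:

```python
for do_grad in (True, False):
    assert int(not do_grad) == (0 if do_grad else 1)
```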
This looks good to me, thanks! Just a tiny comment on what I believe is slightly more readable, but might be preference.
As a heads-up: there will also be a strategy=3 arriving eventually: root-project/root#13109 (more expensive and stable).
Description
Guard the Minuit optimizer strategy from `None` by first attempting to get a not-`None` result from the options kwargs and then falling back to assigning the strategy through `int(not do_grad)`. In the process this simplifies the logic required and supersedes PR #2277.

Add tests to `test_minuit_strategy_global` for `strategy=None` and `strategy=2` (`2` is a valid `iminuit.Minuit.strategy` strategy but not used often, so done just for good measure).

Add a very brief explanation to the docs of what using `strategy=None` means in terms of evaluation to `0` or `1` from `int(not pyhf.tensorlib.default_do_grad)`.

The extended example from @alexander-held in Issue #1785 now runs with this PR.
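The extended example from the issue is not reproduced here. As a separate, minimal illustration of the options this PR guards (a hypothetical two-bin model with made-up numbers; assumes pyhf installed with the `minuit` extra, and not the example from Issue #1785):

```python
import pyhf

# use the NumPy backend with the Minuit optimizer
pyhf.set_backend("numpy", "minuit")

model = pyhf.simplemodels.uncorrelated_background(
    signal=[12.0, 11.0], bkg=[50.0, 52.0], bkg_uncertainty=[3.0, 7.0]
)
data = [62, 63] + model.config.auxdata

# strategy=None is now guarded and falls back to int(not do_grad) (1 for numpy)
pyhf.infer.mle.fit(data, model, strategy=None)
# an explicit strategy, including the rarely used 2, is passed through to Minuit
pyhf.infer.mle.fit(data, model, strategy=2)
```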
Checklist Before Requesting Reviewer
Before Merging
For the PR Assignees: