
BUG: pm.Beta with certain alpha=beta=X values causes invalid log errors #7678

Closed
arthur-st opened this issue Feb 7, 2025 · 4 comments

@arthur-st

Describe the issue:

I'm not 100% sure that this is a bug, I might be overlooking something on the math side. That said, I find it odd that the following example works with uniform_prior variable values of 1, 10, 1000, and 10000, but not with the value of 100.

Reproducible code example:

import pymc as pm
uniform_prior = 100
with pm.Model() as model:
    p = pm.Beta("p", alpha=uniform_prior, beta=uniform_prior, shape=2)
    y = pm.Binomial("y", n=[11618, 13382], p=p, observed=[111, 148], shape=2)
    model.debug(verbose=True)
    trace = pm.sample(draws=10000)

Error message:

/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pytensor/tensor/elemwise.py:735: RuntimeWarning: invalid value encountered in log
  variables = ufunc(*ufunc_args, **ufunc_kwargs)
point={'p_logodds__': array([nan, nan])}

The variable p has the following parameters:
0: [100] [id A] <Vector(int8, shape=(1,))>
1: [100] [id A] <Vector(int8, shape=(1,))>
The parameters evaluate to:
0: [100]
1: [100]
Some of the values of variable p are associated with a non-finite logp:
 value = nan -> logp = nan
 value = nan -> logp = nan

The variable y has the following parameters:
0: [11618 13382] [id A] <Vector(int64, shape=(2,))>
1: Sigmoid [id B] <Vector(float64, shape=(2,))> 'p'
 └─ p_logodds__ [id C] <Vector(float64, shape=(2,))>
Initializing NUTS using jitter+adapt_diag...
The parameters evaluate to:
0: [11618 13382]
1: [nan nan]
This does not respect one of the following constraints: n >= 0, 0 <= p <= 1

n >= 0, 0 <= p <= 1
Apply node that caused the error: Check{n >= 0, 0 <= p <= 1}(Composite{...}.2, All{axes=None}.0)
Toposort index: 5
Inputs types: [TensorType(float64, shape=(2,)), TensorType(bool, shape=())]
Inputs shapes: [(2,), ()]
Inputs strides: [(8,), ()]
Inputs values: [array([nan, nan]), array(False)]
Outputs clients: [[output[0](y_logprob)]]

Backtrace when the node is created (use PyTensor flag traceback__limit=N to make it longer):
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/logprob/basic.py", line 595, in transformed_conditional_logp
    temp_logp_terms = conditional_logp(
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/logprob/basic.py", line 529, in conditional_logp
    node_logprobs = _logprob(
  File "/opt/homebrew/Cellar/[email protected]/3.13.2/Frameworks/Python.framework/Versions/3.13/lib/python3.13/functools.py", line 934, in wrapper
    return dispatch(args[0].__class__)(*args, **kw)
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/distributions/distribution.py", line 140, in logp
    return class_logp(value, *dist_params)
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/distributions/discrete.py", line 147, in logp
    return check_parameters(
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/distributions/dist_math.py", line 74, in check_parameters
    return CheckParameterValue(msg, can_be_replaced_by_ninf)(expr, all_true_scalar)
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pytensor/graph/op.py", line 293, in __call__
    node = self.make_node(*inputs, **kwargs)
  File "/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pytensor/raise_op.py", line 97, in make_node
    [value.type()],

HINT: Use the PyTensor flag `exception_verbosity=high` for a debug print-out and storage map footprint of this Apply node.
/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pytensor/tensor/elemwise.py:735: RuntimeWarning: invalid value encountered in log
  variables = ufunc(*ufunc_args, **ufunc_kwargs)
/Users/arthur/<proj-dir>/.venv/lib/python3.13/site-packages/pytensor/tensor/elemwise.py:735: RuntimeWarning: invalid value encountered in log
  variables = ufunc(*ufunc_args, **ufunc_kwargs)
---------------------------------------------------------------------------
SamplingError                             Traceback (most recent call last)
Cell In[2], line 6
      4 y = pm.Binomial("y", n=[11618, 13382], p=p, observed=[111, 148], shape=2)
      5 model.debug(verbose=True)
----> 6 trace = pm.sample(draws=10000)

File ~/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/sampling/mcmc.py:832, in sample(draws, tune, chains, cores, random_seed, progressbar, progressbar_theme, step, var_names, nuts_sampler, initvals, init, jitter_max_retries, n_init, trace, discard_tuned_samples, compute_convergence_checks, keep_warning_stat, return_inferencedata, idata_kwargs, nuts_sampler_kwargs, callback, mp_ctx, blas_cores, model, compile_kwargs, **kwargs)
    830         [kwargs.setdefault(k, v) for k, v in nuts_kwargs.items()]
    831     with joined_blas_limiter():
--> 832         initial_points, step = init_nuts(
    833             init=init,
    834             chains=chains,
    835             n_init=n_init,
    836             model=model,
    837             random_seed=random_seed_list,
    838             progressbar=progress_bool,
    839             jitter_max_retries=jitter_max_retries,
    840             tune=tune,
    841             initvals=initvals,
    842             compile_kwargs=compile_kwargs,
    843             **kwargs,
    844         )
    845 else:
    846     # Get initial points
    847     ipfns = make_initial_point_fns_per_chain(
    848         model=model,
    849         overrides=initvals,
    850         jitter_rvs=set(),
    851         chains=chains,
    852     )

File ~/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/sampling/mcmc.py:1605, in init_nuts(init, chains, n_init, model, random_seed, progressbar, jitter_max_retries, tune, initvals, compile_kwargs, **kwargs)
   1602     q, _ = DictToArrayBijection.map(ip)
   1603     return logp_dlogp_func([q], extra_vars={})[0]
-> 1605 initial_points = _init_jitter(
   1606     model,
   1607     initvals,
   1608     seeds=random_seed_list,
   1609     jitter="jitter" in init,
   1610     jitter_max_retries=jitter_max_retries,
   1611     logp_fn=model_logp_fn,
   1612 )
   1614 apoints = [DictToArrayBijection.map(point) for point in initial_points]
   1615 apoints_data = [apoint.data for apoint in apoints]

File ~/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/sampling/mcmc.py:1486, in _init_jitter(model, initvals, seeds, jitter, jitter_max_retries, logp_fn)
   1483 if not np.isfinite(point_logp):
   1484     if i == jitter_max_retries:
   1485         # Print informative message on last attempted point
-> 1486         model.check_start_vals(point)
   1487     # Retry with a new seed
   1488     seed = rng.integers(2**30, dtype=np.int64)

File ~/<proj-dir>/.venv/lib/python3.13/site-packages/pymc/model/core.py:1769, in Model.check_start_vals(self, start, **kwargs)
   1766 initial_eval = self.point_logps(point=elem, **kwargs)
   1768 if not all(np.isfinite(v) for v in initial_eval.values()):
-> 1769     raise SamplingError(
   1770         "Initial evaluation of model at starting point failed!\n"
   1771         f"Starting values:\n{elem}\n\n"
   1772         f"Logp initial evaluation results:\n{initial_eval}\n"
   1773         "You can call `model.debug()` for more details."
   1774     )

SamplingError: Initial evaluation of model at starting point failed!
Starting values:
{'p_logodds__': array([nan, nan])}

Logp initial evaluation results:
{'p': nan, 'y': -inf}
You can call `model.debug()` for more details.

PyMC version information:

Installed pymc via uv. M1 Mac, Python 3.13.2 (main, Feb 4 2025, 14:51:09) [Clang 16.0.0 (clang-1600.0.26.6)], pymc==5.20.1, pytensor==2.27.1.

Context for the issue:

I was looking to set up some evaluations for data sampled through a TS bandit, that's how I got my initial numbers.

@arthur-st arthur-st added the bug label Feb 7, 2025
welcome bot commented Feb 7, 2025
🎉 Welcome to PyMC! 🎉 We're really excited to have your input into the project! 💖

If you haven't done so already, please make sure you check out our Contributing Guidelines and Code of Conduct.

@ricardoV94
Member

Does anything change if you use 100.0 (float) instead of integer 100?

@arthur-st
Author

arthur-st commented Feb 7, 2025

Does anything change if you use 100.0 (float) instead of integer 100?

Yes, it starts working with `uniform_prior=100.` (float specification). As this is lab code, I'm happy to just update it to use floats for priors as a workaround. Thank you!

@ricardoV94
Member

Sounds related to pymc-devs/pytensor#1073

Closing, as this is a PyTensor issue; it is automatically cross-linked there for reference (and relevance).
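A minimal NumPy sketch of the likely mechanism (an assumption read off the int8 vectors in the `model.debug` output above, not a confirmed diagnosis of the PyTensor internals):

```python
import numpy as np

# model.debug showed the Beta parameters stored as int8 vectors.
# In int8 arithmetic, 100 + 100 wraps around instead of giving 200:
alpha = np.array([100], dtype=np.int8)
beta = np.array([100], dtype=np.int8)
total = alpha + beta
assert total[0] == -56  # 200 wraps to 200 - 256 in int8

# Log terms in the Beta density then receive a negative argument,
# matching the "invalid value encountered in log" warning:
with np.errstate(invalid="ignore"):
    val = np.log(total.astype(np.float64))
assert np.isnan(val[0])
```

If this reading is right, it would also explain the other values in the report: 1 and 10 sum without overflowing int8, while 1000 and 10000 are stored in a wider integer dtype to begin with; only 100 fits in int8 individually but overflows when summed.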
