Merged
38 commits
10f960e
Import `varname_leaves` etc from AbstractPPL instead
penelopeysm Sep 24, 2025
3a04643
[no ci] initial updates for InitContext
penelopeysm Sep 24, 2025
7e522a6
[no ci] More fixes
penelopeysm Sep 24, 2025
9bc58c8
[no ci] Fix pMCMC
penelopeysm Sep 24, 2025
02d1d0e
[no ci] Fix Gibbs
penelopeysm Sep 24, 2025
27b0096
[no ci] More fixes, reexport InitFrom
penelopeysm Sep 24, 2025
7f12c3e
Fix a bunch of tests; I'll let CI tell me what's still broken...
penelopeysm Sep 24, 2025
ed197f9
Remove comment
penelopeysm Sep 24, 2025
c09c2a5
Fix more tests
penelopeysm Sep 24, 2025
20f9e97
More test fixes
penelopeysm Sep 24, 2025
ba4da83
Fix more tests
penelopeysm Sep 25, 2025
4b143ad
fix GeneralizedExtremeValue numerical test
penelopeysm Sep 25, 2025
b5d82c9
fix sample method
penelopeysm Sep 25, 2025
c315993
fix ESS reproducibility
penelopeysm Sep 25, 2025
3afd807
Fix externalsampler test correctly
penelopeysm Sep 25, 2025
25c6513
Fix everything (I _think_)
penelopeysm Sep 25, 2025
d4aaa18
Add changelog
penelopeysm Sep 25, 2025
aa3cfcf
Fix remaining tests (for real this time)
penelopeysm Sep 25, 2025
c0ea6e0
Specify default chain type in Turing
penelopeysm Oct 2, 2025
b0badc2
fix DPPL revision
penelopeysm Oct 3, 2025
049e950
Fix changelog to mention unwrapped NT / Dict for initial_params
penelopeysm Oct 16, 2025
14d3c14
Remove references to islinked, set_flag, unset_flag
penelopeysm Oct 16, 2025
ae7e1e2
Merge branch 'breaking' into py/dppl-0.38
penelopeysm Oct 16, 2025
3a13c63
use `setleafcontext(::Model, ::AbstractContext)`
penelopeysm Oct 16, 2025
5ed1230
Fix for upstream removal of default_chain_type
penelopeysm Oct 16, 2025
2a585fc
Add clarifying comment for IS test
penelopeysm Oct 16, 2025
16198fa
Revert ESS test (and add some numerical accuracy checks)
penelopeysm Oct 16, 2025
89a61af
istrans -> is_transformed
penelopeysm Oct 16, 2025
6af6330
Remove `loadstate` and `resume_from`
penelopeysm Oct 16, 2025
85a25b4
Remove a Sampler test
penelopeysm Oct 16, 2025
55e465b
Paper over one crack
penelopeysm Oct 16, 2025
9c34014
fix `resume_from`
penelopeysm Oct 16, 2025
deff3fd
remove a `Sampler` test
penelopeysm Oct 16, 2025
bbbde35
Update HISTORY.md
penelopeysm Oct 18, 2025
f927308
Remove `Sampler`, remove `InferenceAlgorithm`, transfer `initialstep`…
penelopeysm Oct 21, 2025
0566edb
Fix a word in changelog
penelopeysm Oct 21, 2025
43a30a2
Improve changelog
penelopeysm Oct 21, 2025
750418a
Add PNTDist to changelog
penelopeysm Oct 22, 2025
53 changes: 53 additions & 0 deletions HISTORY.md
@@ -1,5 +1,58 @@
# 0.41.0

## DynamicPPL 0.38

Turing.jl v0.41 brings with it all the underlying changes in DynamicPPL 0.38.
Please see [the DynamicPPL changelog](https://github.com/TuringLang/DynamicPPL.jl/blob/main/HISTORY.md) for full details; in this section we only describe the changes that directly affect end-users of Turing.jl.

### Performance

A number of functions such as `returned` and `predict` will have substantially better performance in this release.

### `ProductNamedTupleDistribution`

`Distributions.ProductNamedTupleDistribution` can now be used on the right-hand side of `~` in Turing models.
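
For example (a minimal sketch; the model and variable names are purely illustrative, and a sufficiently recent Distributions.jl is assumed), a named-tuple-valued random variable can now be drawn in a single statement:

```julia
using Turing, Distributions

@model function demo()
    # `product_distribution` applied to a NamedTuple of distributions constructs a
    # `ProductNamedTupleDistribution`, which may now appear on the right-hand side of `~`.
    params ~ product_distribution((mu=Normal(0, 1), sigma=Exponential(1)))
    return params
end
```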

### Initial parameters

**Initial parameters for MCMC sampling must now be specified in a different form.**
You still need to use the `initial_params` keyword argument to `sample`, but the allowed values are different.
For almost all samplers in Turing.jl (except `Emcee`) this should now be a `DynamicPPL.AbstractInitStrategy`.

There are three kinds of initialisation strategies provided out of the box with Turing.jl (they are exported so you can use these directly with `using Turing`):

- `InitFromPrior()`: Sample from the prior distribution. This is the default for most samplers in Turing.jl (if you don't specify `initial_params`).

- `InitFromUniform(a, b)`: Sample uniformly from `[a, b]` in linked space. This is the default for Hamiltonian samplers. If `a` and `b` are not specified, it defaults to `[-2, 2]`, which preserves the behaviour of previous versions (and mimics that of Stan).

- `InitFromParams(p)`: Explicitly provide a set of initial parameters. **Note: `p` must be either a `NamedTuple` or an `AbstractDict{<:VarName}`; it can no longer be a `Vector`.** Parameters must be provided in unlinked space, even if the sampler later performs linking.

  For this release of Turing.jl, you can also pass a bare `NamedTuple` or `AbstractDict{<:VarName}` directly and it will be automatically wrapped in `InitFromParams` for you. This is an intermediate measure for backwards compatibility and will eventually be removed.

This change was made because vectors of parameter values are semantically ambiguous.
It is not clear which element of the vector corresponds to which variable in the model, nor whether the parameters are in linked or unlinked space.
Previously, both of these depended on the internal structure of the VarInfo, which is an implementation detail.
In contrast, the behaviour of `AbstractDict`s and `NamedTuple`s is invariant to the ordering of variables, and it is also easier for readers to see which variable is being set to which value.

If you were previously using `varinfo[:]` to extract a vector of initial parameters, you can now use `Dict(k => varinfo[k] for k in keys(varinfo))` to extract a Dict of initial parameters.

For more details about initialisation you can also refer to [the main TuringLang docs](https://turinglang.org/docs/usage/sampling-options/#specifying-initial-parameters), and/or the [DynamicPPL API docs](https://turinglang.org/DynamicPPL.jl/stable/api/#DynamicPPL.InitFromPrior).
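
As a concrete sketch (the model and the numerical values below are purely illustrative):

```julia
using Turing

@model function gdemo(x)
    m ~ Normal(0, 1)
    s ~ truncated(Normal(0, 1); lower=0)
    x ~ Normal(m, s)
end

model = gdemo(1.5)

# Explicit initial values, given in unlinked (constrained) space:
sample(model, NUTS(), 100; initial_params=InitFromParams((m=0.0, s=1.0)))

# Sample initial values uniformly on [-1, 1] in linked space:
sample(model, NUTS(), 100; initial_params=InitFromUniform(-1.0, 1.0))

# Draw initial values from the prior:
sample(model, NUTS(), 100; initial_params=InitFromPrior())
```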

### `resume_from` and `loadstate`

The `resume_from` keyword argument to `sample` has been removed.
Instead of `sample(...; resume_from=chain)`, use `sample(...; initial_state=loadstate(chain))`, which is entirely equivalent.
`loadstate` is now exported from Turing rather than DynamicPPL.

Note that `loadstate` only works for `MCMCChains.Chains`.
FlexiChains users should consult the FlexiChains documentation, where this functionality is described in detail.
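
A sketch of the replacement pattern (assuming the first chain was sampled with `save_state=true`, so that the sampler state is stored inside the chain):

```julia
# Previously: sample(model, NUTS(), 1_000; resume_from=chain)
chain = sample(model, NUTS(), 1_000; save_state=true)
more_samples = sample(model, NUTS(), 1_000; initial_state=loadstate(chain))
```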

### `pointwise_logdensities`

`pointwise_logdensities(model, chn)`, `pointwise_loglikelihoods(...)`, and `pointwise_prior_logdensities(...)` now return an `MCMCChains.Chains` object if `chn` is itself an `MCMCChains.Chains` object.
The old behaviour of returning an `OrderedDict` is still available: you just need to pass `OrderedDict` as the third argument, i.e., `pointwise_logdensities(model, chn, OrderedDict)`.
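
For example (assuming `model` and an `MCMCChains.Chains` object `chn` obtained by sampling from it):

```julia
using OrderedCollections: OrderedDict

lds_chain = pointwise_logdensities(model, chn)               # MCMCChains.Chains
lds_dict  = pointwise_logdensities(model, chn, OrderedDict)  # old OrderedDict behaviour
```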

## Initial step in MCMC sampling

HMC and NUTS samplers no longer take an extra single step before starting the chain.
This means that if you do not discard any samples at the start, the first sample will be the initial parameters (which may be user-provided).

4 changes: 2 additions & 2 deletions Project.toml
@@ -45,7 +45,7 @@ Optim = "429524aa-4258-5aef-a3af-852621145aeb"

[extensions]
TuringDynamicHMCExt = "DynamicHMC"
TuringOptimExt = "Optim"
TuringOptimExt = ["Optim", "AbstractPPL"]

[compat]
ADTypes = "1.9"
@@ -64,7 +64,7 @@ Distributions = "0.25.77"
DistributionsAD = "0.6"
DocStringExtensions = "0.8, 0.9"
DynamicHMC = "3.4"
DynamicPPL = "0.37.2"
DynamicPPL = "0.38"
EllipticalSliceSampling = "0.5, 1, 2"
ForwardDiff = "0.10.3, 1"
Libtask = "0.9.3"
10 changes: 10 additions & 0 deletions docs/src/api.md
@@ -75,6 +75,16 @@ even though [`Prior()`](@ref) is actually defined in the `Turing.Inference` modu
| `RepeatSampler` | [`Turing.Inference.RepeatSampler`](@ref) | A sampler that runs multiple times on the same variable |
| `externalsampler` | [`Turing.Inference.externalsampler`](@ref) | Wrap an external sampler for use in Turing |

### Initialisation strategies

Turing.jl provides several strategies to initialise parameters for models.

| Exported symbol | Documentation | Description |
|:----------------- |:--------------------------------------- |:--------------------------------------------------------------- |
| `InitFromPrior` | [`DynamicPPL.InitFromPrior`](@extref) | Obtain initial parameters from the prior distribution |
| `InitFromUniform` | [`DynamicPPL.InitFromUniform`](@extref) | Obtain initial parameters by sampling uniformly in linked space |
| `InitFromParams` | [`DynamicPPL.InitFromParams`](@extref) | Manually specify (possibly a subset of) initial parameters |
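
For instance (a minimal sketch; the model and value are purely illustrative), a strategy is passed to `sample` via the `initial_params` keyword argument:

```julia
using Turing

@model function demo()
    x ~ Normal()
end

# Start the chain from an explicitly chosen (unlinked-space) value of `x`.
sample(demo(), NUTS(), 100; initial_params=InitFromParams((x=0.5,)))
```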

### Variational inference

See the [docs of AdvancedVI.jl](https://turinglang.org/AdvancedVI.jl/stable/) for detailed usage and the [variational inference tutorial](https://turinglang.org/docs/tutorials/09-variational-inference/) for a basic walkthrough.
16 changes: 6 additions & 10 deletions ext/TuringDynamicHMCExt.jl
@@ -44,26 +44,22 @@ struct DynamicNUTSState{L,V<:DynamicPPL.AbstractVarInfo,C,M,S}
stepsize::S
end

function DynamicPPL.initialsampler(::DynamicPPL.Sampler{<:DynamicNUTS})
return DynamicPPL.SampleFromUniform()
end

function DynamicPPL.initialstep(
function Turing.Inference.initialstep(
rng::Random.AbstractRNG,
model::DynamicPPL.Model,
spl::DynamicPPL.Sampler{<:DynamicNUTS},
spl::DynamicNUTS,
vi::DynamicPPL.AbstractVarInfo;
kwargs...,
)
# Ensure that initial sample is in unconstrained space.
if !DynamicPPL.islinked(vi)
if !DynamicPPL.is_transformed(vi)
vi = DynamicPPL.link!!(vi, model)
vi = last(DynamicPPL.evaluate!!(model, vi))
end

# Define log-density function.
ℓ = DynamicPPL.LogDensityFunction(
model, DynamicPPL.getlogjoint_internal, vi; adtype=spl.alg.adtype
model, DynamicPPL.getlogjoint_internal, vi; adtype=spl.adtype
)

# Perform initial step.
@@ -84,14 +80,14 @@ end
function AbstractMCMC.step(
rng::Random.AbstractRNG,
model::DynamicPPL.Model,
spl::DynamicPPL.Sampler{<:DynamicNUTS},
spl::DynamicNUTS,
state::DynamicNUTSState;
kwargs...,
)
# Compute next sample.
vi = state.vi
ℓ = state.logdensity
steps = DynamicHMC.mcmc_steps(rng, spl.alg.sampler, state.metric, ℓ, state.stepsize)
steps = DynamicHMC.mcmc_steps(rng, spl.sampler, state.metric, ℓ, state.stepsize)
Q, _ = DynamicHMC.mcmc_next_step(steps, state.cache)

# Create next sample and state.
3 changes: 2 additions & 1 deletion ext/TuringOptimExt.jl
@@ -1,6 +1,7 @@
module TuringOptimExt

using Turing: Turing
using AbstractPPL: AbstractPPL
import Turing: DynamicPPL, NamedArrays, Accessors, Optimisation
using Optim: Optim

@@ -186,7 +187,7 @@ function _optimize(
f.ldf.model, f.ldf.getlogdensity, vi_optimum; adtype=f.ldf.adtype
)
vals_dict = Turing.Inference.getparams(f.ldf.model, vi_optimum)
iters = map(DynamicPPL.varname_and_value_leaves, keys(vals_dict), values(vals_dict))
iters = map(AbstractPPL.varname_and_value_leaves, keys(vals_dict), values(vals_dict))
vns_vals_iter = mapreduce(collect, vcat, iters)
varnames = map(Symbol ∘ first, vns_vals_iter)
vals = map(last, vns_vals_iter)
13 changes: 11 additions & 2 deletions src/Turing.jl
@@ -73,7 +73,10 @@ using DynamicPPL:
conditioned,
to_submodel,
LogDensityFunction,
@addlogprob!
@addlogprob!,
InitFromPrior,
InitFromUniform,
InitFromParams
using StatsBase: predict
using OrderedCollections: OrderedDict

@@ -148,11 +151,17 @@ export
fix,
unfix,
OrderedDict, # OrderedCollections
# Initialisation strategies for models
InitFromPrior,
InitFromUniform,
InitFromParams,
# Point estimates - Turing.Optimisation
# The MAP and MLE exports are only needed for the Optim.jl interface.
maximum_a_posteriori,
maximum_likelihood,
MAP,
MLE
MLE,
# Chain save/resume
loadstate

end
54 changes: 20 additions & 34 deletions src/mcmc/Inference.jl
@@ -13,7 +13,6 @@ using DynamicPPL:
# or implement it for other VarInfo types and export it from DPPL.
all_varnames_grouped_by_symbol,
syms,
islinked,
setindex!!,
push!!,
setlogp!!,
@@ -23,12 +22,7 @@ using DynamicPPL:
getsym,
getdist,
Model,
Sampler,
SampleFromPrior,
SampleFromUniform,
DefaultContext,
set_flag!,
unset_flag!
DefaultContext
using Distributions, Libtask, Bijectors
using DistributionsAD: VectorOfMultivariate
using LinearAlgebra
@@ -55,12 +49,9 @@ import Random
import MCMCChains
import StatsBase: predict

export InferenceAlgorithm,
Hamiltonian,
export Hamiltonian,
StaticHamiltonian,
AdaptiveHamiltonian,
SampleFromUniform,
SampleFromPrior,
MH,
ESS,
Emcee,
@@ -78,13 +69,16 @@ export InferenceAlgorithm,
RepeatSampler,
Prior,
predict,
externalsampler
externalsampler,
init_strategy,
loadstate

###############################################
# Abstract interface for inference algorithms #
###############################################
#########################################
# Generic AbstractMCMC methods dispatch #
#########################################

include("algorithm.jl")
const DEFAULT_CHAIN_TYPE = MCMCChains.Chains
include("abstractmcmc.jl")

####################
# Sampler wrappers #
@@ -262,13 +256,13 @@ function _params_to_array(model::DynamicPPL.Model, ts::Vector)
dicts = map(ts) do t
# In general getparams returns a dict of VarName => values. We need to also
# split it up into constituent elements using
# `DynamicPPL.varname_and_value_leaves` because otherwise MCMCChains.jl
# `AbstractPPL.varname_and_value_leaves` because otherwise MCMCChains.jl
# won't understand it.
vals = getparams(model, t)
nms_and_vs = if isempty(vals)
Tuple{VarName,Any}[]
else
iters = map(DynamicPPL.varname_and_value_leaves, keys(vals), values(vals))
iters = map(AbstractPPL.varname_and_value_leaves, keys(vals), values(vals))
mapreduce(collect, vcat, iters)
end
nms = map(first, nms_and_vs)
@@ -315,11 +309,10 @@ end
getlogevidence(transitions, sampler, state) = missing

# Default MCMCChains.Chains constructor.
# This is type piracy (at least for SampleFromPrior).
function AbstractMCMC.bundle_samples(
ts::Vector{<:Union{Transition,AbstractVarInfo}},
model::AbstractModel,
spl::Union{Sampler{<:InferenceAlgorithm},SampleFromPrior,RepeatSampler},
ts::Vector{<:Transition},
model::DynamicPPL.Model,
spl::AbstractSampler,
state,
chain_type::Type{MCMCChains.Chains};
save_state=false,
@@ -378,11 +371,10 @@ function AbstractMCMC.bundle_samples(
return sort_chain ? sort(chain) : chain
end

# This is type piracy (for SampleFromPrior).
function AbstractMCMC.bundle_samples(
ts::Vector{<:Union{Transition,AbstractVarInfo}},
model::AbstractModel,
spl::Union{Sampler{<:InferenceAlgorithm},SampleFromPrior,RepeatSampler},
ts::Vector{<:Transition},
model::DynamicPPL.Model,
spl::AbstractSampler,
state,
chain_type::Type{Vector{NamedTuple}};
kwargs...,
@@ -423,7 +415,7 @@ function group_varnames_by_symbol(vns)
return d
end

function save(c::MCMCChains.Chains, spl::Sampler, model, vi, samples)
function save(c::MCMCChains.Chains, spl::AbstractSampler, model, vi, samples)
nt = NamedTuple{(:sampler, :model, :vi, :samples)}((spl, model, deepcopy(vi), samples))
return setinfo(c, merge(nt, c.info))
end
@@ -442,18 +434,12 @@ include("sghmc.jl")
include("emcee.jl")
include("prior.jl")

#################################################
# Generic AbstractMCMC methods dispatch #
#################################################

include("abstractmcmc.jl")

################
# Typing tools #
################

function DynamicPPL.get_matching_type(
spl::Sampler{<:Union{PG,SMC}}, vi, ::Type{TV}
spl::Union{PG,SMC}, vi, ::Type{TV}
) where {T,N,TV<:Array{T,N}}
return Array{T,N}
end