PDMPFlux.jl
Package for the New Era of MCMC: PDMPFlux.jl
10/29/2024
A Blog Entry on Bayesian Computation by an Applied Mathematician
PDMPFlux.jl: a package for MCMC sampling.

Instead of Markov chains, PDMPFlux.jl uses piecewise deterministic Markov processes (PDMPs).
Currently, most PDMPs move piecewise linearly.
| | PDMP | Diffusion |
|---|---|---|
| Diffuse? | No | Yes |
| Jump? | Yes | No |
| Driving noise | Poisson | Gauss |
```julia
using PDMPFlux

function U_Cauchy(x::Vector)
    return sum(log.(1 .+ x.^2))  # potential U = -log p of an i.i.d. Cauchy target
end

dim = 10
sampler = ZigZagAD(dim, U_Cauchy)  # Instantiate a sampler
```
Inputs: the dimension $d$, and any potential $U$ that satisfies
$$
p(x) \,\propto\, \exp\left\{ -U(x) \right\}, \qquad x\in\mathbb{R}^d.
$$
```julia
N_sk, N, xinit, vinit = 1_000_000, 1_000_000, zeros(dim), ones(dim)  # Hyperparameters
samples = sample(sampler, N_sk, N, xinit, vinit, seed=2024)
```
Output: $N$ samples from $p \,\propto\, e^{-U}$.
The function `sample` is a wrapper around:
```julia
traj = sample_skeleton(sampler, N_sk, xinit, vinit)  # simulate skeleton points
samples = sample_from_skeleton(sampler, N, traj)     # get samples from the skeleton points
```
`traj` contains the list $\{x_i\}_{i=1}^{N_{\mathrm{sk}}}$ of skeleton points.
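Conceptually, the second step turns the skeleton into samples by interpolation: between events the process moves linearly, so drawing uniform times on $[0,T]$ and reading off the interpolated position yields draws from the trajectory. Here is a minimal sketch of that idea (not PDMPFlux.jl's internals; `interpolate_skeleton`, `ts`, and `xs` are made-up names, and the event times `ts` are assumed to start at 0):

```julia
# Conceptual sketch (not PDMPFlux.jl internals): the continuous trajectory is the
# linear interpolation of the skeleton points (t_i, x_i), so uniform times on
# [0, T] plus interpolation give samples.
function interpolate_skeleton(ts::Vector{Float64}, xs::Vector{<:AbstractVector}, N::Int)
    T = ts[end]
    map(1:N) do _
        t = rand() * T
        i = min(searchsortedlast(ts, t), length(ts) - 1)  # segment containing t
        w = (t - ts[i]) / (ts[i+1] - ts[i])               # linear interpolation weight
        (1 - w) .* xs[i] .+ w .* xs[i+1]
    end
end
```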
We see that the acceptance rate is a bit low, due to the heavy tails of $p$.
Reversibility (a.k.a. detailed balance): $p(x)\,q(y|x) = p(y)\,q(x|y)$. In words:
$$
\text{Probability}[\text{Going}\;x\to y] = \text{Probability}[\text{Going}\;y\to x].
$$
This makes the entire space harder to explore, resulting in the slow mixing of MH.
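For reference, a textbook identity (not specific to this post): the MH acceptance probability $\alpha(x,y)=\min\bigl\{1,\tfrac{p(y)\,q(x|y)}{p(x)\,q(y|x)}\bigr\}$ is built exactly so that detailed balance holds:
$$
\begin{aligned}
p(x)\,q(y|x)\,\alpha(x,y)
&= p(x)\,q(y|x)\,\min\left\{1,\ \frac{p(y)\,q(x|y)}{p(x)\,q(y|x)}\right\} \\
&= \min\bigl\{p(x)\,q(y|x),\ p(y)\,q(x|y)\bigr\} \\
&= p(y)\,q(x|y)\,\alpha(y,x).
\end{aligned}
$$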
Lifted MH attaches a momentum $\sigma\in\{\pm1\}$ to the state and uses two proposal kernels:

- $q^{(+1)}$: only proposes $\rightarrow$ moves
- $q^{(-1)}$: only proposes $\leftarrow$ moves

Once it starts going uphill, it continues to go uphill.
This is irreversible, since
$$
\text{Probability}[x\to y] \ne \text{Probability}[y\to x].
$$
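As a concrete illustration, here is a minimal 1d lifted MH sketch (toy code, not part of PDMPFlux.jl; `lifted_mh`, the step size `h`, and the Gaussian example are assumptions):

```julia
# Minimal 1d lifted MH sketch: propose only in the direction of the momentum σ,
# and flip σ on rejection. Illustrative only; not PDMPFlux.jl code.
function lifted_mh(logp, x0; h = 0.5, N = 10_000)
    x, σ = x0, 1
    chain = Vector{Float64}(undef, N)
    for n in 1:N
        y = x + σ * h                        # q^{(σ)}: propose only σ-direction moves
        if log(rand()) < logp(y) - logp(x)   # usual MH accept/reject
            x = y                            # accept: keep moving the same way
        else
            σ = -σ                           # reject: flip the momentum
        end
        chain[n] = x
    end
    return chain
end

chain = lifted_mh(x -> -x^2 / 2, 0.0)        # e.g., a standard Gaussian target
```

On rejection the momentum flips, so the chain sweeps across regions in long one-directional runs instead of diffusing back and forth.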
‘Limiting case of lifted MH’ means that we only simulate the times at which the momentum $\sigma\in\{\pm1\}$ of lifted MH should be flipped.
(1d Zig-Zag sampler; Bierkens et al., 2019)

Input: the gradient $\nabla\log p$ of the log target density

For $n\in\{1,2,\cdots,N\}$: simulate the next momentum-flip time, move linearly until it, then flip the momentum.
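A minimal sketch of this loop via Poisson thinning, assuming the flip rate $\lambda(x,\sigma)=\max\{0,\,-\sigma\,(\log p)'(x)\}$ admits a constant upper bound (illustrative code, not PDMPFlux.jl's implementation; `zigzag_1d`, `glp`, and `rate_bound` are made-up names):

```julia
using Random: randexp

# Minimal 1d Zig-Zag sketch via Poisson thinning. Illustrative only.
# Assumption: `rate_bound` is a constant upper bound on the flip rate
# λ(x, σ) = max(0, -σ * glp(x)), where `glp` is the gradient ∇log p.
function zigzag_1d(glp, rate_bound, x0; N = 1_000)
    x, σ, t = x0, 1.0, 0.0
    skeleton = [(t, x)]                        # skeleton points (t_n, x_n)
    for n in 1:N
        while true
            τ = randexp() / rate_bound         # candidate event from the bounding process
            t += τ
            x += σ * τ                         # deterministic linear motion in between
            λ = max(0.0, -σ * glp(x))          # true flip rate at the candidate point
            rand() * rate_bound < λ && break   # thinning: accept w.p. λ / rate_bound
        end
        σ = -σ                                 # flip the momentum at the accepted event
        push!(skeleton, (t, x))
    end
    return skeleton
end

skeleton = zigzag_1d(x -> -2x / (1 + x^2), 1.0, 0.0)  # 1d Cauchy target
```

For the Cauchy target above, $|(\log p)'(x)| = |2x/(1+x^2)| \le 1$, so `rate_bound = 1.0` is a valid constant bound.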
Using PDMPFlux.jl, I want to fit a large Bayesian model, showcasing the usefulness of PDMP samplers.
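As a hypothetical sketch of what such a model could look like (toy data and made-up names; only the `ZigZagAD` / `sample` calls follow the usage shown above):

```julia
using PDMPFlux

# Hypothetical sketch: a Bayesian logistic regression potential in the form
# PDMPFlux.jl expects, U = -log posterior. `X` and `y` are toy data.
X, y = randn(100, 10), rand(0:1, 100)
dim = size(X, 2)

function U_logistic(β::Vector)
    z = X * β
    loglik = sum(y .* z .- log1p.(exp.(z)))   # Bernoulli log-likelihood
    logprior = -sum(abs2, β) / 2              # standard Gaussian prior
    return -(loglik + logprior)
end

sampler = ZigZagAD(dim, U_logistic)
samples = sample(sampler, 100_000, 100_000, zeros(dim), ones(dim), seed=2024)
```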