A Blog Entry on Bayesian Computation by an Applied Mathematician
Piecewise deterministic Markov processes (PDMPs) form a third class of Markov processes, alongside Markov chains and diffusion processes, used as Monte Carlo methods to simulate complex probability distributions. PDMPs have recently attracted substantial attention, mainly for their potential for faster convergence and their scalability to large data sets, which makes them a promising alternative to MCMC, particularly in modern 'big data' settings in statistics and machine learning.
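To make the idea concrete, here is a minimal sketch of the simplest PDMP sampler, the one-dimensional Zig-Zag process, targeting a standard normal distribution. This is an illustrative toy, not the method of the work summarized here: the particle moves with velocity ±1, flips its velocity at random event times with rate max(0, v·U'(x)) for the potential U(x) = x²/2, and event times are drawn by exact inversion of the integrated rate. Moments are estimated by integrating exactly along the piecewise-linear trajectory.

```python
import math
import random

def zigzag_moments(n_events=200_000, seed=1):
    """1-D Zig-Zag sampler targeting N(0, 1).

    Event rate: lambda(x, v) = max(0, v * x), since U'(x) = x.
    With v^2 = 1, the rate t units ahead is max(0, a + t), a = v * x,
    so the next event time can be sampled by exact inversion.
    Returns time-averaged estimates of E[x] and E[x^2].
    """
    rng = random.Random(seed)
    x, v = 0.0, 1.0
    total_t = 0.0
    int_x = 0.0    # integral of x(s) along the trajectory
    int_x2 = 0.0   # integral of x(s)^2 along the trajectory
    for _ in range(n_events):
        a = v * x
        e = -math.log(1.0 - rng.random())  # Exp(1) draw
        if a >= 0.0:
            # solve a*t + t^2/2 = e
            t = -a + math.sqrt(a * a + 2.0 * e)
        else:
            # rate is zero until s = -a, then (t + a)^2 / 2 = e
            t = -a + math.sqrt(2.0 * e)
        # exact segment integrals for linear flow x(s) = x + v*s
        int_x += x * t + v * t * t / 2.0
        int_x2 += x * x * t + x * v * t * t + t ** 3 / 3.0
        total_t += t
        x += v * t   # deterministic drift to the event
        v = -v       # velocity flip at the event
    return int_x / total_t, int_x2 / total_t
```

Running `zigzag_moments()` returns estimates close to the true moments (0, 1) of the standard normal; the continuous trajectory, not the event-time skeleton, carries the invariant distribution, which is why the time-averaged integrals are used.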
This work identifies a third advantage of PDMPs: the ability to sample unbiasedly from a certain class of probability distributions that are not absolutely continuous, for instance those with finite delta parts. We prove convergence results in the regime where the spike width tends to zero. In that regime, MCMC methods fail to detect the spikes and end up sampling from an incorrect distribution, while PDMP methods remain applicable and unbiased.
Our application setting is Bayesian variable selection with spike-and-slab priors, where we compare the efficiency of the PDMP approach with that of other unbiased methods.
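For concreteness, a standard spike-and-slab prior (written here in its common textbook form, not necessarily the exact parameterization of this work) mixes a point mass at zero with a continuous slab, for each coefficient $\beta_j$:

$$
\pi(\beta_j) = w\,\delta_0(\beta_j) + (1 - w)\,\mathcal{N}(\beta_j \mid 0, \sigma^2),
$$

where $w \in (0, 1)$ is the prior inclusion-at-zero weight. The $\delta_0$ term is precisely the kind of non-absolutely-continuous component described above: it is the zero-width limit of a narrow continuous spike, the regime in which standard MCMC breaks down.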
Keywords: Bayesian variable selection, PDMP, MCMC, Bayesian inference