Symbolic Bayesian Inference by Lazy Partial Evaluation
Bayesian inference, of posterior knowledge based on prior knowledge
and observed evidence, is typically implemented by applying Bayes's
theorem: solving an equation in which the posterior multiplied by the
probability of an observation equals a joint probability.
But when we observe a value of a continuous
variable, the observation usually has probability zero, and
Bayes's theorem says only that zero times the unknown is zero.
To infer a posterior distribution from a zero-probability observation,
we turn to the statistical technique of disintegration.
The classic formulation of disintegration tells us only what constitutes a
posterior distribution, not how to compute it.
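The density-ratio view of disintegration makes the contrast with Bayes's theorem concrete. Assuming a joint distribution with density p(x, y), conditioning on the zero-probability observation x = x₀ can be expressed (a standard formulation, not the paper's notation) as normalizing the slice of the joint density:

```latex
p(y \mid x = x_0) \;=\; \frac{p(x_0, y)}{\int p(x_0, y')\,\mathrm{d}y'}
```

The denominator is a density value, not the probability P(X = x₀) (which is zero), so the ratio is well defined even for a zero-probability observation.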
But by representing all distributions and observations as terms of our
probabilistic language, core Hakaru, we have developed the first
constructive method of computing disintegrations,
solving the problem of drawing inferences from zero-probability observations.
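The idea can be illustrated numerically. The sketch below is a plain Python illustration, not Hakaru and not the paper's method: it assumes a small hypothetical model (x ~ Normal(0, 1), y | x ~ Normal(x, 1)) and conditions on the zero-probability observation x = 2.0 by normalizing a slice of the joint density, as disintegration prescribes.

```python
import math

# Hypothetical model for illustration: x ~ Normal(0, 1), y | x ~ Normal(x, 1).
def joint_density(x, y):
    """Joint density p(x, y) of the illustrative model."""
    p_x = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    p_y_given_x = math.exp(-((y - x) ** 2) / 2) / math.sqrt(2 * math.pi)
    return p_x * p_y_given_x

# Grid for numerical integration (wide enough to capture all the mass).
STEP = 0.01
GRID = [-10 + STEP * i for i in range(2001)]

def posterior_density(y, x_obs):
    """Posterior density of y given the observation x = x_obs.

    P(x = x_obs) is zero, so instead of dividing by that probability we
    normalize the slice p(x_obs, .) of the joint density -- the
    density-ratio form of disintegration.
    """
    z = sum(joint_density(x_obs, yp) * STEP for yp in GRID)
    return joint_density(x_obs, y) / z

# For this model the posterior of y given x = 2.0 is Normal(2.0, 1),
# so the numerically computed posterior mean should be close to 2.0.
mean = sum(y * posterior_density(y, 2.0) * STEP for y in GRID)
```

This is only the measure-theoretic idea in miniature; the paper's contribution is computing such disintegrations symbolically, by partial evaluation of core Hakaru terms, rather than numerically.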
Our method uses a lazy partial evaluator to transform terms of core Hakaru,
and we argue its correctness using a semantics of core Hakaru in which
monadic terms denote measures.
The method, which has been implemented in a larger system,
is useful not only on its own but also in composition with sampling
and other inference methods commonly used in machine learning.
The paper is available as a US Letter PDF (291K).