Each entry below lists the paper title, its date and authors, and four fields: Target (the quantity the neural network approximates, e.g. posterior, likelihood, or likelihood-ratio), Neural Network (the architecture used), Samples (how posterior samples are obtained, e.g. direct or via MCMC), and Sequential (whether the method refines its estimates over multiple rounds of simulations).

Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation

May 20, 2016 George Papamakarios, Iain Murray
Target: posterior Neural Network: mixture-density Samples: direct Sequential: yes
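As a concrete picture of the "mixture-density, direct sampling" recipe used here and in several entries below, the following is a minimal sketch of a conditional mixture-density network trained on simulated (theta, x) pairs. It is illustrative only: the paper's proposal-prior correction and sequential rounds are omitted, and all dimensions and hyper-parameters are arbitrary.

    # Minimal conditional mixture-density sketch (illustrative only; the
    # paper's proposal-prior correction and sequential rounds are omitted).
    import torch
    import torch.nn as nn
    import torch.distributions as D

    class ConditionalMDN(nn.Module):
        def __init__(self, x_dim=10, theta_dim=2, n_comp=5, hidden=64):
            super().__init__()
            self.n_comp, self.theta_dim = n_comp, theta_dim
            self.net = nn.Sequential(
                nn.Linear(x_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_comp * (1 + 2 * theta_dim)))

        def dist(self, x):
            out = self.net(x)
            logits, mu, log_sig = out.split(
                [self.n_comp, self.n_comp * self.theta_dim,
                 self.n_comp * self.theta_dim], dim=-1)
            mu = mu.view(-1, self.n_comp, self.theta_dim)
            sig = log_sig.view(-1, self.n_comp, self.theta_dim).exp()
            comp = D.Independent(D.Normal(mu, sig), 1)
            return D.MixtureSameFamily(D.Categorical(logits=logits), comp)

    # Training maximizes log q(theta | x) over simulated pairs theta ~ prior,
    # x ~ simulator(theta); afterwards, mdn.dist(x_o).sample() yields direct
    # posterior draws conditioned on the observation x_o.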

Flexible statistical inference for mechanistic models of neural dynamics

November 6, 2017 Jan-Matthis Lueckmann, Pedro J. Gonçalves, Giacomo Bassetto, Kaan Öcal, Marcel Nonnenmacher, Jakob H. Macke
Target: posterior Neural Network: mixture-density Samples: direct Sequential: yes

Hierarchical Implicit Models and Likelihood-Free Variational Inference

November, 2017 Dustin Tran, Rajesh Ranganath, David M. Blei
Target: posterior Neural Network: generative-NN Samples: direct Sequential: no
  • Trains a generative network for inferring the posterior in hierarchical models.
  • For non-hierarchical models, the approach is very similar to GATSBI (Ramesh et al., 2021)

Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows

May 18, 2018 George Papamakarios, David C. Sterratt, Iain Murray
Target: likelihood Neural Network: normalizing-flows Samples: MCMC Sequential: yes

Likelihood-free inference with emulator networks

May 23, 2018 Jan-Matthis Lueckmann, Giacomo Bassetto, Theofanis Karaletsos, Jakob H. Macke
Target: likelihood Neural Network: deep-ensembles Samples: MCMC Sequential: yes

Analyzing Inverse Problems with Invertible Neural Networks

February, 2019 Lynton Ardizzone, Jakob Kruse, Sebastian Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen, Lena Maier-Hein, Carsten Rother, Ullrich Köthe
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: no
  • A different way of training a normalizing flow for amortized posterior inference compared with BayesFlow (Radev et al., 2020, below).

Likelihood-free MCMC with Amortized Approximate Ratio Estimators

March 10, 2019 Joeri Hermans, Volodimir Begy, Gilles Louppe
Target: likelihood-ratio Neural Network: classifier-NN Samples: MCMC Sequential: no
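As a concrete picture of classifier-based ratio estimation, here is a minimal sketch under assumed toy dimensions: a classifier separates joint pairs from shuffled (marginal) pairs, and at optimum its logit equals the log likelihood-to-evidence ratio, which can be combined with the log prior inside MCMC. This is an illustration, not the authors' implementation.

    # Sketch of amortized ratio estimation (illustrative). At optimum the
    # logit equals log p(x | theta) / p(x), so log p(theta | x) is known up
    # to a constant as logit + log prior(theta), usable in any MCMC sampler.
    import torch
    import torch.nn as nn

    theta_dim, x_dim = 2, 10                     # assumed toy dimensions
    ratio_net = nn.Sequential(nn.Linear(theta_dim + x_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1))

    def nre_loss(theta, x):
        # (theta, x) come aligned from the simulator: joint samples. Shuffling
        # theta across the batch gives approximate samples from p(theta)p(x).
        bce = nn.functional.binary_cross_entropy_with_logits
        joint = ratio_net(torch.cat([theta, x], dim=-1))
        marg = ratio_net(torch.cat([theta[torch.randperm(len(theta))], x],
                                   dim=-1))
        return (bce(joint, torch.ones_like(joint))
                + bce(marg, torch.zeros_like(marg)))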

Automatic posterior transformation for likelihood-free inference

May 17, 2019 David S. Greenberg, Marcel Nonnenmacher, Jakob H. Macke
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: yes

Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems

October, 2019 Geoffrey Roeder, Paul K. Grant, Andrew Phillips, Neil Dalchau, Edward Meeds
Target: posterior Neural Network: autoencoders Samples: direct Sequential: no
  • Code
  • Use variational inference to train autoencoders for posterior inference.

The frontier of simulation-based inference

November 4, 2019 Kyle Cranmer, Johann Brehmer, Gilles Louppe
Review paper
  • They discuss recent advances in LFI, including NN-based methods.

On Contrastive Learning for Likelihood-free Inference

February 10, 2020 Conor Durkan, Iain Murray, George Papamakarios
Target: likelihood-ratio, posterior Neural Network: classifier-NN, normalizing-flows Samples: MCMC, direct Sequential: yes
  • Code
  • They unify the ratio estimation approach by Hermans et al. (2019) with the sequential neural posterior formulation of Greenberg et al. (2019).

BayesFlow: Learning complex stochastic models with invertible neural networks

March 13, 2020 Stefan T. Radev, Ulf K. Mertens, Andreas Voss, Lynton Ardizzone, Ullrich Köthe
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: no

Distortion estimates for approximate Bayesian inference

June 19, 2020 Hanwen Xing, Geoff Nicholls, Jeong (Kate) Lee
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: no
  • They use normalizing flows to build distortion maps that act on the univariate marginals of an approximate posterior, moving them closer to the exact posterior for a specific observation, without ever evaluating the exact posterior.
  • Can be applied to any posterior approximation, provided simulating from the model is feasible.

sbi – A toolkit for simulation-based inference

July 17, 2020 Alvaro Tejero-Cantero, Jan Boelts, Michael Deistler, Jan-Matthis Lueckmann, Conor Durkan, Pedro J. Gonçalves, David S. Greenberg, Jakob H. Macke
Python package
  • Code
  • A Python toolbox for simulation-based inference methods based on NNs; a minimal usage sketch follows below.
  • Documentation at this webpage
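A minimal usage sketch with a toy simulator; the calls reflect the package's documented workflow, but the exact API may differ between sbi versions, so treat it as indicative rather than definitive.

    # Minimal sbi usage sketch (toy simulator; API may differ by version).
    import torch
    from sbi.inference import SNPE
    from sbi.utils import BoxUniform

    prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))

    def simulator(theta):                # noisy identity, just for illustration
        return theta + 0.1 * torch.randn_like(theta)

    theta = prior.sample((1000,))
    x = simulator(theta)

    inference = SNPE(prior=prior)
    density_estimator = inference.append_simulations(theta, x).train()
    posterior = inference.build_posterior(density_estimator)
    samples = posterior.sample((1000,), x=torch.zeros(2))  # condition on x_o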

Error-Guided Likelihood-Free MCMC

October 13, 2020 Volodimir Begy, Erich Schikuta
Target: likelihood-ratio Neural Network: classifier-NN Samples: MCMC Sequential: no
  • Enables parameter inference for observations that lie a given distance away from the true observation
  • Using a distance of 0 recovers vanilla inference
  • Handy in applications with high-dimensional observations

Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time

November 27, 2020 Benjamin Kurt Miller, Alex Cole, Gilles Louppe, Christoph Weniger
Target: likelihood-ratio Neural Network: any-NN Samples: MCMC, rejection, importance Sequential: yes
  • Explored truncation and marginal estimation as options to increase simulation efficiency
  • Expanded upon in Truncated Marginal Neural Ratio Estimation
  • Code for this version of swyft

Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience

December 2, 2020 Alexander Fengler, Lakshmi N Govindarajan, Tony Chen, Michael J Frank
Target: likelihood Neural Network: any-NN Samples: MCMC Sequential: no
  • Code
  • They use NNs to directly approximate likelihood estimates obtained by KDE or binned empirical estimates from repeated model simulations; a toy sketch of this idea follows below.
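The toy sketch below illustrates the target-construction step under an assumed 1-D setting; the simulator, grid, and bandwidth are placeholders, and the paper's specific architectures and pipelines are not reproduced.

    # Toy sketch of likelihood-approximation targets (illustrative, 1-D data):
    # repeated simulations per parameter -> KDE -> log-density regression
    # targets for a NN mapping (theta, x) to log-likelihood, later usable
    # cheaply inside MCMC.
    import numpy as np
    from scipy.stats import gaussian_kde

    def make_targets(thetas, simulate, n_sims=1000,
                     grid=np.linspace(-5, 5, 200)):
        X, y = [], []
        for th in thetas:
            kde = gaussian_kde(simulate(th, n_sims))  # density of x given theta
            X.append(np.column_stack([np.full_like(grid, th), grid]))
            y.append(np.log(kde(grid) + 1e-12))
        return np.concatenate(X), np.concatenate(y)  # inputs (theta, x), targets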

Score Matched Neural Exponential Families for Likelihood-Free Inference

December 20, 2020 Lorenzo Pacchiardi, Ritabrata Dutta
Target: likelihood Neural Network: any-NN Samples: MCMC Sequential: no

Benchmarking Simulation-Based Inference

January 12, 2021 Jan-Matthis Lueckmann, Jan Boelts, David S. Greenberg, Pedro J. Gonçalves, Jakob H. Macke
Benchmark repository
  • Code
  • Benchmark repository for LFI methods.
  • They test the performance of LFI methods (including NN-based ones) on a variety of models and data sets.
  • Interactive results at this webpage

Sequential Neural Posterior and Likelihood Approximation

February 12, 2021 Samuel Wiqvist, Jes Frellsen, Umberto Picchini
Target: likelihood, posterior Neural Network: normalizing-flows Samples: direct Sequential: yes
  • Code
  • They use normalizing flows for both the posterior and the likelihood.
  • They also optionally use an additional NN for learning summary statistics at the same time.

Robust and integrative Bayesian neural networks for likelihood-free parameter inference

February 12, 2021 Fredrik Wrede, Robin Eriksson, Richard Jiang, Linda Petzold, Stefan Engblom, Andreas Hellander, Prashant Singh
Target: posterior Neural Network: Bayesian-NN Samples: MCMC Sequential: yes
  • They bin the parameter values and use a classifier to predict the parameter's bin from data; see the toy sketch below
  • The classifier is a BNN, trained on a set of parameter-data pairs
  • When a new observation arrives, the posterior is estimated by running MCMC over the BNN's weight posterior and combining the resulting set of classifier predictions into a (discretized) posterior distribution
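A toy sketch of the binning idea for a scalar parameter on [0, 1]; the Bayesian treatment of the classifier (MCMC over the BNN's weights, with averaging of predictions) is deliberately omitted, and all sizes are arbitrary.

    # Toy sketch of the binned-classifier idea (illustrative; the paper uses
    # a Bayesian NN and averages predictions over MCMC samples of its weights).
    import torch
    import torch.nn as nn

    x_dim, n_bins = 10, 20                               # assumed toy sizes
    edges = torch.linspace(0.0, 1.0, n_bins + 1)         # prior support [0, 1]
    clf = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(),
                        nn.Linear(64, n_bins))

    def loss(theta, x):                   # theta: (B,) scalars, x: (B, x_dim)
        labels = torch.bucketize(theta, edges[1:-1])     # bin index per theta
        return nn.functional.cross_entropy(clf(x), labels)

    # After training, softmax(clf(x_o)) is a discretized posterior over bins.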

Amortized Bayesian model comparison with evidential deep learning

March, 2021 Stefan T. Radev, Marco D'Alessandro, Ulf K. Mertens, Andreas Voss, Ullrich Köthe, Paul-Christian Bürkner
Target: model-evidence Neural Network: classifier-NN Samples: - Sequential: no
  • Code
  • A way to perform model selection, using a NN to estimate the model evidence
  • Amortized over the different models, so that they do not need to be fit to the data independently

MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories

June 3, 2021 Giulio Isacchini, Natanael Spisak, Armita Nourmohammad, Thierry Mora, Aleksandra M. Walczak
Target: likelihood-ratio Neural Network: any-NN Samples: MCMC Sequential: no

Truncated Marginal Neural Ratio Estimation

July 2, 2021 Benjamin Kurt Miller, Alex Cole, Patrick Forré, Gilles Louppe, Christoph Weniger
Target: likelihood-ratio Neural Network: any-NN Samples: MCMC, rejection, importance Sequential: yes
  • Introduced a truncation scheme whose results can be tested for calibration against the prior, despite being sequential; other sequential methods cannot be tested for calibration.
  • Introduced a technique for estimating the necessary partially marginalized posterior distributions to increase simulation efficiency.
  • Code
  • Also see general implementation

GATSBI: Generative Adversarial Training for Simulation-Based Inference

September 29, 2021 Poornima Ramesh, Jan-Matthis Lueckmann, Jan Boelts, Álvaro Tejero-Cantero, David S. Greenberg, Pedro J. Gonçalves, Jakob H. Macke
Target: posterior Neural Network: generative-NN Samples: direct Sequential: yes
  • Uses Generative Adversarial Networks to represent the posterior.
  • Code

Variational methods for simulation-based inference

September 29, 2021 Manuel Glöckler, Michael Deistler, Jakob H. Macke
Target: likelihood, posterior Neural Network: normalizing-flows Samples: direct Sequential: yes
  • Code
  • They consider the standard likelihood or likelihood-ratio approaches and replace MCMC with variational inference carried out via normalizing flows; see the sketch below
  • In this way, one gets the benefits of targeting the likelihood without the need for MCMC
  • Related to Wiqvist et al. (2021) above.
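The sketch below shows the variational step under stand-in densities: given an already-learned likelihood and the prior, a reparameterized q is fitted by maximizing the ELBO instead of running MCMC. The paper uses normalizing flows for q; a diagonal Gaussian stands in here, and log_lhat, log_prior, and x_o are hypothetical placeholders.

    # ELBO-based sketch replacing MCMC (illustrative; log_lhat, log_prior and
    # x_o are stand-ins, and a diagonal Gaussian replaces the paper's flow).
    import torch
    import torch.distributions as D

    x_o = torch.zeros(2)
    log_prior = lambda th: D.Normal(0.0, 1.0).log_prob(th).sum(-1)
    log_lhat = lambda x, th: D.Normal(th, 0.5).log_prob(x).sum(-1)  # stand-in

    mu = torch.zeros(2, requires_grad=True)
    log_sig = torch.zeros(2, requires_grad=True)
    opt = torch.optim.Adam([mu, log_sig], lr=1e-2)

    for _ in range(500):
        eps = torch.randn(256, 2)
        theta = mu + log_sig.exp() * eps                 # reparameterization
        log_q = D.Normal(mu, log_sig.exp()).log_prob(theta).sum(-1)
        elbo = (log_lhat(x_o, theta) + log_prior(theta) - log_q).mean()
        opt.zero_grad(); (-elbo).backward(); opt.step()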

Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference

October 1, 2021 François Rozet, Gilles Louppe
Target: likelihood-ratio Neural Network: classifier-NN Samples: MCMC Sequential: no
  • Extension of Hermans et al. 2019 (above) that makes it possible to obtain any marginal of interest explicitly
  • Thanks to this, marginals can be plotted without the need for posterior sampling
  • Code

Variational Likelihood-Free Gradient Descent

November 22, 2021 Jack Simons, Song Liu, Mark Beaumont
Target: likelihood-ratio Neural Network: classifier-NN Samples: Stein Variational Gradient Descent Sequential: yes
  • Considers a set of particles that will form a posterior approximation
  • The particles are fitted to the posterior by alternating between NN training and Stein Variational Gradient Descent steps; see the sketch below
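A sketch of a single SVGD update with an RBF kernel; grad_log_p stands in for whatever provides scores of the approximate log-posterior (in the paper, the trained NN), and the step size and bandwidth are arbitrary.

    # One SVGD step with an RBF kernel (illustrative). grad_log_p supplies
    # scores of the (approximate) log-posterior, e.g. from the trained NN.
    import torch

    def svgd_step(particles, grad_log_p, step=1e-2, h=1.0):
        # particles: (n, d); grad_log_p(particles) -> (n, d)
        diff = particles[:, None, :] - particles[None, :, :]  # x_j - x_i
        k = torch.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))    # kernel matrix
        drive = k @ grad_log_p(particles)                # kernel-weighted scores
        repulse = -(k[..., None] * diff).sum(0) / h ** 2 # gradient of the kernel
        return particles + step * (drive + repulse) / len(particles)

    # Example with a standard-normal target, whose score is simply -theta:
    # particles = svgd_step(torch.randn(100, 2), lambda th: -th)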

Group equivariant neural posterior estimation

November 25, 2021 Maximilian Dax, Stephen R. Green, Jonathan Gair, Michael Deistler, Bernhard Schölkopf, Jakob H. Macke
Target: posterior Neural Network: normalizing-flows Samples: MCMC Sequential: no
  • Extension of the Neural Posterior Estimation approach with normalizing flows to incorporate equivariant transformations.
  • The method works without changing the NN architecture: instead, the data are transformed before being fed into the NN, with the transformation itself parametrized by a NN.
  • It defines a joint posterior over parameters and transformation, then obtains posterior samples via Gibbs sampling on the two conditionals (neither conditional sampling step requires MCMC, as normalizing flows handle the conditional on parameters and kernel blurring handles the conditional on the transformation).

Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks

December 16, 2021 Marvin Schmitt, Paul-Christian Bürkner, Ullrich Köthe, Stefan T. Radev
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: no
  • Code
  • They add probabilistic structure to learned summary statistics to detect model misspecification with MMD (see the sketch below), e.g. for APT/SNPE-C (Greenberg et al., 2019) or BayesFlow (Radev et al., 2020).
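The sketch below shows the MMD check itself, assuming summaries have already been computed by the network; the biased Gaussian-kernel estimator is a generic choice for brevity, not necessarily the paper's.

    # Squared MMD between learned summaries of observed and simulated data
    # (illustrative; biased Gaussian-kernel estimator for brevity). A large
    # value signals observations outside the distribution seen in training.
    import torch

    def mmd2(s_obs, s_sim, h=1.0):
        # s_obs: (n, d), s_sim: (m, d) summary statistics
        def k(a, b):
            sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return torch.exp(-sq / (2 * h ** 2))
        return (k(s_obs, s_obs).mean() + k(s_sim, s_sim).mean()
                - 2 * k(s_obs, s_sim).mean())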

HNPE: Leveraging Global Parameters for Neural Posterior Estimation

December, 2021 Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: yes
  • Code
  • They extend LFI methods using normalizing flows for Bayesian hierarchical models.

Conditional Simulation Using Diffusion Schrödinger Bridges

February 27, 2022 Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
Target: posterior Neural Network: score-network Samples: direct Sequential: no
  • They use denoising diffusion models on an extended state space, which enables likelihood-free Bayesian inference

Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization

May 31, 2022 Lorenzo Pacchiardi, Ritabrata Dutta
Target: posterior Neural Network: generative-NN Samples: direct Sequential: yes
  • Code
  • Same setup as Ramesh et al. (2021), but the generative network is trained via scoring rule minimization rather than adversarial training; see the energy-score sketch below.
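As an example of such a scoring-rule loss, here is a sketch of the energy score for a single (theta, x) pair, where theta_samples are conditional draws from the generative net; shapes and beta are assumptions.

    # Energy-score loss sketch (illustrative): given m draws from the
    # generative net conditioned on x, score them against the theta that
    # generated x; minimizing the expected score targets the posterior.
    import torch

    def energy_score(theta_samples, theta_true, beta=1.0):
        # theta_samples: (m, d) conditional draws; theta_true: (d,)
        m = theta_samples.shape[0]
        term1 = (theta_samples - theta_true).norm(dim=-1).pow(beta).mean()
        pair = (theta_samples[:, None, :]
                - theta_samples[None, :, :]).norm(dim=-1)
        term2 = pair.pow(beta).sum() / (m * (m - 1))  # off-diagonal mean
        return term1 - 0.5 * term2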

swyft: Truncated Marginal Neural Ratio Estimation in Python

July 19, 2022 Benjamin Kurt Miller, Alex Cole, Christoph Weniger, Francesco Nattino, Ou Ku, Meiert W Grootes
Target: likelihood-ratio Neural Network: any-NN Samples: MCMC, rejection, importance Sequential: yes
  • Created a flexible and general purpose simulation-based inference package which uses Truncated Marginal Neural Ratio Estimation.
  • Toolbox

Adversarial Bayesian Simulation

August 25, 2022 Yuexi Wang, Veronika Rockova
Target: posterior Neural Network: generative-NN Samples: direct Sequential: yes
  • Uses Generative Adversarial Networks to represent the posterior.
  • Similar setup to Ramesh et al. 2021, but uses Wasserstein GANs instead of vanilla GANs and provides more theoretical insights.

Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation

August 29, 2022 Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, Gilles Louppe
Target: likelihood-ratio Neural Network: classifier-NN Samples: MCMC Sequential: no
  • Code
  • Builds on the likelihood-ratio approach of Hermans et al. 2019, introducing an additional loss term (see the sketch below) that leads to conservative (instead of overconfident) posterior estimates when few simulations are available, hence avoiding false conclusions. The learned target converges to the exact posterior as the simulation budget increases.
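A sketch of that loss follows: the standard NRE cross-entropy plus a penalty pushing the classifier towards balance; the regularization weight lam below is an arbitrary stand-in.

    # Balanced NRE loss sketch (illustrative). The penalty encourages
    # E[d(joint)] + E[d(marginal)] = 1, which discourages overconfidence.
    import torch

    bce = torch.nn.functional.binary_cross_entropy_with_logits

    def bnre_loss(logit_joint, logit_marg, lam=100.0):  # lam: assumed weight
        nre = (bce(logit_joint, torch.ones_like(logit_joint))
               + bce(logit_marg, torch.zeros_like(logit_marg)))
        balance = (torch.sigmoid(logit_joint).mean()
                   + torch.sigmoid(logit_marg).mean())
        return nre + lam * (balance - 1.0) ** 2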

Investigating the Impact of Model Misspecification in Neural Simulation-based Inference

September 5, 2022 Patrick Cannon, Daniel Ward, Sebastian M. Schmon
Target: likelihood, posterior, likelihood-ratio Neural Network: normalizing-flows Samples: MCMC, direct Sequential: yes
  • They empirically test the performance of neural likelihood-free inference methods with misspecified models, finding that it is not robust
  • They investigate mitigation strategies such as ensemble posteriors and sharpness-aware minimization

Score Modeling for Simulation-based Inference

September 28, 2022 Tomas Geffner, George Papamakarios, Andriy Mnih
Target: posterior Neural Network: score-network Samples: direct Sequential: no
  • They use diffusion score models to approximate the posterior distribution
  • The network is trained on pairs of parameters and single simulations, but allows sampling from the posterior given an arbitrary number of observations

New Machine Learning Techniques for Simulation-Based Inference: InferoStatic Nets, Kernel Score Estimation, and Kernel Likelihood Ratio Estimation

October 4, 2022 Kyoungchul Kong, Konstantin T. Matchev, Stephen Mrenna, Prasanth Shyamsundar
Target: likelihood-ratio, score Neural Network: ?? Samples: ?? Sequential: ??
  • They introduce three different strategies, based on kernels and a scalar function called the inferostatic potential.

Truncated proposals for scalable and hassle-free simulation-based inference

October 10, 2022 Michael Deistler, Pedro J. Goncalves, Jakob H. Macke
Target: posterior Neural Network: normalizing-flows Samples: direct Sequential: yes
  • Performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches.
  • Code

Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models

October 10, 2022 Louis Sharrock, Jack Simons, Song Liu, Mark Beaumont
Target: likelihood, posterior Neural Network: score-network Samples: direct, MCMC Sequential: yes
  • They introduce sequential algorithms to learn both the likelihood and the posterior using score-based networks.

Robust Neural Posterior Estimation and Statistical Model Criticism

October 12, 2022 Daniel Ward, Patrick Cannon, Mark Beaumont, Matteo Fasiolo, Sebastian M. Schmon
Target: posterior Neural Network: normalizing-flows Samples: MCMC Sequential: no
  • They show Neural Posterior Estimation (NPE) performs poorly under model misspecification.
  • As a remedy, they develop a framework including model criticism and robust inference components, explicitly modelling the discrepancy between simulations and observations.
  • Their framework requires MCMC.
  • Code

Fast Estimation of Bayesian State Space Models Using Amortized Simulation-Based Inference

October 13, 2022 Ramis Khabibullin, Sergei Seleznev
Target: posterior Neural Network: any-NN Samples: direct Sequential: no
  • Concentrates on the setting of Bayesian state-space models, learning some marginals of the posterior directly
  • They use a normal approximation, learning its mean and standard deviation; see the sketch below.
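The sketch below illustrates that normal approximation for a single scalar marginal, with assumed dimensions: a network outputs the mean and log-standard-deviation, trained with the Gaussian negative log-likelihood on simulated pairs.

    # Gaussian-marginal sketch (illustrative): regress data to the mean and
    # log-std of one posterior marginal using the Gaussian NLL as the loss.
    import torch
    import torch.nn as nn

    x_dim = 10                                        # assumed data dimension
    net = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def gaussian_nll(theta_k, x):   # theta_k: (B,) one marginal; x: (B, x_dim)
        mu, log_sig = net(x).unbind(-1)
        return (log_sig
                + 0.5 * ((theta_k - mu) / log_sig.exp()) ** 2).mean()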

Maximum Likelihood Learning of Energy-Based Models for Simulation-Based Inference

October 26, 2022 Pierre Glaser, Michael Arbel, Arnaud Doucet, Arthur Gretton
Target: likelihood Neural Network: any-NN Samples: MCMC Sequential: yes
  • They exploit energy-based models to learn the likelihood, in both amortized and sequential frameworks
  • Their approach is similar to Pacchiardi and Dutta 2022, but relies on an advanced maximum likelihood approach to tune the energy-based model.
  • They are also able to use standard MCMC to sample from the amortized version (in contrast to the MCMC for doubly-intractable targets needed by Pacchiardi and Dutta, 2022, and by the sequential approach).
  • Code

Contrastive Neural Ratio Estimation

November 27, 2022 Benjamin Kurt Miller, Christoph Weniger, Patrick Forré
Target: likelihood-ratio Neural Network: any-NN Samples: MCMC Sequential: no
  • Generalization of Likelihood-free MCMC with Amortized Approximate Ratio Estimators and On Contrastive Learning for Likelihood-free Inference.
  • Multi-class ratio estimation that still returns the exact likelihood-to-evidence ratio at optimum; a simplified sketch follows below
  • Code
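The simplified (InfoNCE-style) multi-class sketch below conveys the flavour; the paper's full NRE-C loss additionally includes an explicit "all independent" class and class weighting, omitted here, and the dimensions are assumptions.

    # Simplified multi-class contrastive sketch (illustrative; NRE-C's extra
    # "independent" class and weighting are omitted). For each x_i, every
    # theta in the batch is a candidate class; the matching theta_i is correct.
    import torch
    import torch.nn as nn

    theta_dim, x_dim = 2, 10                     # assumed toy dimensions
    ratio_net = nn.Sequential(nn.Linear(theta_dim + x_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1))

    def multiclass_loss(theta, x):               # theta: (B, d), x: (B, x_dim)
        B = theta.shape[0]
        th = theta[None, :, :].expand(B, -1, -1)          # candidates per x
        xx = x[:, None, :].expand(-1, B, -1)
        logits = ratio_net(torch.cat([th, xx], dim=-1)).squeeze(-1)  # (B, B)
        return nn.functional.cross_entropy(logits, torch.arange(B))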