Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation
May 20, 2016
Target: posterior
Neural Network: mixture-density
Samples: direct
Sequential: yes
Flexible statistical inference for mechanistic models of neural dynamics
November 6, 2017
Target: posterior
Neural Network: mixture-density
Samples: direct
Sequential: yes
Hierarchical Implicit Models and Likelihood-Free Variational Inference
November 2017
Target: posterior
Neural Network: generative-NN
Samples: direct
Sequential: no
- Trains a generative network for inferring the posterior in hierarchical models.
- For non-hierarchical models, the approach is very similar to GATSBI (Ramesh et al., 2021)
Sequential Neural Likelihood: Fast Likelihood-free Inference with Autoregressive Flows
May 18, 2018
Target: likelihood
Neural Network: normalizing-flows
Samples: MCMC
Sequential: yes
Likelihood-free inference with emulator networks
May 23, 2018
Target: likelihood
Neural Network: deep-ensembles
Samples: MCMC
Sequential: yes
Analyzing Inverse Problems with Invertible Neural Networks
February 2019
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: no
- A different way, compared with BayesFlow (Radev et al., 2020), to train a normalizing flow for amortized posterior inference.
Likelihood-free MCMC with Amortized Approximate Ratio Estimators
March 10, 2019
Target: likelihood-ratio
Neural Network: classifier-NN
Samples: MCMC
Sequential: no
Automatic posterior transformation for likelihood-free inference
May 17, 2019
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: yes
Efficient Amortised Bayesian Inference for Hierarchical and Nonlinear Dynamical Systems
October 2019
Target: posterior
Neural Network: autoencoders
Samples: direct
Sequential: no
- Code
- Uses variational inference to train autoencoders for posterior inference.
The frontier of simulation-based inference
November 4, 2019
Review paper
- They discuss recent advances in LFI, including NN-based methods.
On Contrastive Learning for Likelihood-free Inference
February 10, 2020
Target: likelihood-ratio, posterior
Neural Network: classifier-NN, normalizing-flows
Samples: MCMC, direct
Sequential: yes
- Code
- They unify the ratio estimation approach by Hermans et al. (2019) with the sequential neural posterior formulation of Greenberg et al. (2019).
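- Since many entries in this list share this recipe (classifier-NN targeting the likelihood-ratio, sampled via MCMC), a minimal sketch of the common idea: a classifier trained to distinguish joint pairs (theta, x) from marginal pairs recovers the likelihood-to-evidence ratio through its odds. Names below are illustrative, not any specific paper's code:

```python
def log_ratio(classifier, theta, x):
    # classifier returns the logit of P(joint | theta, x); at the optimum
    # the logit equals log d/(1-d) = log p(x | theta) / p(x).
    return classifier(theta, x)

def log_unnormalized_posterior(classifier, prior, theta, x_o):
    # Target for MCMC: log p(theta | x_o) = log p(theta) + log ratio + const.
    return prior.log_prob(theta) + log_ratio(classifier, theta, x_o)
```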
BayesFlow: Learning complex stochastic models with invertible neural networks
March 13, 2020
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: no
Distortion estimates for approximate Bayesian inference
June 19, 2020
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: no
- They use normalizing flows to build distortion maps that act on the univariate marginals of an approximate posterior, moving them closer to the exact posterior for a specific observation, without ever evaluating the exact posterior.
- Can be applied to any posterior approximation, provided the model can be simulated.
sbi: A toolkit for simulation-based inference
July 17, 2020
Python package
- Code
- A Python toolbox for simulation-based inference methods based on NNs.
- Documentation at this webpage
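- A minimal usage sketch (class names and signatures vary across sbi versions; the simulator and prior below are toy placeholders):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Toy prior and simulator (placeholders for a real mechanistic model).
prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# Train a neural posterior estimator on simulated (theta, x) pairs.
theta = prior.sample((1000,))
x = simulator(theta)
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()

# The resulting posterior is amortized: sample for any observation x_o.
posterior = inference.build_posterior(density_estimator)
samples = posterior.sample((1000,), x=torch.zeros(3))
```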
Error-Guided Likelihood-Free MCMC
October 13, 2020
Target: likelihood-ratio
Neural Network: classifier-NN
Samples: MCMC
Sequential: no
- Enables parameter inference for observations that are a given distance away from the true observation
- Using a distance of 0 recovers vanilla inference
- Useful in applications with high-dimensional observations
Simulation-efficient marginal posterior estimation with swyft: stop wasting your precious time
November 27, 2020
Target: likelihood-ratio
Neural Network: any-NN
Samples: MCMC, rejection, importance
Sequential: yes
- Explored truncation and marginal estimation as options to increase simulation efficiency
- Expanded upon in Truncated Marginal Neural Ratio Estimation (below)
- Code for this version of swyft
Likelihood approximation networks (LANs) for fast inference of simulation models in cognitive neuroscience
December 2, 2020
Target: likelihood
Neural Network: any-NN
Samples: MCMC
Sequential: no
- Code
- They use NNs to directly approximate likelihood estimates obtained by KDE or binned empirical estimates from repeated model simulations.
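- A conceptual sketch of the LAN idea, using a hypothetical simulator and a binned empirical likelihood as the regression target (the paper's architectures and KDE details differ):

```python
import numpy as np
import torch
import torch.nn as nn

def simulate_model(theta, n=5000):
    # Hypothetical stand-in for the cognitive model: n scalar samples.
    return np.random.normal(theta[0], abs(theta[1]) + 0.1, size=n)

# Targets: binned empirical log-likelihoods for each sampled parameter set.
bins = np.linspace(-5, 5, 51)
centers = 0.5 * (bins[:-1] + bins[1:])
rows, targets = [], []
for _ in range(200):
    theta = np.random.uniform(-1, 1, size=2)
    hist, _ = np.histogram(simulate_model(theta), bins=bins, density=True)
    for c, h in zip(centers, hist):
        rows.append([theta[0], theta[1], c])
        targets.append([np.log(h + 1e-6)])

# Regress log-likelihood on (theta, x): the trained net emulates
# log p(x | theta) and can be plugged into MCMC instead of simulating.
inp = torch.tensor(rows, dtype=torch.float32)
tgt = torch.tensor(targets, dtype=torch.float32)
net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    ((net(inp) - tgt) ** 2).mean().backward()
    opt.step()
```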
Score Matched Neural Exponential Families for Likelihood-Free Inference
December 20, 2020
Target: likelihood
Neural Network: any-NN
Samples: MCMC
Sequential: no
Benchmarking Simulation-Based Inference
January 12, 2021
Benchmark repository
- Code
- Benchmark repository for LFI methods.
- They test the performance of LFI methods (including NN-based ones) on a variety of models and data sets.
- Interactive results at this webpage
Sequential Neural Posterior and Likelihood Approximation
February 12, 2021
Target: likelihood, posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: yes
- Code
- They use normalizing flows for both the posterior and the likelihood.
- They also optionally use an additional NN for learning summary statistics at the same time.
Robust and integrative Bayesian neural networks for likelihood-free parameter inference
February 12, 2021
Target: posterior
Neural Network: Bayesian-NN
Samples: MCMC
Sequential: yes
- They bin the parameter values and train a classifier to predict, from data, the bin in which the generating parameter falls
- The classifier is a Bayesian neural network (BNN) trained on a set of parameter-data pairs
- When a new observation arrives, the posterior is estimated by running MCMC over the BNN weights and combining the resulting classifier predictions into a (discretized) posterior distribution.
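- A stripped-down sketch of the discretized-posterior idea, with a plain softmax classifier standing in for the paper's BNN (whose predictions would additionally be averaged over MCMC samples of the weights); all names illustrative:

```python
import numpy as np
import torch
import torch.nn as nn

n_bins = 20
edges = np.linspace(0.0, 1.0, n_bins + 1)

def simulator(theta, n=10):
    # Hypothetical model: n noisy observations per parameter draw.
    return np.random.normal(theta, 0.2, size=n)

# Training pairs: data vector -> bin index of the generating parameter.
theta = np.random.uniform(0, 1, size=5000)
X = torch.tensor(np.stack([simulator(t) for t in theta]), dtype=torch.float32)
y = torch.tensor(np.digitize(theta, edges) - 1)

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, n_bins))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# For a new observation, the class probabilities form a discretized posterior.
x_o = torch.tensor(simulator(0.3), dtype=torch.float32)
posterior_probs = torch.softmax(net(x_o), dim=-1)
```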
Amortized Bayesian model comparison with evidential deep learning
March 2021
Target: model-evidence
Neural Network: classifier-NN
Samples: -
Sequential: no
- Code
- A way to perform model selection using an NN to estimate the model evidence
- Amortized over the different models, so that they do not need to be fitted to the data independently
MINIMALIST: Mutual INformatIon Maximization for Amortized Likelihood Inference from Sampled Trajectories
June 3, 2021
Target: likelihood-ratio
Neural Network: any-NN
Samples: MCMC
Sequential: no
Truncated Marginal Neural Ratio Estimation
July 2, 2021
Target: likelihood-ratio
Neural Network: any-NN
Samples: MCMC, rejection, importance
Sequential: yes
- Introduced a truncation scheme which can be tested for calibration to the prior, despite being sequential. Other sequential methods cannot be tested for calibration.
- Introduced a technique for estimating the necessary partially marginalized posterior distributions, increasing simulation efficiency.
- Code
- Also see general implementation
GATSBI: Generative Adversarial Training for Simulation-Based Inference
September 29, 2021
Target: posterior
Neural Network: generative-NN
Samples: direct
Sequential: yes
- Uses Generative Adversarial Networks to represent the posterior.
- Code
Variational methods for simulation-based inference
September 29, 2021
Target: likelihood, posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: yes
- Code
- They consider the standard likelihood or likelihood-ratio approaches and replace MCMC with variational inference performed via normalizing flows
- This retains the benefits of targeting the likelihood without the need for MCMC
- Related to Wiqvist et al. (2021) above.
Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference
October 1, 2021
Target: likelihood-ratio
Neural Network: classifier-NN
Samples: MCMC
Sequential: no
- Extension of Hermans et al. 2019 (above) which makes it possible to obtain any marginal explicitly
- This allows plotting marginals without the need for posterior sampling
- Code
Variational Likelihood-Free Gradient Descent
November 22, 2021
Target: likelihood-ratio
Neural Network: classifier-NN
Samples: Stein Variational Gradient Descent
Sequential: yes
- Maintains a set of particles that form an approximation to the posterior
- The particles are fitted to the posterior by alternating between NN training and Stein Variational Gradient Descent steps (see the sketch below)
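- A minimal sketch of one SVGD step with an RBF kernel; in the paper's loop the log-density gradient would come from the learned ratio network plus the log prior, with NN retraining interleaved between updates (illustrative, not the paper's implementation):

```python
import torch

def svgd_step(particles, grad_log_p, step_size=0.1):
    """One SVGD update; particles: (n, d), grad_log_p: (n, d) -> (n, d)."""
    n = particles.shape[0]
    diff = particles.unsqueeze(1) - particles.unsqueeze(0)   # diff[i, j] = x_i - x_j
    sq_dists = (diff ** 2).sum(-1)                           # (n, n)
    # Median heuristic for the kernel bandwidth.
    h = (sq_dists.median() / torch.log(torch.tensor(n + 1.0))).clamp(min=1e-5)
    k = torch.exp(-sq_dists / h)                             # RBF kernel matrix
    drive = k @ grad_log_p(particles)                        # pulls toward high density
    repulse = (k.unsqueeze(-1) * diff * 2.0 / h).sum(dim=1)  # kernel-gradient term keeps particles spread
    return particles + step_size * (drive + repulse) / n
```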
Group equivariant neural posterior estimation
November 25, 2021
Target: posterior
Neural Network: normalizing-flows
Samples: MCMC
Sequential: no
- Extension of the Neural Posterior Estimation approach with normalizing flows to incorporate equivariant transformations.
- The method works without changing the NN architecture, instead transforming the data before it is fed into the NN; the transformation is itself parametrized by an NN.
- It defines a joint posterior over parameters and transformation, then obtains posterior samples via Gibbs sampling on the two conditionals (neither conditional sampling step requires MCMC: a normalizing flow is used for the conditional on the parameters, and blurring via a kernel for the conditional on the transformation).
Detecting Model Misspecification in Amortized Bayesian Inference with Neural Networks
December 16, 2021
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: no
- Code
- They add probabilistic structure to learned summary statistics to detect model misspecification with MMD, e.g. for APT/SNPE-C (Greenberg et al., 2019) or BayesFlow (Radev et al., 2020).
HNPE: Leveraging Global Parameters for Neural Posterior Estimation
December 2021
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: yes
- Code
- They extend normalizing-flow-based LFI methods to Bayesian hierarchical models.
Conditional Simulation Using Diffusion Schrödinger Bridges
February 27, 2022
Target: posterior
Neural Network: score-network
Samples: direct
Sequential: no
- They use denoising diffusion models on an extended state space, which allows for likelihood-free Bayesian inference
Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization
May 31, 2022
Target: posterior
Neural Network: generative-NN
Samples: direct
Sequential: yes
- Code
- Same setup as Ramesh et al., but training the generative network via scoring rule minimization rather than adversarial training.
swyft: Truncated Marginal Neural Ratio Estimation in Python
July 19, 2022
Target: likelihood-ratio
Neural Network: any-NN
Samples: MCMC, rejection, importance
Sequential: yes
- Created a flexible and general purpose simulation-based inference package which uses Truncated Marginal Neural Ratio Estimation.
- Toolbox
Adversarial Bayesian Simulation
August 25, 2022
Target: posterior
Neural Network: generative-NN
Samples: direct
Sequential: yes
- Uses Generative Adversarial Networks to represent the posterior.
- Similar setup to Ramesh et al. 2022, but uses Wasserstein GANs instead of vanilla GANs and provides more theoretical insights.
Towards Reliable Simulation-Based Inference with Balanced Neural Ratio Estimation
August 29, 2022
Target: likelihood-ratio
Neural Network: classifier-NN
Samples: MCMC
Sequential: no
- Code
- Builds on the likelihood-ratio approach by Hermans et al. 2019, introducing an additional loss term which leads to conservative (rather than overconfident) posterior estimates with small amounts of data, hence avoiding false conclusions. The learned target converges to the exact posterior as the data size increases (see the sketch below).
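- A sketch of the balancing regularizer added to the usual binary cross-entropy of ratio estimation (d maps (theta, x) to a probability; lam is an illustrative hyperparameter):

```python
import torch
import torch.nn.functional as F

def bnre_loss(d, theta, x, lam=100.0):
    # d maps (theta, x) to a probability in (0, 1).
    theta_marg = theta[torch.randperm(theta.shape[0])]  # break the pairing
    p_joint = d(theta, x)       # should classify joint pairs as 1
    p_marg = d(theta_marg, x)   # should classify marginal pairs as 0
    bce = F.binary_cross_entropy(p_joint, torch.ones_like(p_joint)) \
        + F.binary_cross_entropy(p_marg, torch.zeros_like(p_marg))
    # Balancing condition: E_joint[d] + E_marginal[d] = 1; penalizing
    # deviations pushes the estimator toward conservative posteriors.
    balance = (p_joint.mean() + p_marg.mean() - 1.0) ** 2
    return bce + lam * balance
```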
Investigating the Impact of Model Misspecification in Neural Simulation-based Inference
September 5, 2022
Target: likelihood, posterior, likelihood-ratio
Neural Network: normalizing-flows
Samples: MCMC, direct
Sequential: yes
- They empirically test the performance of neural likelihood-free inference methods with misspecified models, finding that performance is not robust
- They investigate mitigation strategies such as ensemble posteriors and sharpness-aware minimization
Score Modeling for Simulation-based Inference
September 28, 2022
Target: posterior
Neural Network: score-network
Samples: direct
Sequential: no
- They use diffusion score models to approximate the posterior distribution
- The network is trained on pairs of parameters and simulations, but it allows sampling from the posterior conditioned on an arbitrary number of observations
New Machine Learning Techniques for Simulation-Based Inference: InferoStatic Nets, Kernel Score Estimation, and Kernel Likelihood Ratio Estimation
October 4, 2022
Target: likelihood-ratio, score
Neural Network: ??
Samples: ??
Sequential: ??
- They introduce three different strategies, based on kernels and a scalar function called the inferostatic potential.
Truncated proposals for scalable and hassle-free simulation-based inference
October 10, 2022
Target: posterior
Neural Network: normalizing-flows
Samples: direct
Sequential: yes
- Performs sequential inference with truncated proposals, sidestepping the optimisation issues of alternative approaches.
- Code
Sequential Neural Score Estimation: Likelihood-Free Inference with Conditional Score Based Diffusion Models
October 10, 2022
Target: likelihood, posterior
Neural Network: score-network
Samples: direct, MCMC
Sequential: yes
- They introduce sequential algorithms to learn both the likelihood and the posterior using score-based networks.
Robust Neural Posterior Estimation and Statistical Model Criticism
October 12, 2022
Target: posterior
Neural Network: normalizing-flows
Samples: MCMC
Sequential: no
- They show Neural Posterior Estimation (NPE) performs poorly under model misspecification.
- As a remedy, they develop a framework including model criticism and robust inference components, by explicitly modelling the discrepancy between simulations and observations.
- Their framework requires MCMC.
- Code
Fast Estimation of Bayesian State Space Models Using Amortized Simulation-Based Inference
October 13, 2022
Target: posterior
Neural Network: any-NN
Samples: direct
Sequential: no
- Concentrates on the setting of Bayesian state-space models, learning some marginals of the posterior directly
- They use a normal approximation and learn the mean and standard deviation.
Maximum Likelihood Learning of Energy-Based Models for Simulation-Based Inference
October 26, 2022
Target: likelihood
Neural Network: any-NN
Samples: MCMC
Sequential: yes
- They exploit energy-based models to learn the likelihood, in both amortized and sequential frameworks
- Their approach is similar to Pacchiardi and Dutta 2022, but they rely on an advanced maximum-likelihood approach to tune the energy-based model.
- They are also able to exploit standard MCMC for sampling from the amortized version (in contrast to the MCMC for doubly-intractable targets needed by Pacchiardi and Dutta, 2022, and by the sequential approach).
- Code
Contrastive Neural Ratio Estimation
November 27, 2022
Target: likelihood-ratio
Neural Network: any-NN
Samples: MCMC
Sequential: no
- Generalization of Likelihood-free MCMC with Amortized Approximate Ratio Estimators and On Contrastive Learning for Likelihood-free Inference.
- Multi-class ratio estimation, but one that returns the exact likelihood-to-evidence ratio at the optimum
- Code