In cognitive neuroscience, computational modeling can formally adjudicate between theories and affords quantitative fits to behavioral and brain data. In practice, however, the space of plausible generative models considered is sharply limited to those with known likelihood functions. For many models, the lack of a closed-form likelihood impedes standard Bayesian inference. As a result, models are often chosen for convenience rather than validity, even when alternatives might be superior. Likelihood-free methods exist but are limited by their computational cost or their restriction to particular inference scenarios. Here, we propose neural networks that learn approximate likelihoods for arbitrary generative models, enabling fast posterior sampling with only a one-off cost for model simulations that is amortized across all future inference. We show that these methods can accurately recover posterior parameter distributions for a variety of neurocognitive process models. We provide code allowing users to deploy these methods for arbitrary hierarchical model instantiations without further training.
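The amortization idea described above can be sketched in a few lines. The sketch below is not the paper's method: instead of a trained neural network, it uses a simple histogram-based surrogate likelihood over a parameter grid, and a toy Gaussian simulator stands in for a neurocognitive process model whose likelihood we pretend is unavailable. The structure is the same, however: a one-off simulation phase builds an approximate likelihood, which is then reused for cheap Metropolis posterior sampling on any new dataset. All names (`simulate`, `surrogate_loglik`, the grid sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator (stand-in for a process model with no closed-form likelihood):
# data ~ Normal(theta, 1), accessible only through forward simulation.
def simulate(theta, n, rng):
    return rng.normal(theta, 1.0, size=n)

# --- One-off "training" phase (amortized cost) ---
# A histogram density per grid point stands in for the neural network's
# learned likelihood; this simulation cost is paid once, up front.
theta_grid = np.linspace(-3, 3, 61)
bins = np.linspace(-8, 8, 161)
densities = []
for th in theta_grid:
    sims = simulate(th, 100_000, rng)
    hist, _ = np.histogram(sims, bins=bins, density=True)
    densities.append(hist + 1e-12)  # floor to avoid log(0)
densities = np.array(densities)

def surrogate_loglik(theta, data):
    """Approximate log-likelihood of `data` given `theta` via the surrogate."""
    i = np.clip(np.searchsorted(theta_grid, theta), 0, len(theta_grid) - 1)
    j = np.clip(np.digitize(data, bins) - 1, 0, densities.shape[1] - 1)
    return np.log(densities[i, j]).sum()

# --- Fast posterior sampling (random-walk Metropolis) ---
data = simulate(1.0, 500, rng)        # "observed" data; true theta = 1.0
theta, ll = 0.0, surrogate_loglik(0.0, data)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(0, 0.2)
    if abs(prop) <= 3:                # flat prior on [-3, 3]
        ll_prop = surrogate_loglik(prop, data)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    samples.append(theta)

posterior = np.array(samples[1000:])  # discard burn-in
print(posterior.mean())               # concentrates near the true theta
```

Because the surrogate is queried rather than the simulator, a new dataset requires only the cheap sampling loop; swapping the histogram for a neural network is what lets the approach scale to higher-dimensional parameter spaces, as in the paper.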
All code is provided freely and is available at the following links: https://github.com/lnccbrown/lans/tree/master/hddmnn-tutorial, https://github.com/lnccbrown/lans/tree/master/al-mlp and https://github.com/lnccbrown/lans/tree/master/al-cnn.
- Michael J Frank
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
- Valentin Wyart, École normale supérieure, PSL University, INSERM, France
- Received: November 21, 2020
- Accepted: April 1, 2021
- Accepted Manuscript published: April 6, 2021 (version 1)
© 2021, Fengler et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.