Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, and public reviews.
Read more about eLife’s peer review process.
Editors
- Reviewing Editor: Andreea Diaconescu, University of Toronto, Toronto, Canada
- Senior Editor: Michael Frank, Brown University, Providence, United States of America
Reviewer #1 (Public review):
Summary
Behavioural adjustments to different sources of uncertainty remain a hot topic in many fields, including reinforcement learning. The authors present valuable findings suggesting that human participants integrate prior beliefs with sensory evidence to improve their predictions in dynamically changing environments involving perceptual decision-making, pointing to hallmarks of Bayesian inference. Fitting a reduced Bayesian model to participants' choice behaviour reveals that decision-makers overestimate environmental volatility but are reasonably accurate in tracking environmental noise.
Strengths
Using a perceptual decision-making task in which participants were presented with sequences of noisy observations in environments with constant volatility and variable noise, the authors provide solid evidence that reduced Bayesian models can account for participant choice behaviour when their generative parameters are fitted freely. The work nicely complements recent work demonstrating the fitting of a full Bayesian model to human reinforcement learning. The authors' principled, factorial approach to model fitting makes the model comparison exhaustive and highlights the need for further work evaluating the model's performance in environments outside its generative parameters. Overall, the work further highlights the utility of perceptual decision-making tasks for questions about Bayesian inference.
Weaknesses
Although data sharing and reanalysis of data are extremely welcome, particularly considering their utility for open science, the small sample size (N = 29) of the original dataset somewhat restricts the authors' ability to draw more conclusive findings when it comes to deciphering the optimal memory capacity of the fitted models. The relatively small sample size likely also contributes to certain key hypotheses not being confirmed as expected, for example, the expected negative relationship between hazard rate and log(noise). The finding that participants rely on priors to a greater extent in low-noise environments than in high-noise environments may also indicate that they misattribute noise as volatility: higher environmental noise usually obscures the information content of outcomes, and in purely random/noisy sequences it should increase reliance on priors as new sensory evidence becomes unreliable.
Reviewer #2 (Public review):
Summary:
Meijer et al. reanalyze behavioral data from a task in which people made predictions about the next sound in a sequence of localized sounds, with the goal of understanding the computations through which people combine sensory experiences into a prior used for perception. The authors combine basic analyses of the experimental data with model simulations and with the development and fitting of a factorial model set. This set includes a prominent model of change-point detection (the reduced Bayesian model) that has previously been shown to approximate Bayesian inference at a reduced computational cost and to provide a good match to human prediction data. The authors present a number of findings, including a demonstration of key qualitative markers of Bayesian change-point detection, a tendency in humans to over-rely on recent observations, a lack of an inverse relationship between fitted values of the hazard rate and fitted values of noise, support for a number of assumptions in the reduced Bayesian model, and a lack of evidence for reliance on memory systems beyond the extremely minimal requirements of that model.
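For readers unfamiliar with this model class, the sketch below illustrates the kind of delta-rule update used in reduced Bayesian change-point models of the sort described above. It is a minimal illustration under stated assumptions, not the authors' exact implementation; the function name, the uniform outcome range, and the numerical values are assumptions made here for the example.

```python
import numpy as np

def reduced_bayes_step(x, mu, tau, hazard, noise_sd, outcome_range=180.0):
    """One trial of a delta-rule update with change-point detection.

    x             : observed outcome (e.g., a sound location in degrees)
    mu            : current prediction (belief about the generative mean)
    tau           : relative uncertainty about mu, in [0, 1)
    hazard        : assumed probability of a change point on any trial
    noise_sd      : assumed standard deviation of observation noise
    outcome_range : width of the uniform outcome space (an assumption here)
    """
    noise_var = noise_sd ** 2
    # Total predictive variance: observation noise plus uncertainty about the mean.
    pred_var = noise_var / (1.0 - tau)

    # Change-point probability: is x better explained by a fresh (uniform) source
    # or by the current belief?
    lik_change = hazard / outcome_range
    lik_stay = ((1.0 - hazard)
                * np.exp(-0.5 * (x - mu) ** 2 / pred_var)
                / np.sqrt(2 * np.pi * pred_var))
    cpp = lik_change / (lik_change + lik_stay)

    # Learning rate grows with change-point probability and with uncertainty.
    delta = x - mu
    alpha = cpp + (1.0 - cpp) * tau
    mu_new = mu + alpha * delta

    # Update relative uncertainty, mixing the change and no-change possibilities.
    num = (cpp * noise_var
           + (1.0 - cpp) * tau * noise_var
           + cpp * (1.0 - cpp) * (delta * (1.0 - tau)) ** 2)
    tau_new = num / (num + noise_var)
    return mu_new, tau_new, alpha, cpp

# A surprising outcome (far from the current belief) yields a high learning rate.
mu, tau = 90.0, 0.2
for x in [92.0, 88.0, 150.0]:
    mu, tau, alpha, cpp = reduced_bayes_step(x, mu, tau, hazard=0.1, noise_sd=10.0)
    print(f"x={x:6.1f}  cpp={cpp:.2f}  alpha={alpha:.2f}  new belief={mu:6.1f}")
```

In this scheme, the qualitative markers mentioned above fall out directly: outcomes that are improbable under the current belief push the change-point probability, and therefore the learning rate, toward one, so the prior is largely discarded on surprising trials.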
Strengths:
The paper asks an important question and takes a number of useful steps toward answering it. In particular, the factorial model set constructed to examine a number of explicit assumptions in the models typically fit to change-point predictive inference task data was a very useful innovation, and in some cases it showed clearly that assumptions in the model are necessary, or at least better than the proposed alternatives. Notably, the paper develops a notion of memory capacity that allows for a continuum of models differing in their tradeoffs between computational cost and predictive precision. Another strength of the paper is that it relies on data that avoid the sequential biases that can contaminate reported beliefs in more standard predictive inference tasks.
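One way to make such a cost/precision continuum concrete, not necessarily the authors' construction, is a full Bayesian change-point filter whose run-length posterior is truncated to a fixed number of hypotheses. The sketch below is illustrative only; the function name, parameter values, and the truncation rule are assumptions introduced here.

```python
import numpy as np

def truncated_cp_filter(data, hazard=0.1, noise_sd=10.0,
                        prior_mean=90.0, prior_sd=30.0, max_runs=3):
    """Gaussian change-point filter that keeps only the `max_runs` most recent
    run-length hypotheses -- a crude 'memory capacity' knob, for illustration."""
    # Each hypothesis: (weight, posterior mean of the source, posterior variance).
    hyps = [(1.0, prior_mean, prior_sd ** 2)]
    predictions = []
    for x in data:
        # Prediction before seeing x: weighted average over surviving hypotheses.
        predictions.append(sum(w * m for w, m, _ in hyps))

        new_hyps = []
        restart_weight = 0.0
        for w, m, v in hyps:
            pv = v + noise_sd ** 2                       # predictive variance
            lik = (np.exp(-0.5 * (x - m) ** 2 / pv)
                   / np.sqrt(2 * np.pi * pv))            # likelihood of x
            restart_weight += w * lik * hazard           # mass routed to a new run
            # Continue this run: conjugate Gaussian update of the source mean.
            gain = v / pv
            new_hyps.append((w * lik * (1.0 - hazard),
                             m + gain * (x - m), v * (1.0 - gain)))

        # A change point resets the belief about the source to the prior.
        new_hyps.append((restart_weight, prior_mean, prior_sd ** 2))

        # Memory limit: keep only the most recent hypotheses, then renormalise.
        new_hyps = new_hyps[-max_runs:]
        total = sum(w for w, _, _ in new_hyps)
        hyps = [(w / total, m, v) for w, m, v in new_hyps]
    return predictions

# Smaller memory limits are cheaper but less precise; with max_runs=1 only the
# restart hypothesis survives and predictions collapse to the prior, while larger
# limits approach the full Bayesian filter.
rng = np.random.default_rng(0)
sounds = np.concatenate([rng.normal(60, 10, 20), rng.normal(120, 10, 20)])
print(np.round(truncated_cp_filter(sounds, max_runs=2)[-3:], 1))
```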
Weaknesses:
The primary weakness of the paper is that most of the definitive findings reported within it have already been reported elsewhere. That humans increase the influence of surprising outcomes indicative of change points, or, put another way, decrease their reliance on prior information in such cases, has been fairly well established, as has the finding that humans tend to overuse recent outcomes when making predictions. The most novel aspect of the paper, the exploration of reductions of the Bayesian ideal observer that rely on differing memory capacities, yielded results that are somewhat difficult to interpret, particularly because it is not clear that the analyzed task is diagnostic of the memory capacity term in the model, or, if it is, what the qualitative hallmarks of a high- or low-memory-capacity model reduction might be.