Uncertainty-modulated prediction errors in cortical microcircuits

  1. Department of Physiology, University of Bern, Bern, Switzerland

Peer review process

Revised: This Reviewed Preprint has been revised by the authors in response to the previous round of peer review; the eLife assessment and the public reviews have been updated where necessary by the editors and peer reviewers.

Editors

  • Reviewing Editor
    Georg Keller
    Friedrich Miescher Institute, Basel, Switzerland
  • Senior Editor
    Panayiota Poirazi
    FORTH Institute of Molecular Biology and Biotechnology, Heraklion, Greece

Reviewer #2 (Public review):

Summary:

This computational modeling study addresses the observation that variable observations are interpreted differently depending on how much uncertainty an agent expects from its environment. That is, the same mismatch between a stimulus and an expected stimulus would be less significant, and specifically would represent a smaller prediction error, in an environment with a high degree of variability than in one where observations have historically been similar to each other. The authors show that if two different classes of inhibitory interneurons, the PV and SST cells, (1) encode different aspects of a stimulus distribution and (2) act in different (divisive vs. subtractive) ways, and if (3) synaptic weights evolve in a way that causes the impact of certain inputs to balance the firing rates of the targets of those inputs, then pyramidal neurons in layer 2/3 of canonical cortical circuits can indeed encode uncertainty-modulated prediction errors. To achieve this result, SST neurons learn to represent the mean of a stimulus distribution and PV neurons its variance.

The impact of uncertainty on prediction errors is an understudied topic, and this study provides an intriguing and elegant new framework for how this impact could be achieved and what effects it could produce. The ideas here differ from past proposals about how neuronal firing represents uncertainty. The developed theory is accompanied by several predictions for future experimental testing, including the existence of different forms of coding by different subclasses of PV interneurons, which target different sets of SST interneurons (as well as pyramidal cells). The authors are able to point to some experimental observations that are at least consistent with their computational results. The simulations shown demonstrate that if we accept its assumptions, then the authors' theory works very well: SSTs learn to represent the mean of a stimulus distribution, PVs learn to estimate its variance, firing rates of other model neurons scale as they should, and the level of uncertainty automatically tunes the learning rate, so that variable observations are less impactful in a high uncertainty setting.

Strengths:

The ideas in this work are novel and elegant, and they are instantiated in a progression of simulations that demonstrate the behavior of the circuit. The framework used by the authors is biologically plausible and matches some known biological data. The results attained, as well as the assumptions that go into the theory, provide several predictions for future experimental testing. The authors have taken into account earlier review comments to revise their paper in ways that enhance its clarity.

Weaknesses:

One weakness could be that the proposed theory does rely on a fairly large number of assumptions. However, there is at least some biological support for these. Importantly, the authors do lay out and discuss their key assumptions in the Discussion section, so readers can assess their validity and implications for themselves.

Reviewer #4 (Public review):

Summary:

Wilmes and colleagues develop a model for the computation of uncertainty modulated prediction errors based on an experimentally inspired cortical circuit model for predictive processing. Predictive processing is a promising theory of cortical function. An essential aspect of the model is the idea of precision weighting of prediction errors. There is ample experimental evidence for prediction error responses in cortex. However, a central prediction of the theory is that these prediction error responses are regulated by the uncertainty of the input. Testing this idea experimentally has been difficult due to a lack of concrete models. This work provides one such model and makes experimentally testable predictions.

Strengths:

The model proposed is novel and well-implemented. It has sufficient biological accuracy to make useful and testable predictions.

Weaknesses:

One key idea the model hinges on is that stimulus uncertainty is encoded in the firing rate of parvalbumin-positive interneurons. This assumption, however, is rather speculative, and there is no direct evidence for it.

Author response:

The following is the authors’ response to the original reviews.

Reviewer 1 (Public Review):

Summary: Wilmes and colleagues present a computational model of a cortical circuit for predictive processing which tackles the issue of how to learn predictions when different levels of uncertainty are present for the predicted sensory stimulus. When a predicted sensory outcome is highly variable, deviations from the average expected stimulus should evoke prediction errors that have less impact on updating the prediction of the mean stimulus. In the presented model, layer 2/3 pyramidal neurons represent either positive or negative prediction errors, SST neurons mediate the subtractive comparison between prediction and sensory input, and PV neurons represent the expected variance of sensory outcomes. PVs therefore can control the learning rate by divisively inhibiting prediction error neurons such that they are activated less, and exert less influence on updating predictions, under conditions of high uncertainty.
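The division of labour summarised above can be made concrete in a toy rate sketch. This is a hypothetical simplification (arbitrary learning rate, stimulus distribution, and initial rates), not the authors' exact circuit equations: SST activity converges to the mean of the stimulus distribution, PV activity to its variance, and the two L2/3 error-neuron types combine subtractive SST inhibition with divisive PV inhibition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for illustration only.
eta = 0.01                 # interneuron learning rate (assumed)
r_sst, r_pv = 0.0, 1.0     # SST rate ~ mean estimate, PV rate ~ variance estimate

def relu(x):
    return max(x, 0.0)

for _ in range(50_000):
    s = rng.normal(5.0, 2.0)                  # stimulus drawn from N(5, 2^2)
    r_sst += eta * (s - r_sst)                # SSTs learn the mean (subtractive comparison)
    r_pv += eta * ((s - r_sst) ** 2 - r_pv)   # PVs learn the variance of the mismatch
    upe_pos = relu(s - r_sst) / r_pv          # positive UPE neuron: divisive PV inhibition
    upe_neg = relu(r_sst - s) / r_pv          # negative UPE neuron

print(f"SST ~ mean: {r_sst:.2f}, PV ~ variance: {r_pv:.2f}")
```

With these illustrative numbers, the SST rate settles near 5 and the PV rate near 4 (the true variance), so the same mismatch evokes a smaller UPE when the environment is more variable.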

Strengths: The presented model is a very nice solution to altering the learning rate in a modality and context-specific way according to expected uncertainty and, importantly, the model makes clear, experimentally testable predictions for interneuron and pyramidal neuron activity. This is therefore an important piece of modelling work for those working on cortical and/or predictive processing and learning. The model is largely well-grounded in what we know of the cortical circuit.

Weaknesses: Currently, the model has not been challenged with experimental data, presumably because data from an adequate paradigm is not yet available. I therefore only have minor comments regarding the biological plausibility of the model:

Beyond the fact that some papers show SSTs mediate subtractive inhibition and PVs mediate divisive inhibition, the selection of interneuron types for the different roles could be argued further, given existing knowledge of their properties. For instance, is a high PV baseline firing rate, or broad sensory tuning that is often interpreted as a ’pooling’ of pyramidal inputs, compatible with or predicted by the model?

Thank you for this nice suggestion. We added a section to the discussion expanding on this: “The model predicts that the divisive interneuron type, which we here suggest to be the PVs, receives a representation of the stimulus as an input. PVs could be pooling the inputs from stimulus-responsive layer 2/3 neurons to estimate uncertainty. The more the stimulus varies, the larger the variability of the pyramidal neuron responses and, hence, the variability of the PV activity. The broader sensory tuning of PVs (Cottam et al. 2013) is in line with the model insofar as uncertainty modulation could be more general than the specific feature, which is more likely for low-level features processed in primary sensory cortices. PVs were shown to connect more to pyramidal cells with similar feature-tuning (Znamenskiy et al. 2024); this would be in line with the model, as uncertainty modulation should be feature-related. In our model, some SSTs deliver the prediction to the positive prediction error neurons. SSTs are already known to be involved in spatial prediction, as they underlie the effect of surround suppression (Adesnik et al. 2012), in which SSTs suppress the local activity dependent on a predictive surround.”

On a related note, SSTs are thought to primarily target the apical dendrite, while PVs mediate perisomatic inhibition, so the different roles of the interneurons in the model make sense, particularly for negative PE neurons, where a top-down excitatory predicted mean is first subtractively compared with the sensory input, s, prior to division by the variance. However, sensory input is typically thought of as arising ’bottom-up’, via layer 4, so the model may match the circuit anatomy less in the case of positive PE neurons, where the diagram shows ’s’ arising in a top-down manner. Do the authors have a justification for this choice?

We agree that ‘s’ is a bottom-up input and should have been clearer that we do not consider ‘s’ to be a top-down input like the prediction. We hence adjusted the figure correspondingly and added a few clarifying sentences to the manuscript. The reviewer, however, raises an important point, which is not discussed enough: if the bottom-up input ‘s’ comes from L4, how can it be compared in a subtractive manner with the top-down prediction arriving in the superficial layers? In Attinger et al. it was shown that the visual stimulus had subtractive effects on SST neurons. The axonal fibers delivering the stimulus information are hence likely to arrive in the vicinity of the apical dendrites, where SSTs target pyramidal cells. Hence, those axons delivering stimulus information could also target the apical dendrites of pyramidal cells. As the reviewer probably had in mind, L4 input tends to arrive in the somatic layer. However, there are also stimulus-responsive cells in layer 2/3, such that the stimulus information does not need to come directly from L4; it could be relayed via those stimulus-responsive layer 2/3 cells. It has been shown that L2/3→L3 axons are mostly located in the upper basal dendrites and the apical oblique dendrites, above the input from L4 (Petreanu et al., The subcellular organization of neocortical excitatory connections). Hence, stimulus information could arrive on the apical dendrites and be subtractively modulated by SSTs. We would also like to note that the model does not take into account the precise dendritic location of the inputs. The model only assumes that the difference between stimulus and prediction is calculated before the divisive modulation by the variance.

In cortical circuits, assuming a 2:8 ratio of inhibitory to excitatory neurons, there are at least 10 pyramidal neurons to each SST and PV neuron. Pyramidal neurons are also typically much more selective about the type of sensory stimuli they respond to compared to these interneuron classes (e.g., Kerlin et al., 2012, Neuron). A nice feature of the proposed model is that the same interneurons can provide predictions of the mean and variance of the stimulus in a predictor-dependent manner. However, in a scenario where you have two types of sensory stimulus to predict (e.g., two different whiskers stimulated), with pyramidal neurons selective for prediction errors in one or the other, what does the model predict? Would you need specific SST and PV circuits for each type of predicted stimulus?

If we understand correctly, this would be a scenario in which the same context (e.g., sound) is predicting two types of sensory stimulus. In that case, one may need specific SST and PV circuits for the different error neurons selective for prediction errors in these stimuli, depending on how different the predictions are for the two stimuli as we elaborate in the following. The reviewer is raising an important point here and that is why we added a section to the discussion elaborating on it.

We think that there is a reason why interneurons are less selective than pyramidal cells and that this is also a feature in prediction error circuits. Similarly-tuned cells are more connected to each other, because they tend to be activated together, as the stimuli they encode tend to be present in the environment together. Error neurons selective to nearby whiskers are also more likely to receive similar stimulus information, and hence similar predictions. Hence, because nearby whiskers are more likely to be deflected similarly, a circuit structure may have emerged during development such that neurons selective for prediction errors of nearby whiskers receive inputs from the same inhibitory interneurons. In that case, the same SST and PV cells could innervate those different neurons. If, however, the sensory stimuli to be predicted are very different, such that their representations are likely to be located far away from each other, then it also makes sense that the predictions for those stimuli are more diverse, and hence the error neurons selective to these are unlikely to be innervated by the same interneurons.

We added a shorter version of this to the discussion: “The lower selectivity of interneurons in comparison to pyramidal cells could be a feature in prediction error circuits. Error neurons selective to similar stimuli are more likely to receive similar stimulus information, and hence similar predictions. Therefore, a circuit structure may have developed such that prediction error neurons with similar selectivity may receive inputs from the same inhibitory interneurons.”

Reviewer 2 (Public Review):

Summary: This computational modeling study addresses the observation that variable observations are interpreted differently depending on how much uncertainty an agent expects from its environment. That is, the same mismatch between a stimulus and an expected stimulus would be less significant, and specifically would represent a smaller prediction error, in an environment with a high degree of variability than in one where observations have historically been similar to each other. The authors show that if two different classes of inhibitory interneurons, the PV and SST cells, (1) encode different aspects of a stimulus distribution and (2) act in different (divisive vs. subtractive) ways, and if (3) synaptic weights evolve in a way that causes the impact of certain inputs to balance the firing rates of the targets of those inputs, then pyramidal neurons in layer 2/3 of canonical cortical circuits can indeed encode uncertainty-modulated prediction errors. To achieve this result, SST neurons learn to represent the mean of a stimulus distribution and PV neurons its variance.

The impact of uncertainty on prediction errors is an understudied topic, and this study provides an intriguing and elegant new framework for how this impact could be achieved and what effects it could produce. The ideas here differ from past proposals about how neuronal firing represents uncertainty. The developed theory is accompanied by several predictions for future experimental testing, including the existence of different forms of coding by different subclasses of PV interneurons, which target different sets of SST interneurons (as well as pyramidal cells). The authors are able to point to some experimental observations that are at least consistent with their computational results. The simulations shown demonstrate that if we accept its assumptions, then the authors’ theory works very well: SSTs learn to represent the mean of a stimulus distribution, PVs learn to estimate its variance, firing rates of other model neurons scale as they should, and the level of uncertainty automatically tunes the learning rate, so that variable observations are less impactful in a high uncertainty setting.

Strengths: The ideas in this work are novel and elegant, and they are instantiated in a progression of simulations that demonstrate the behavior of the circuit. The framework used by the authors is biologically plausible and matches some known biological data. The results attained, as well as the assumptions that go into the theory, provide several predictions for future experimental testing.

Weaknesses: Overall, I found this manuscript to be frustrating to read and to try to understand in detail, especially the Results section from the UPE/Figure 4 part to the end and parts of the Methods section. I don’t think the main ideas are so complicated, and it should be possible to provide a much clearer presentation.

For me, one source of confusion is the comparison across Figure 1EF, Figure 2A, Figure 3A, Figure 4AB, and Figure 5A. All of these are meant to be schematics of the same circuit (although with an extra neuron in Figure 5), yet other than Figures 1EF and 4AB, no two are the same! There should be a clear, consistent schematic used, with identical labeling of input sources, neuron types, etc. across all of these panels.

We changed all figures to make them more consistent and pointed out that we consider subparts of the circuit.

The flow of the Results section overall is clear until the “Calculation of the UPE in Layer 2/3 error neurons” and Figure 4, where I find that things become significantly more confusing. The mention of NMDA and calcium spikes comes out of the blue, and it’s not clear to me how this fits into the authors’ theory. Moreover: Why would this property of pyramidal cells cause the PV firing rate to increase as stated? The authors refer to one set of weights (from SSTs to UPE) needing to match two targets (weights from s to UPE and weights from mean representation to UPE); how can one set of weights match two targets? Why do the authors mention “out-of-distribution detection’ here when that property is not explored later in the paper? (see also below for other comments on Figure 4)

We agree that the introduction of NMDA and calcium spikes was too short and understand that it was confusing. We therefore modified and expanded the section. To answer the two specific questions: First, why would this property of pyramidal cells cause the PV firing rate to increase as stated? This property of pyramidal cells does not cause the PV firing rate to increase. When, for example, the mean input to positive error neurons increases, the PVs receive higher stimulus input on average, which is not compensated by the inhibitory prediction (which is still at the old mean), such that the PV firing rate increases. Due to the nonlinear integration in PVs, the firing rate can increase a lot and inhibit the error neurons strongly. If the error neurons integrate the difference nonlinearly, they compensate for the increased inhibition by PVs. In Figure 5, we show that a circuit in which error neurons exhibit a dendritic nonlinearity matches an idealised circuit in which the PVs perfectly represent the variance. We modified the text to clarify this.

Second, how can one set of weights match two targets? In our model, one set of weights does not need to match two targets. We apologise that this was written in such a confusing way. In positive error neurons, the inhibitory weights from the SSTs need to match the excitatory weights from the stimulus, and in negative error neurons, the inhibitory weights from the SSTs need to match the excitatory weights from the prediction. The weights in positive and negative circuits do not need to be the same. So, on a particular error neuron, the inhibition needs to match the excitation to maintain EI balance. Given experimental evidence for EI balance and heterosynaptic plasticity, we think that this constraint is biologically achievable. The inhibitory and excitatory synapses that need to match are targeting the same postsynaptic neuron and could hence have access to their postsynaptic effect. We modified the text to be clearer. Finally, we omitted the mention of out-of-distribution detection; see our reply below.

Coming back to one of the points in the previous paragraph: How realistic is this exact matching of weights, as well as the weight matching that the theory requires in terms of the weights from the SSTs to the PVs and the weights from the stimuli to the PVs? This point should receive significant elaboration in the discussion, with biological evidence provided. I would not advocate for the authors’ uncertainty prediction theory, despite its elegant aspects, without some evidence that this weight matching occurs in the brain. Also, the authors point out on page 3 that unlike their theory, “...SSTs can also have divisive effects, and PVs can have subtractive effects, dependent on circuit and postsynaptic properties”. This should be revisited in the Discussion, and the authors should explain why these effects are not problematic for their theory. In a similar vein, this work assumes the existence of two different populations of SST neurons with distinct UPE (pyramidal) targets. The Discussion doesn’t say much about any evidence for this assumption, which should be more thoroughly discussed and justified.

These are very important points; we agree that the biological plausibility of the model’s predictions should be discussed, and we hence expanded the discussion with three new paragraphs:

To enable the comparison between predictions and sensory information via subtractive inhibition, we pointed out that the weights of those inputs on the postsynaptic neuron need to match. This essentially means that there needs to be a balance of excitatory and inhibitory inputs. Such an EI balance has been observed experimentally (Tan and Wehr, 2009), and it has previously been suggested that error responses are the result of breaking this EI balance (Hertäg and Sprekeler, 2020; Barry and Gerstner, 2024). Heterosynaptic plasticity is a possible mechanism to achieve EI balance (Field et al. 2020). For example, spike pairing in pre- and postsynaptic neurons induces long-term potentiation at co-activated excitatory and inhibitory synapses, with the degree of inhibitory potentiation depending on the evoked excitation (D’amour and Froemke, 2015), which can normalise EI balance (Field et al. 2020).

In the model we propose, SSTs should be subtractive and PVs divisive. However, SSTs can also be divisive and PVs subtractive, dependent on circuit and postsynaptic properties (Seybold et al. 2015, Lee et al. 2012, Dorsett et al. 2021). This does not necessarily contradict our model: circuits in which SSTs are divisive and PVs subtractive could implement a different function, as not all pyramidal cells are error neurons. Hence, our model suggests that error neurons which can calculate UPEs should have physiological properties similar to those of the layer 2/3 cells observed in the study by Wilson et al. 2012.

Our model further posits the existence of two distinct subtypes of SSTs in positive and negative error circuits. Indeed, SST is expressed by a large population of interneurons, which can be further subdivided into many different subtypes. There is, for example, a type called SST44, which was shown to respond specifically when the animal corrects a movement (Green et al. 2023). Our proposal is hence aligned with the observation of functionally specialised subtypes of SSTs.

Finally, I think this is a paper that would have been clearer if the equations had been interspersed within the results. Within the given format, I think the authors should include many more references to the Methods section, with specific equation numbers, where they are relevant throughout the Results section. The lack of clarity is certainly made worse by the current state of the Methods section, where there is far too much repetition and poor ordering of material throughout.

We implemented the reviewer’s detailed and helpful suggestions on how to improve the ordering and other aspects of the methods section and now either intersperse the equations within the results or refer to the relevant equation number from the Methods section within the Results section.

Reviewer 3 (Public Review):

Summary: The authors proposed a normative principle for how the brain’s internal estimate of an observed sensory variable should be updated during each individual observation. In particular, they propose that the update size should be inversely proportional to the variance of the variable. They then proposed a microcircuit model of how such an update can be implemented, in particular incorporating two types of interneurons and their subtractive and divisive inhibition onto pyramidal neurons. One type should represent the estimated mean while another represents the estimated variance. The authors used simulations to show that the model works as expected.

Strengths: The paper addresses two important issues: how uncertainty is represented and used in the brain, and the role of inhibitory neurons in neural computation. The proposed circuit and learning rules are simple enough to be plausible. They also work well for the designated purposes. The paper is also well-written and easy to follow.

Weaknesses: I have concerns with two aspects of this work.

(1) The optimality analysis leading to Eq (1) appears simplistic. The learning setting the authors describe (estimating the mean of a stationary Gaussian variable from a stream of observations) is a very basic problem in the online learning/streaming algorithm literature. In this setting, the real “optimal” estimate is simply the arithmetic average of all samples seen so far. This can be implemented in an online manner with µ̂_t = µ̂_{t−1} + (s_t − µ̂_{t−1})/t. This is optimal in the sense that the estimator is always the maximum likelihood estimator given the samples seen up to time t. On the other hand, doing gradient descent only converges towards the MLE estimator after a large number of updates. Another critique is that while Eq (1) assumes an estimator of the mean (µ̂), it assumes that the variance is already known. However, in the actual model, the variance also needs to be estimated, and a more sophisticated analysis thus needs to take into account the uncertainty of the variance estimate and so on. Finally, the idea that the update should be inverse to the variance is connected to the well-established idea in neuroscience that more evidence should be integrated over when uncertainty is high. For example, in models of two-alternative forced choices it is known to be optimal to have a longer reaction time when the evidence is noisier.
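The online arithmetic-average estimator the reviewer refers to can be written in a few lines; at every step the running value equals the sample mean, i.e. the maximum likelihood estimate for a stationary Gaussian:

```python
def running_mean(samples):
    """Incremental arithmetic mean: mu_t = mu_{t-1} + (s_t - mu_{t-1}) / t.

    After processing t samples, mu equals the average of those t samples,
    so each new observation carries weight 1/t."""
    mu = 0.0
    for t, s in enumerate(samples, start=1):
        mu += (s - mu) / t
    return mu

print(running_mean([2.0, 4.0, 6.0]))  # → 4.0, the arithmetic average
```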

We agree with the reviewer that the simple example we gave was not ideal, as it could have been solved much more elegantly without gradient descent, and the reviewer correctly pointed out that our solution was not even optimal. We now present a better example in Figure 7, where the mean of the Gaussian variable is not stationary. Indeed, we did not intend to assume that the Gaussian variable is stationary, as we had in mind that the environment can change and hence also the Gaussian variable. If the mean is constant over time, it is indeed optimal to use the arithmetic mean. However, if the mean changes after many samples, then the maximum likelihood estimator would be very slow to adapt to the new mean, because t is large and each new stimulus only has a small impact on the estimate. If the mean changes, uncertainty modulation may be useful: if the variance was small before, and the mean changes, then the resulting big error will influence the change in the estimate much more, such that we can more quickly learn the new mean. A combination of the two mechanisms would probably be ideal. We use gradient descent here, because not all optimisation problems the brain needs to solve are that simple. The problem with converging only after a large number of updates is a general problem of the algorithm. Here, we propose how the brain could estimate uncertainty to achieve the uncertainty modulation observed in inference and learning tasks in behavioural studies. To give a more complex example, we present in a new Figure 8 how a hierarchy of UPE circuits can be used for uncertainty-based integration of prior and sensory information, similar to Bayes-optimal integration.
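The adaptation argument in this reply can be illustrated with a small simulation (all numbers here are hypothetical): after a mean shift late in the stream, the 1/t running average barely moves, while a constant-rate (gradient-descent-style) estimator keeps tracking.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stream: the mean jumps from 0 to 5 after 1000 samples.
samples = np.concatenate([rng.normal(0.0, 0.5, 1000),
                          rng.normal(5.0, 0.5, 200)])

mu_mle, mu_gd, eta = 0.0, 0.0, 0.05  # eta is an assumed constant learning rate
for t, s in enumerate(samples, start=1):
    mu_mle += (s - mu_mle) / t   # running arithmetic mean (MLE if stationary)
    mu_gd += eta * (s - mu_gd)   # constant-rate update

print(f"after the shift: running-average estimate {mu_mle:.2f}, "
      f"constant-rate estimate {mu_gd:.2f}")
```

With these numbers, the running average ends up near 0.8 (each late sample weighted by ~1/1200), while the constant-rate estimator sits near the new mean of 5.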

Yes, indeed, there is well-known behavioural evidence, we would like to thank the reviewer for pointing out this connection to two-alternative forced choice tasks. We now cite this work. Our contribution is not on the already established computational or algorithmic level, but the proposal of a neural implementation of how uncertainty could modulate learning. The variance indeed needs to be estimated for optimal mean updating. That means that in the beginning, there will be non-optimal updating until the variance is learned. However, once the variance is learned, mean-updating can use the learned variance. There may be few variance contexts but many means to be learned, such that variance contexts can be reused. In any case, this is a problem on the algorithmic level, and not so much on the implementational level we are concerned with.

(2) While the incorporation of different inhibitory cell types into the model is appreciated, it appears to me that the computation performed by the circuit is not novel. Essentially the model implements a running average of the mean and a running average of the variance, and gates updates to the mean with the inverse variance estimate. I am not sure about how much new insight the proposed model adds to our understanding of cortical microcircuits.

We here suggest an implementation for how uncertainty could modulate learning via influencing prediction error computation. Our model can explain how humans could estimate uncertainty and weight prior versus sensory information accordingly. The focus of our work was not to design a better algorithm for mean and variance estimation, but rather to investigate how specialised prediction error circuits in the brain can implement these operations to provide new experimental hypotheses and predictions.
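As a reference point for the weighting of prior versus sensory information mentioned above, precision-weighted (Bayes-optimal) cue combination for Gaussians, which the hierarchy of UPE circuits is proposed to approximate, takes this textbook form (this is the standard formula, not the authors' circuit equations):

```python
def integrate(mu_prior, var_prior, s, var_s):
    """Combine a Gaussian prior N(mu_prior, var_prior) with a noisy
    observation s of variance var_s. The weight on the sensory sample
    grows as the prior becomes less reliable (higher variance)."""
    w = var_prior / (var_prior + var_s)              # weight on the observation
    mu_post = mu_prior + w * (s - mu_prior)          # precision-weighted mean
    var_post = var_prior * var_s / (var_prior + var_s)  # posterior variance
    return mu_post, var_post

print(integrate(0.0, 1.0, 2.0, 1.0))  # → (1.0, 0.5): equal reliabilities, midpoint
```

When the prior is three times less reliable than the observation, the posterior mean moves three quarters of the way toward the sample, which is the behaviour uncertainty-modulated prediction errors are meant to produce.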

Reviewer 1 (Recommendations For The Authors):

Clarity and conciseness are a strength of this manuscript, but a more comprehensive explanation could improve the reader’s understanding in some instances. This includes the NMDA-based nonlinearity of pyramidal neuron activation - I am a little unclear exactly what problem this solves and how (alongside the significance of 5D and E).

We agree that the introduction of the NMDA-based nonlinearity was too short and understand that it was confusing. We therefore modified and expanded the section, where we introduce the dendritic nonlinearity of the error neurons.

Page 5: I think there is a ’positive’ and ’negative’ missing from the following sentence: ’the weights from the SSTs to the UPE neurons need to match the weights from the stimulus s to the UPE neuron and from the mean representation to the UPE neuron, respectively.’

Thanks for pointing that out! We changed the sentence to be more clear to the following: “To ensure a comparison between the stimulus and the prediction, the inhibition from the SSTs needs to match the excitation it is compared to in the UPE neurons: In the positive PE circuit, the weights from the SSTs representing the prediction to the UPE neurons need to match the weights from the stimulus s to the UPE neurons. In the negative PE circuit, the weights from SSTs representing the stimulus to the negative UPE neurons need to match the weights from the mean representation to the UPE neurons, respectively.”

Reviewer 2 (Recommendations For The Authors):

Related to the first point above: I don’t feel that the authors adequately explained what the “s” and “a” information (e.g., in Figures 2A, 3A) represent, where they are coming from, what neurons they impact and in what way (and I believe Fig. 3A is missing one “a” label). I think they should elaborate more fully on these key, foundational details for their theory. To me, the idea of starting from the PV, SST, and pyramidal circuit, and then suddenly introducing the extra R neuron in Figure 5, just adds confusion. If the R neuron is meant to be the source, in practice, of certain inputs to some of the other cell types, then I think that should be included in the circuit from the start. Perhaps a good idea would be to start with two schematics, one in the form of Figure 5A (but with additional labeling for PV, SST) and one like Figure 1EF (but with auditory inputs as well), with a clear indication that the latter is meant to represent a preliminary, reduced form of the former that will be used in some initial tests of the performance of the PV, SST, UPE part of the circuit. Related to the Methods, I can also give a list of some specific complaints (in LaTeX):

(1) φ and φ_PV are used in equations (10), (11), so they should be defined there, not many equations later.

Thank you, we changed that.

(2) β and 1 − β appear without justification or explanation in (11); they are finally defined and derived several pages later.

Thank you, we now define it right at the beginning.

(3) Equations (10)-(12) should be immediately followed by information about plasticity, rather than deferring that.

That’s a great idea. We changed it. Now the synaptic dynamics are explained together with the firing rate dynamics.

(4) After the rate equations (10)-(12) and weight change equations (23)-(25) are presented, the same equations are simply repeated in the “Explanation of the synaptic dynamics” subsection.

We agree that this was suboptimal. We moved the explanation of the synaptic dynamics up and removed the repetition.

(5) In the circuit model (13)-(19), it’s not clear why r_R shows up in the SST+ and PV− equations vs. r_s in the PV+ and SST− equations. Moreover, r_s is not even defined! Also, I don’t see why w_PV+,R shows up in the equation for r_PV−.

We added more explanation to the Methods section as to why the neurons receive these inputs and renamed r_s to s, which is defined. The “+” in w_PV+,R was a typo. Thank you for spotting that.

(6) The authors should only number those equations that they will reference by number. Even more importantly, there are many numbers such as (20), (26), (32), (39) that are just floating there without referring to an equation at all.

Thank you for spotting that. We corrected this.

(7) The authors fail to specify what r_a is in Figure 8. Moreover, it seems strange to me that w_PV,a approaches σ rather than w_PV,a · r_a approaching σ, since φ_PV is a function of w_PV,a · r_a.

You are right that w_PV,a · r_a should approach σ. However, since r_a is either 1 or 0, indicating the presence or absence of the cue, and only w_PV,a is plastic and changing, w_PV,a approaches σ.
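A minimal numerical sketch of this convergence (the learning rule, rates, and parameters here are illustrative stand-ins, not the exact equations of the manuscript; the mean μ is taken as already learned by the SSTs):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 5.0, 0.8        # hypothetical stimulus distribution
eta = 0.01                  # learning rate
w = 0.1                     # w_PV,a, initialised small
r_a = 1.0                   # cue present on every trial

def phi_pv(x):
    # quadratic PV activation function (a model assumption)
    return x**2

for _ in range(20_000):
    s = rng.normal(mu, sigma)
    r_pv = phi_pv(w * r_a)                     # PV rate = (w * r_a)^2
    # delta-rule-like update: the PV rate tracks the squared deviation,
    # so at the fixed point w^2 ~ E[(s - mu)^2] = sigma^2, i.e. w ~ sigma
    w += eta * r_a * ((s - mu)**2 - r_pv)

# because r_a is 1 (cue present) or 0 (cue absent) and only w is plastic,
# the weight itself converges to sigma
assert abs(w - sigma) < 0.2
```

Since r_a ∈ {0, 1}, w_PV,a · r_a equals w_PV,a whenever the cue is present, which is why the two statements coincide in practice.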

(8) I don’t understand the rationale for the authors to introduce equation (30) when they already had plasticity equations earlier. What is the relation of (30), (31) to (24)?

It is the same equation. In Eq. (30) we introduce simpler symbols to give a better overview of the equations. Eq. (31) is equal to Eq. (30), with r_PV replaced by its steady state.

(9) η is omitted from (33) - it won’t affect the final result but should be there.

We fixed this.

I have many additional specific comments and suggestions, some related to errors that really should have been caught before manuscript submission. I will present these based on the order in which they arise in the manuscript.

(1) In the abstract, the mention of layer 2/3 comes out of nowhere. Why this layer specifically? Is this meant to be an abstract/general cortical circuit model or to relate to a specific brain area? (Also watch for several minor grammatical issues in the abstract and later.)

Thank you for pointing this out. We now mention that the observed error neurons can be found in layer 2/3 of diverse brain areas. It is meant to be a general cortical circuit model independent of brain area.

(2) In par. 2 of the introduction, I find sentences 3-4 to be confusing and vague. Please rewrite what is meant more directly and clearly.

We tried to improve those sentences.

(3) Results subtitle 1: “suggests” → “suggest”

Thank you.

(4) Be careful to use math font whenever variables, such as a and N, are referenced (e.g., the use of text-font a instead of math-font a at the bottom of pg. 2).

We agree and checked the entire manuscript.

(5) Ref. to Fig. 1B bottom pg. 2 should be Fig. 1CD. The panel order in the figure should then be changed to match how it is referenced.

We fixed it and matched the ordering of the text with the ordering of the figure.

(6) Fig. 2C and 3E captions mention std but this is not shown in the figures - should be added.

The std is shown in the figures; it is just very small.

(7) Please clarify the relation of Figure 2C to 2F, and Figure 3F to 3H.

We colour-coded the points in 2F that correspond to the bars in 2C. We did the same for 3F and 3H.

(8) Figures 3E,3F appear to be identical except for the y-axis label and inclusion of std in 3F. Either more explanation is needed of how these relate or one should be cut.

The difference is that 3E shows the PV activity driven by the sound cue alone, in the absence of a whisker stimulus, whereas 3F shows the PV activity driven by both the sound cue and the whisker stimuli. We state this more clearly now.

(9) Bottom of pg. 4: clarify that a quadratic φP V is a model assumption, not derived from results in the figure.

We added that we assume this.

(10) When k is referenced in the caption of Figure 4, the reader has no idea what it is. More substantially, most panels of Figure 4 are not referenced in the paper. I don’t understand what point the authors are trying to make here with much of this figure. Indeed, since the claim is that the uncertainty prediction should be based on division by σ², why aren’t the numerical values for UPE rates much larger, since σ gets so small? The authors also fail to give enough details about the simulations done to obtain these plots; presumably these are after some sort of (unspecified) convergence, and in response to some sort of (unspecified) stimulus? Coming back to k, I don’t understand why k > 2 is used in addition to k = 2. The text mentions – even italicizes – “out-of-distribution detection”, but this is never mentioned elsewhere in the paper and seems to be outside the true scope of the work (and not demonstrated in Figure 4). Sticking with k = 2 would also allow the authors to simply use (·)^k below (10), rather than the awkward positive-part function that they have used now.

We now introduce the equation for the error neurons as Eq. 3 within the text, such that k is defined before the caption. This equation also explains why the numerical values do not become much larger: divisive inhibition, unlike mathematical division, cannot lead to multiplication in neurons. To ensure this, we add 1 to the denominator.

We show the error neuron responses to stimuli deviating from the learned mean after learning the mean and variance. The deviation is indicated either on the x-axis or in the legend depending on the plot. We now more explicitly state that these plots are obtained after learning the mean and the variance.

We removed the mention of “out-of-distribution detection”, as a detailed treatment would indeed be outside the scope.
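To make the boundedness argument concrete, here is a sketch of an uncertainty-modulated error rate with the +1 in the denominator (the exact functional form in the manuscript may differ; this only illustrates why the rate stays bounded as σ → 0):

```python
import numpy as np

def upe_rate(s, mu, var, k=2):
    # Uncertainty-modulated prediction error with divisive inhibition.
    # The +1 in the denominator keeps the rate bounded as var -> 0:
    # divisive inhibition can at most remove inhibition, it cannot multiply.
    pe = np.maximum(s - mu, 0.0)      # positive part (rectified error)
    return pe**k / (1.0 + var)

# Same deviation under different uncertainty: smaller variance gives a
# larger UPE, but the rate saturates at pe**k instead of diverging
assert upe_rate(6.0, 5.0, var=4.0) < upe_rate(6.0, 5.0, var=0.01)
assert upe_rate(6.0, 5.0, var=0.0) == 1.0   # the bounded maximum for pe = 1
```

With plain division by var the rate would blow up as var → 0; the +1 caps it at pe**k.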

(11) Page 5, please clarify what is meant by “weights from the sound...”. You have introduced mathematical notation - use it so that you can be precise.

We added the mathematical notation, thank you!

(12) Figure 5D: legend has 5 entries but the figure panel only plots 4 quantities.

The SST firing rate was hidden below the R firing rate. We hence omitted the SST firing rate and its legend entry.

(13) Figure 5: I don’t understand what point is being made about NMDA spikes. The text for Figure 5 refers to NMDA spikes in Figure 4, but nothing was said about NMDA spikes in the text for Figure 4 nor shown in Figure 4 itself.

We were referring to the nonlinearity in the activation function of UPEs in Figure 4. We changed the text to clarify this point.

(14) Figure 6: It is too difficult to distinguish the black and purple curves even on a large monitor. Also, the authors fail to define what they mean by “MM” and also do not define the quantities Y+ and Y− that they show. Another confusing aspect is that the model has PV+ and PV− neurons, so why doesn’t the figure?

Thank you for the comment. We changed the colour for better visibility, replaced the Upsilons with UPE (we changed the notation at some point and forgot to change it in the figure), and defined MM, which is the mismatch stimulus that causes error activity. We did not distinguish between PV+ and PV− in the plot because their activity is the same on average; we plotted the activity of the PV+ and now mention that PV+ is shown as the representative.

(15) Also Figure 6: The authors do not make it clear in the text whether these are simulation results or cartoons. If the latter, please replace this with actual simulation results.

They are actual simulation results. We clarified this in the text.

(16) This work assumes the existence of two different populations of SST neurons with distinct UPE (pyramidal) targets. The Discussion doesn’t say much about any evidence for this assumption, which should be more thoroughly discussed and justified.

We now discuss this in more detail in the discussion as mentioned in our response to the public review.

(17) Par. 2 of the discussion refers to “Bayesian” and “Bayes-optimal” several times. Nothing was said earlier in the paper about a Bayesian framework for these results and it’s not clear what the authors mean by referring to Bayes here. This paragraph needs editing so that it clearly relates to the material of the results section and its implications.

We added an additional results section (the last section with Figure 8) on integrating prior and sensory information based on their uncertainties, which is also the case for Bayes-optimal integration, and show that our model can reproduce the central tendency effect, which is a hallmark of Bayes-optimal behaviour.

Reviewer 3 (Recommendations For The Authors):

See public review. I think the gradient-descent type of update the authors do in Equation (1) could be more useful in a more complicated learning scenario where the MLE has no closed form and has to be computed with gradient-based algorithms.

We responded in detail to your points in our point-by-point response to the public review.
