# Abstract

The central tendency bias, or contraction bias, is a phenomenon where the judgment of the magnitude of items held in working memory appears to be biased towards the average of past observations. It is assumed to be an optimal strategy by the brain, and is commonly thought of as an expression of the brain’s ability to learn the statistical structure of sensory input. On the other hand, recency biases such as serial dependence are also commonly observed, and are thought to reflect the content of working memory. Recent results from an auditory delayed comparison task in rats suggest that both biases may be more related than previously thought: when the posterior parietal cortex (PPC) was silenced, both short-term and contraction biases were reduced. By proposing a model of the circuit that may be involved in generating the behavior, we show that a volatile working memory content susceptible to shifting to the past sensory experience – producing short-term sensory history biases – naturally leads to contraction bias. The errors, occurring at the level of individual trials, are sampled from the full distribution of the stimuli, and are not due to a gradual shift of the memory towards the sensory distribution’s mean. Our results are consistent with a broad set of behavioral findings and provide predictions of performance across different stimulus distributions, timings, and delay intervals, as well as of neuronal dynamics in putative working memory areas. Finally, we validate our model by performing a set of human psychophysics experiments on an auditory parametric working memory task.

**eLife assessment**

This is an **important** study about the mechanisms underlying our capacity to represent and hold recent events in memory, and how these representations are influenced by past experiences. A key aspect of the model put forward here is the presence of discrete jumps in neural activity within the posterior parietal region of the cortex. The strength of evidence is largely **solid**, with some weaknesses noted in the methodology. Both reviewers suggested ways in which this aspect of the model could be tested further to resolve conflicts with previously published experimental results, in particular the study by Papadimitriou et al. 2014 in the Journal of Neurophysiology.

# Introduction

A fundamental question in neuroscience relates to how brains efficiently process the statistical regularities of the environment to guide behavior. Exploiting such regularities may be of great value to survival in the natural environment, but may lead to biases in laboratory tasks. Repeatedly observed across species and sensory modalities is the central tendency (“contraction”) bias, where performance in perceptual tasks seemingly reflects a shift of the working memory representation towards the mean of the sensory history [1–6]. Equally common are sequential biases, either attractive or repulsive, towards the immediate sensory history [7, 5, 8–14, 6, 15].

It is commonly thought that these biases arise from disparate mechanisms: contraction bias is widely held to result from learning the statistical structure of the environment, whereas serial biases are thought to reflect the contents of working memory [16, 17]. Recent evidence, however, challenges this picture: our recent study of a parametric working memory task discovered that the rat posterior parietal cortex (PPC) plays a key role in modulating contraction bias [7]. When the region is optogenetically inactivated, contraction bias is attenuated. Intriguingly, however, this is also accompanied by the suppression of bias effects induced by the recent history of the stimuli, suggesting that the two phenomena may be interrelated. Interestingly, other behavioral components, including working memory of immediate sensory stimuli (in the current trial), remain intact. In another recent study with humans, a double dissociation was reported between three cohorts of subjects: subjects on the autistic spectrum (ASD) expressed reduced biases due to recent statistics, whereas dyslexic subjects (DYS) expressed reduced biases towards long-term statistics, relative to neurotypical subjects (NT) [16]. Finally, further complicating the picture is the observation of not only attractive serial dependence, but also repulsive biases [18]. It is as yet unclear how such biases occur and what mechanisms underlie these history dependencies.

These findings beg the question of whether contraction bias and the different types of serial biases are actually related, and if so, how. Although normative models have been proposed to explain these effects [19, 18, 16], the neural mechanisms and circuits underlying them remain poorly understood. We address this question through a model of the putative circuit involved in giving rise to the behavior observed in [7]. Our model consists of two continuous (bump) attractor sub-networks, representing a working memory (WM) module and the PPC. Given the finding that PPC neurons carry more information about stimuli presented during previous trials, the PPC module integrates inputs over a longer timescale relative to the WM network, and incorporates firing rate adaptation.

We find that both contraction bias and short-term sensory history effects emerge in the WM network as a result of inputs from the PPC network. Importantly, we see that these effects do not necessarily occur due to separate mechanisms. Rather, in our model, contraction bias emerges as a statistical effect of errors in working memory, occurring due to the persisting memory of stimuli shown in the preceding trials. The integration of this persisting memory in the WM module competes with that of the stimulus in the current trial, giving rise to short-term history effects. We conclude that contraction biases in such paradigms may not necessarily reflect explicit learning of regularities or an “attraction towards the mean” on individual trials. Rather, contraction bias may be an effect emerging at the level of average performance, when in each trial errors are made according to recent sensory experiences whose distribution follows that of the input stimuli. Furthermore, using the same model, we also show that the biases towards long-term (short-term) statistics inferred from the performance of ASD (DYS) subjects [16] may actually reflect short-term biases extending more or less into the past with respect to NT subjects, challenging the hypothesis of a double-dissociation mechanism. Last, we show that as a result of neuronal integration of inputs and adaptation, in addition to attraction effects occurring on a short timescale, repulsion effects are observed on a longer timescale [18].

We make specific predictions on neuronal dynamics in the PPC and downstream working memory areas, as well as on how contraction bias may be altered upon manipulations of the sensory stimulus distribution and of the inter-trial and inter-stimulus delay intervals. We show agreement between the model and our previous results in humans and rats. Finally, we test our model predictions by performing new human auditory parametric working memory tasks. The data are in agreement with our model, and not with an alternative Bayesian model.

# 1 Results

## 1.1 The PPC as a slower integrator network

Parametric working memory (PWM) tasks involve the sequential comparison of two graded stimuli that differ along a physical dimension and are separated by a delay interval of a few seconds (Fig. 1 A and B) [20, 7, 19]. A key feature emerging from these studies is contraction bias, where the averaged performance is as if the memory of the first stimulus progressively shifts towards the center of a prior distribution built from past sensory history (Fig. 1 C). Additionally, biases towards the most recent sensory stimuli (immediately preceding trials) have also been documented [7, 5].

In order to investigate the circuit mechanisms by which such biases may occur, we use two identical one-dimensional continuous attractor networks to model WM and PPC modules. Neurons are arranged according to their preferential firing locations in a continuous stimulus space, representing the amplitude of auditory stimuli. Excitatory recurrent connections between neurons are symmetric and a monotonically decreasing function of the distance between the preferential firing fields of neurons, allowing neurons to mutually excite one another; inhibition, instead, is uniform. Together, both allow a localized bump of activity to form and be sustained (Fig. 1 D and E) [21–29]. Both networks have free boundary conditions. Neurons in the WM network receive inputs from neurons in the PPC coding for the same stimulus amplitude (Fig. 1 D). Building on experimental findings [30–32, 7], we designed the PPC network such that it integrates activity over a longer timescale compared to the WM network (Sect. 3.1). Moreover, neurons in the PPC are equipped with neural adaptation, which can be thought of as a threshold that dynamically follows the activation of a neuron over a longer timescale.
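The connectivity just described can be sketched in a minimal simulation of one such bump attractor module. All parameter values below (network size, timescales, connection strengths, saturation) are illustrative assumptions of ours, not the paper’s fitted parameters:

```python
import numpy as np

# Minimal sketch of one 1D continuous (bump) attractor module.
N = 100                        # neurons tiling the stimulus axis [0, 1]
x = np.linspace(0.0, 1.0, N)   # preferred stimulus of each neuron
tau, dt = 0.01, 0.001          # integration timescale and Euler step (s)

# Symmetric excitation decaying with the distance between preferred
# fields, plus uniform inhibition; free (non-periodic) boundaries.
J_E, J_I, sigma_w = 24.0, 4.0, 0.05
W = (J_E * np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma_w) ** 2) - J_I) / N

def step(r, I_ext):
    """Euler step of tau dr/dt = -r + phi(W r + I_ext), phi = clip to [0, 1]."""
    return r + dt / tau * (-r + np.clip(W @ r + I_ext, 0.0, 1.0))

s1 = 0.3                       # stimulus amplitude, mapped onto the line
I_ext = 3.0 * np.exp(-0.5 * ((x - s1) / 0.03) ** 2)

r = np.zeros(N)
for _ in range(200):           # stimulus on: a bump forms at s1
    r = step(r, I_ext)
for _ in range(2000):          # delay: input off, the bump self-sustains
    r = step(r, 0.0)

bump_center = float(np.sum(x * r) / np.sum(r))   # center-of-mass readout
```

Local excitation keeps the bump alive after the input is removed, while the uniform inhibition prevents activity from spreading across the whole line; this is the self-sustained memory of *s*_{1} during the delay.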

To simulate the parametric WM task, at the beginning of each trial, the network is provided with a stimulus *s*_{1} for a short time via an external current *I*_{ext} as input to a set of neurons (see Tab. 1). Following *s*_{1}, after a delay interval, a second stimulus *s*_{2} is presented (Fig. 1 E). The pair (*s*_{1}, *s*_{2}) is drawn from the stimulus set shown in Fig. 1 B, where all pairs are equally distant from the diagonal *s*_{1} = *s*_{2}, and are therefore of equal nominal discrimination, or difficulty. The stimuli (*s*_{1}, *s*_{2}) are co-varied in each trial so that the task cannot be solved by relying on only one of the stimuli [33]. As in the study in Ref. [7] using an interleaved design, across consecutive trials, the inter-stimulus delay intervals are randomized, sampled uniformly from 2, 6, and 10 seconds.

We additionally include psychometric pairs (indicated in the box in Fig. 1 B) where the distance to the diagonal, and hence the discrimination difficulty, is varied. The task is a binary comparison task that aims at classifying whether *s*_{1} *< s*_{2} or vice versa. In order to solve the task, we record the activity of the WM network at two time points: slightly before and after the onset of *s*_{2} (Fig. 1 E). We repeat this procedure across many different trials, and use the recorded activity to assess performance (see Sect. 3.2 for details). Importantly, at the end of each trial, the activity of both networks is not re-initialized; rather, the state of the network at the end of one trial serves as the initial network configuration for the next.
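The two-time-point readout described above can be sketched as follows; the center-of-mass decoder and the Gaussian toy bumps are illustrative assumptions, not the paper’s readout:

```python
import numpy as np

def decode(r, x):
    """Center-of-mass (population-vector) estimate of the bump position."""
    return float(np.sum(x * r) / np.sum(r))

def choice_s1_lt_s2(r_pre, r_post, x):
    """Respond "s1 < s2" if the decoded position moved up at s2 onset:
    r_pre is WM activity just before s2, r_post just after."""
    return decode(r_post, x) > decode(r_pre, x)

# Toy usage: bump at the remembered s1 = 0.3, then at s2 = 0.5
x = np.linspace(0.0, 1.0, 100)
r_pre = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)
r_post = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
resp = choice_s1_lt_s2(r_pre, r_post, x)   # network reports "s1 < s2"
```

Because the comparison only uses the decoded positions before and after *s*_{2} onset, any displacement of the pre-*s*_{2} bump (i.e. of the memory of *s*_{1}) directly translates into choice errors.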

## 1.2 Contraction bias and short-term stimulus history effects as a result of PPC network activity

Probing the WM network performance on psychometric stimuli (Fig. 1 B, purple box, 10% of all trials) shows that the comparison behavior is not error-free, and that the psychometric curves (different colors) differ from the optimal step function (Fig. 2 A, green dashed line). Additionally, previous work has shown that the performance worsens as a function of the inter-stimulus delay interval [34, 7]. In our model, errors are caused by the displacement of the activity bump in the WM network, due to the inputs from the PPC network. These displacements of the WM activity bump can result in different outcomes: by displacing it *away* from the second stimulus, they either do not affect the performance or, if noise is present, improve it (Fig. 2 B right panel, “Bias+”). Conversely, the performance can suffer if the displacement of the activity bump is *towards* the second stimulus (Fig. 2 B left panel, “Bias−”). Performance on stimulus pairs that are equally distant from the *s*_{1} = *s*_{2} diagonal can be similarly impacted, and the network produces a pattern of errors that is consistent with contraction bias: performance is at its minimum for stimulus pairs in which *s*_{1} is either largest or smallest, and at its maximum for stimulus pairs in which *s*_{2} is largest or smallest (Fig. 2 C, left panel) [19, 35, 7, 36, 37]. These results are consistent with the performance of humans and rats on the auditory task, as previously reported (Fig. 2 C, middle and right panels, data from Akrami et al. 2018 [7]).

Can the same circuit also give rise to short-term sensory history biases [7, 38]? We analyzed the fraction of trials the network response was “*s*_{1} *< s*_{2}” in the current trial conditioned on stimulus pairs presented in the previous trial, and we found that the network behavior is indeed modulated by the preceding trial’s stimulus pairs (Fig. 2 D, panel 1). We quantified these history effects as well as how many trials back they extend to. We computed the bias by plotting, for each particular pair (of stimuli) presented at the current trial, the fraction of trials the network response was “*s*_{1} *< s*_{2}” as a function of the pair presented in the previous trial minus the mean performance over all previous trial pairs (Fig. 2 D, panel 2) [7]. Independent of the current trial, the previous trial exerts an “attractive” effect, expressed by the negative slope of the line: when the previous pair of stimuli is small, *s*_{1} in the current trial is, on average, misclassified as smaller than it actually is, giving rise to the attractive bias in the comparison performance; the converse holds true when the previous pair of stimuli happens to be large. These effects extend to two trials back, and are consistent with the performance of humans and rats on the auditory task (Fig. 2 D, panels 3-6, data from Akrami et al 2018 [7]).
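The one-trial-back bias measure described above can be sketched as follows; the synthetic choices and the quantile binning are illustrative assumptions, not the paper’s analysis code:

```python
import numpy as np

rng = np.random.default_rng(0)

def history_bias(prev_stim, resp_s1_lt_s2, n_bins=4):
    """Group trials by the previous trial's stimulus magnitude and
    return, per group, the fraction of "s1 < s2" responses minus the
    overall mean. A decreasing profile across bins indicates an
    attractive bias (small previous stimuli inflate "s1 < s2")."""
    edges = np.quantile(prev_stim, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(prev_stim, edges[1:-1]), 0, n_bins - 1)
    overall = resp_s1_lt_s2.mean()
    return np.array([resp_s1_lt_s2[idx == b].mean() - overall
                     for b in range(n_bins)])

# Synthetic data with a built-in attractive effect: a small previous
# stimulus makes "s1 < s2" responses more likely.
prev = rng.uniform(0.0, 1.0, size=5000)
p_resp = 0.5 - 0.3 * (prev - 0.5)
resp = (rng.random(5000) < p_resp).astype(float)
bias = history_bias(prev, resp)
```

The negative slope across bins in `bias` is the signature plotted in Fig. 2 D, panel 2: the response is pulled towards the previous trial’s stimuli.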

It has been shown that inactivating the PPC, in rats performing the auditory delayed comparison task, markedly reduces the magnitude of contraction bias, without impacting non-sensory biases [7]. We assay the causal role of the PPC in generating the sensory history effects as well as contraction bias by weakening the connections from the PPC to the WM network, mimicking the inactivation of the PPC. In this case, we see that the performance for the psychometric stimuli is greatly improved (yellow curve, Fig. 2 E, top panel), consistent also with the inactivation of the PPC in rodents (yellow curve, Fig. 2 E, bottom panel, data from Akrami et al 2018 [7]). Performance is improved also for all pairs of stimuli in the stimulus set (Fig. S3 A). The breakdown of the network response in the current trial conditioned on the specific stimulus pair preceding it reveals that the previous trial no longer exerts a notable modulating effect on the current trial (Fig. S3 B). Quantifying this bias by subtracting the mean performance over all of the previous pairs reveals that the attractive bias is virtually eliminated (yellow curve, Fig. 2 F, left panel), consistent with findings in rats (Fig. 2 F, right panel, data from Akrami et al 2018 [7]).

Together, our results suggest a possible circuit through which both contraction bias and short-term history effects in a parametric working memory task may arise. The main features of our model are two continuous attractor networks, both integrating the same external inputs, but operating over different timescales. Crucially, the slower one, a model of the PPC, includes neuronal adaptation, and provides input to the faster one, intended as a WM circuit. In the next section, we show how the slow integration and firing rate adaptation in the PPC network give rise to the observed effects of sensory history.

## 1.3 Multiple timescales at the core of short-term sensory history effects

The activity bumps in the PPC and WM networks undergo different dynamics, due to the different timescales with which they integrate inputs, the presence of adaptation in the PPC, and the integration of incoming inputs from the PPC to the WM network. The WM network integrates inputs over a shorter timescale, and therefore its activity bump follows the external input with high fidelity (Fig. 3 A (purple bumps) and B (purple line)). The PPC network, instead, has a longer integration timescale, and therefore fails to sufficiently integrate the input to displace its bump to the location of a new stimulus on any single trial. This is mainly due to the competition between the input from the recurrent connections sustaining the bump and the external stimuli that are integrated elsewhere: if the former is stronger, the bump is not displaced; otherwise, it is displaced to the new location. The external input, as well as the presence of adaptation (Fig. S1 B and C), induces a small continuous drift of the activity bump that is already present from the previous trials (Fig. 3 A (pink bumps) and B (pink line)) [39]. The build-up of adaptation in the PPC network, combined with the global inhibition from other neurons receiving external inputs, can extinguish the bump in that location (see also Fig. S1 for more details). Following this, the PPC network can make a transition to an incoming stimulus position (which may be either *s*_{1} or *s*_{2}), and a new bump is formed. The resulting dynamics in the PPC is a mixture of slow drift over a few trials, followed by occasional jumps (Fig. 3 A).
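The adaptive threshold invoked here can be sketched as a low-pass filter of each neuron’s rate; the timescale and rate values below are illustrative assumptions:

```python
import numpy as np

# Firing-rate adaptation as a dynamic threshold theta that tracks the
# rate r on a slow timescale tau_a. While the bump stays in one
# location, theta rises there until recurrent input can no longer
# sustain the bump, eventually extinguishing it.
tau_a, dt = 5.0, 0.001   # adaptation timescale and Euler step (s)

def adapt_step(theta, r):
    """One Euler step of tau_a dtheta/dt = r - theta."""
    return theta + dt / tau_a * (r - theta)

theta = 0.0
for _ in range(10_000):           # 10 s with the neuron active at r = 1
    theta = adapt_step(theta, 1.0)
# after 2 tau_a, theta has risen to ~(1 - exp(-2)) of its asymptote
```

The same slow relaxation also governs recovery once the bump has moved away, which is what later produces the long-lasting repulsive component discussed in Sect. 1.5.3.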

As a result of these dynamics, relative to the WM network, the activity bump in the PPC represents the current trial’s stimuli in a smaller fraction of trials, and stimuli presented in the previous trial in a larger fraction of trials (Fig. 3 C). This yields short-term sensory history effects in our model (Fig. 2 D and E), as input from the PPC leads to the displacement of the WM bump to other locations (Fig. 3 D). Given that neurons in the WM network integrate this input, once it has built up sufficiently, it can surpass the self-sustaining inputs from the recurrent connections in the WM network. The WM bump, then, can move to a new location, given by the position of the bump in the PPC (Fig. 3 D). As the input from the PPC builds up gradually, the probability of bump displacement in WM increases over time. This in turn leads to an increased probability of contraction bias (Fig. 3 E) and short-term history (one-trial back) biases (Fig. 3 F) as the inter-stimulus delay interval increases.

Additionally, a non-adapted input from the PPC has a larger likelihood of displacing the WM bump. This is highest immediately following the formation of a new bump in the PPC, or in other words, following a “bump jump” (Fig. 3 F). As a result, one can reason that those trials immediately following a jump in the PPC are the ones that should yield the maximal bias towards stimuli presented in the previous trial. We therefore separated trials according to whether or not a jump has occurred in the PPC in the preceding trial (we define a jump to have occurred if the bump location across two consecutive trials in the PPC is displaced by an amount larger than the typical width of the bump (Sect. 3.1)). In line with this reasoning, only the set that included trials with jumps in the preceding trial yields a one-trial back bias (Fig. 3 G).
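The jump criterion stated above can be sketched directly; the example positions and width are illustrative:

```python
import numpy as np

# A "jump" between consecutive trials is a displacement of the decoded
# PPC bump larger than the typical bump width (Sect. 3.1).
def detect_jumps(bump_pos, bump_width):
    """bump_pos: decoded PPC bump location on each trial (1D array).
    Returns, for each trial t >= 1, True if a jump occurred from t-1."""
    return np.abs(np.diff(bump_pos)) > bump_width

# Trials 0->1 and 2->3 drift slightly; trial 1->2 jumps.
jumps = detect_jumps(np.array([0.30, 0.31, 0.80, 0.79]), bump_width=0.1)
```

Splitting trials on this boolean array reproduces the conditioning used for Fig. 3 G: only the post-jump subset carries the one-trial-back bias.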

Removing neuronal adaptation entirely from the PPC network further corroborates this result. In this case, the network dynamics shows a very different behavior: the activity bump in the PPC undergoes a smooth drift (Fig. S2 A) [40]. In this regime, there are no jumps in the PPC, and the activity bump corresponds to the stimuli presented in the previous trial in a smaller fraction of trials (Fig. S2 B), relative to when adaptation is present (Fig. 3 B). As a result, no short-term history effects can be observed (Fig. S2 C), even though a strong contraction bias persists (Fig. S2 D).

As in the study in Ref.[7], we can further study the impact of the PPC on the dynamics of the WM network by weakening the weights from the PPC to the WM network, mimicking the inactivation of PPC (Fig. 2 E and F, Fig. S3 A and B). Under this manipulation, the trajectory of the activity bump in the WM network immediately before the onset of the second stimulus *s*_{2} closely follows the external input, consistent with an enhanced WM function (Fig. S3 C and D).

The drift-jump dynamics in our model of the PPC gives rise to short-term (notably one- and two-trial back) sensory history effects in the performance of the WM network. In addition, we observe an equally salient contraction bias (bias towards the sensory mean) in the WM network’s performance, increasing with the delay period (Fig. 3 E). However, we find that the activity bump in both the WM and the PPC network corresponds to the mean over all stimuli in only a small fraction of trials, as expected by chance (Fig. 3 B, see Sect. 4.1 for how this is calculated). Rather, the bump is located most often at the current trial’s stimulus and, to a lesser extent, at the location of stimuli presented in the previous trial. As a result, contraction bias in our model cannot be attributed to the representation of the running sensory average in the PPC. In the next section, we show how contraction bias arises as an averaged effect, when single-trial errors occur due to short-term sensory history biases.

## 1.4 Errors are drawn from the marginal distribution of stimuli, giving rise to contraction bias

In order to illustrate the statistical origin of contraction bias in our network model, we consider a mathematical scheme of its performance (Fig. 4 A). In this simple formulation, we assume that the first stimulus, held in working memory, is volatile. As a result, in a fraction *ϵ* of the trials it is susceptible to replacement with another stimulus *ŝ* (by the input from the PPC, which has a given distribution *p*_{m} (Fig. 4 B)). However, this replacement does not always lead to an error, as evidenced by Bias− and Bias+ trials (i.e. those trials in which the performance is affected, negatively and positively, respectively (Fig. 2 B)). For each stimulus pair, the probability to make an error, *p*_{e}, is the integral of *p*_{m} over values lying on the wrong side of the *s*_{1} = *s*_{2} diagonal (Fig. 4 C). For instance, for stimulus pairs below the diagonal (Fig. 4 C, blue squares), the trial outcome is erroneous only if *ŝ* is displaced above the diagonal (red part of the distribution). As one can see, the area above the diagonal increases as *s*_{1} increases, giving rise to a gradual increase in error rates (Fig. 4 C). This mathematical model can capture the performance of the attractor network model, as can be seen through the fit of the network performance, when using the bump distribution in the PPC as *p*_{m}, and *ϵ* as a free parameter (see Eq. 9 in Sect. 4.2, Fig. 4 D, E).
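The error probability just described can be sketched numerically; the grid, the uniform *p*_{m}, and *ϵ* = 0.5 below are illustrative stand-ins for the fitted quantities:

```python
import numpy as np

# With probability eps, the memory of s1 is replaced by s_hat ~ p_m; the
# trial is then wrong only if s_hat lands on the wrong side of the
# s1 = s2 diagonal (ties at s_hat == s2 are ignored in this sketch).
def error_prob(s1, s2, p_m, grid, eps):
    """p_m: probability mass of the replacement distribution on grid."""
    if s1 > s2:                       # correct answer is "s1 > s2"
        wrong = p_m[grid < s2].sum()  # s_hat below s2 flips the choice
    else:                             # correct answer is "s1 < s2"
        wrong = p_m[grid > s2].sum()
    return eps * wrong

grid = np.linspace(0.0, 1.0, 101)
p_m = np.full(101, 1.0 / 101)         # uniform marginal, for illustration
p_err = error_prob(0.6, 0.4, p_m, grid, eps=0.5)   # ~0.5 * P(s_hat < 0.4)
```

Sliding the pair up along a line parallel to the diagonal increases the wrong-side mass, reproducing the gradual accumulation of errors with increasing *s*_{1}.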

Can this simple statistical model also capture the behavior of rats and humans (Fig. 2 C)? We carried out the same analysis for rats and humans, replacing the bump location distribution of the PPC with the marginal distribution of the stimuli provided in the task, based on the observation that the former is well-approximated by the latter (Fig. 4 B). In this case, we see that the model roughly captures the empirical data (Fig. 4 F and G), with the addition of another parameter *δ* that accounts for the lapse rate. Interestingly, such a “lapse” also occurs in the network model, as seen in the small number of errors for pairs of stimuli where *s*_{2} is smallest and largest (Fig. 4 E). This occurs because of the drift present in the PPC network, which eventually, for long enough delay intervals, causes the bump to arrive at the boundaries of the attractor, resulting in an error.

This simple analysis implies that contraction bias in the WM network in our model is not the result of the representation of the mean stimulus in the PPC, but is an effect that emerges as a result of the PPC network’s sampling dynamics, mostly from recently presented stimuli. Indeed, a “contraction to the mean” hypothesis only provides a general account of which pairs of stimuli should benefit from a better performance and which should suffer, but does not explain the gradual accumulation of errors upon increasing (decreasing) *s*_{1}, for pairs below (above) the *s*_{1} = *s*_{2} diagonal [35, 36, 7]. Notably, it cannot explain why the performance in trials with pairs of stimuli in which *s*_{2} is most distant from the mean stands to benefit the most from it. All together, our model suggests that contraction bias may be a simple consequence of errors occurring at single trials, driven by inputs from the PPC that follow a distribution similar to that of the external input (Fig. 4 B).

## 1.5 Model predictions

### 1.5.1 The stimulus distribution impacts the pattern of contraction bias through its cumulative distribution

In our model, the pattern of errors is determined by the cumulative distribution of stimuli from the correct decision boundary *s*_{1} = *s*_{2} to the left (right) for pairs of stimuli below (above) the diagonal (Fig. 4 C and Fig. S4 A). This implies that a stimulus set in which this distribution is deformed yields different predictions for the gradient of performance across stimulus pairs. A distribution that is symmetric (Fig. S4 A) yields equal performance for pairs below and above the *s*_{1} = *s*_{2} diagonal (blue and red lines) when *s*_{1} is at the mean (as well as the median, given the symmetry of the distribution). A distribution that is skewed, instead, yields equal performance when *s*_{1} is at the median, for both pairs below and above the diagonal. For a negatively skewed distribution (Fig. S4 B) or a positively skewed distribution (Fig. S4 C), the performance curves for pairs of stimuli below and above the diagonal show different concavity. For a distribution that is bimodal, the performance as a function of *s*_{1} resembles a saddle, with equal performance for intermediate values of *s*_{1} (Fig. S4 D). These results indicate that although the performance is quantitatively shaped by the form of the stimulus distribution, it remains a monotonic function of *s*_{1} under a wide variety of manipulations of the distribution. This is a property of the cumulative function, and may underlie the ubiquity of contraction bias under different experimental conditions.
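The monotonicity prediction follows directly from the cumulative-mass formulation and can be sketched for arbitrary marginals; the grids, distributions, and *ϵ* below are illustrative:

```python
import numpy as np

# Percent correct as a function of s1, for pairs at a fixed distance d
# below the diagonal (s2 = s1 - d), under an arbitrary marginal p_m:
# perf(s1) = 1 - eps * P(s_hat < s2), non-increasing in s1 for any p_m.
def perf_below_diag(s1_vals, d, p_m, grid, eps):
    return np.array([1.0 - eps * p_m[grid < s1 - d].sum()
                     for s1 in s1_vals])

grid = np.linspace(0.0, 1.0, 201)
s1_vals = np.linspace(0.3, 0.9, 7)

uniform = np.full(201, 1.0 / 201)
bimodal = np.exp(-0.5 * ((grid - 0.2) / 0.05) ** 2) \
        + np.exp(-0.5 * ((grid - 0.8) / 0.05) ** 2)
bimodal /= bimodal.sum()

curve_u = perf_below_diag(s1_vals, 0.1, uniform, grid, eps=0.5)
curve_b = perf_below_diag(s1_vals, 0.1, bimodal, grid, eps=0.5)
# both curves decrease monotonically with s1, whatever the marginal
```

Because a cumulative distribution is non-decreasing, the predicted performance curve is monotonic regardless of how the marginal is deformed; only its shape (concavity, plateaus) changes.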

We compare the predictions of our simple statistical model to those of the Bayesian model in [38], outlined in Sect. 4.3. We compute the predicted performance of an ideal Bayesian decision maker, using a value of the uncertainty in the representation of the first stimulus (*σ* = 0.12) that yields the best fit with the performance of the statistical model (where the free parameter is *ϵ* = 0.5, Fig. S4 A, B, C, and D, second panels). Across all types of distributions (used as priors, in the Bayesian model), our model makes predictions that differ from those of the Bayesian model; the main difference is the monotonic dependence of performance on *s*_{1} in our model (Fig. S4 A, B, C, and D, second panels). The biggest difference can be seen with a prior in which pairs of stimuli with extreme values are much more probable than middle-range values. Indeed, in the case of a bimodal prior, for pairs of stimuli where our model predicts a worse-than-average performance (Fig. S4 D, third panel), the Bayesian model predicts a very good performance (Fig. S4 D, fourth panel).
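The Bayesian alternative can be sketched as a posterior-mean observer; the grid, the uniform prior, and *σ* = 0.12 below are illustrative choices, not the paper’s full model in Sect. 4.3:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ideal observer: the memory of s1 is a noisy sample r1 ~ N(s1, sigma);
# the decision compares the posterior mean of s1 under the prior with s2.
def bayes_estimate(r1, prior, grid, sigma):
    like = np.exp(-0.5 * ((grid - r1) / sigma) ** 2)
    post = prior * like
    return float(np.sum(grid * post) / np.sum(post))

def bayes_perf(s1, s2, prior, grid, sigma, n=2000):
    """Monte Carlo fraction of correct comparisons for one pair."""
    r1 = s1 + sigma * rng.normal(size=n)
    correct = [(bayes_estimate(r, prior, grid, sigma) < s2) == (s1 < s2)
               for r in r1]
    return float(np.mean(correct))

grid = np.linspace(0.0, 1.0, 201)
uniform = np.full(201, 1.0 / 201)
pc = bayes_perf(0.3, 0.7, uniform, grid, sigma=0.12)   # an easy pair
```

In this scheme the estimate is pulled towards high-prior regions on every trial, so a bimodal prior concentrates estimates near the modes; this is what drives the qualitative divergence from our statistical model for extreme-valued pairs.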

Do human subjects perform as predicted by our model (Fig. 5 A)? We tested 34 human subjects on the auditory modality of the task. The experimental protocol was identical to the one used in Ref. [7]. Briefly, participants were presented with two sounds separated by a delay interval that varied across trials (randomly selected from 2, 4 and 6 seconds). After the second sound, participants were required to decide which sound was louder by pressing the appropriate key. We tested two groups of participants on two stimulus distributions: a negatively skewed and a bimodal distribution (Fig. 5 A, see Sect. 3.3 for more details). Participants performed the task with a mean accuracy of approximately 75%, across stimulus distribution groups and across pairs of stimuli (Fig. 5 B). The experimental data were compatible with the predictions of our model. First, for the negatively skewed stimulus distribution condition, we observe a shift of the point of equal performance to the right, relative to a symmetric distribution (Fig. 5 C, left panel). For the bimodal condition, such a shift is not observed, as predicted by our model (Fig. 5 C, right panel). Second, the monotonic behavior of the performance as a function of *s*_{1} also holds, across both distributions (Fig. 5 C). Our model provides a simple explanation: the percent correct on any given pair is given by the probability that, given a shift in the working memory representation, this representation still does not affect the outcome of the trial (Fig. 4 C). This probability is given by the cumulative of the probability distribution of working memory representations, for which we assume the marginal distribution of the stimuli to be a good approximation (Fig. 4 B). As a result, performance is a monotonic function of *s*_{1}, independent of the shape of the distribution, while the same does not always hold true for the Bayesian model (Fig. 5 C).

We further fit the performance of each participant, using both our statistical model and the Bayesian model, by minimizing the mean squared error loss (MSE) between the empirical curve and the model, with *ϵ* and *σ* as free parameters (Fig. 5 C), respectively (for the Bayesian model, we used the marginal distribution of the stimuli *p _{m}* as the prior). Across participants in both distributions, our statistical model yielded a better fit of the performance, relative to the Bayesian model (Fig. 5 D, left panel). We further fit the mean performance across all participants within a given distribution group, and similarly found that the statistical model yields a better fit, using the MSE as a goodness of fit metric (Fig. 5 D, right panel).

Finally, in order to better understand the parameters that affect the occurrence of errors in human participants, we analyzed the fraction of trials in which subjects responded *s*_{1} *< s*_{2}, conditioned on the specific pair of stimuli presented in the current and the previous trial (Fig. 6 A, right panel). Compatible with previous results [7], we found attractive history effects across both stimulus distributions (Fig. 6 A, left panel, and B), explaining the origin of errors. The magnitude of the attractive bias was modulated by the delay interval: the larger the delay, the larger the bias (Fig. 6 A, left panel, and B), leading also to a larger contraction bias for larger intervals (Fig. 6 C and D).

### 1.5.2 A prolonged inter-trial interval improves average performance and reduces attractive bias

If errors are due to the persistence of activity resulting from previous trials, what then, is the effect of the inter-trial interval (ITI)? In our model, a shorter ITI (relative to the default value of 5*s* used in Figs. 2 and 3) results in a worse performance and vice versa (Fig. 7 A, B, C). This change in performance is reflected in reduced biases toward the previous trial (Fig. 7 D and E). A prolonged ITI allows for a drifting bump to vanish due to the effect of adaptation: as a result, the performance improves with increasing ITI and conversely, worsens with a shorter ITI.

### 1.5.3 Working memory is attracted towards short-term and repelled from long-term sensory history

Although contraction bias is robustly found in different contexts, surprisingly similar tasks, such as perceptual estimation tasks, sometimes highlight opposite effects, i.e. repulsive effects [41–43]. Interestingly, recent studies have found both effects in the same experiment: in a study of visual orientation estimation [18], it has been found that attraction and repulsion have different timescales; while perceptual decisions about orientation are attracted towards recently perceived stimuli (timescale of a few seconds), they are repelled from stimuli that are shown further back in time (timescale of a few minutes). Moreover, in the same study, they find that the long-term repulsive bias is spatially specific, in line with sensory adaptation [44–46] and in contrast to short-term attractive serial dependence [18]. Given that adaptation is a main feature of our model of the PPC, we sought to determine whether such repulsive effects can emerge from the model. We extended the calculation of the bias to up to ten trials back, and quantified the slope of the bias as a function of the previous trial stimulus pair. We observe robust repulsive effects appear after the third trial back in history, and up to six trials back (Fig. 7 F). In our model, both short-term attractive effects and longer-term repulsive effects can be attributed to the multiple timescales over which the networks operate. The short-term attractive effects occur due to the long time it takes for the adaptive threshold to build up in the PPC, and the short timescale with which the WM network integrates input from the PPC. The longer-term repulsive effects occur when the activity bump in the PPC persists in one location and causes adaptation to slowly build up, effectively increasing the activation threshold. The raised threshold takes equally long to return to baseline, preventing activity bumps to form in that location and thereby creating repulsion toward all the other locations in the network. 
Crucially, however, the amplitude of such effects depends on the inter-trial interval; in particular, for shorter inter-trial intervals, the repulsive effects are less observable.

## 1.6 The timescale of adaptation in the PPC network can control perceptual biases similar to those observed in dyslexia and autism

In a recent study [16], a similar PWM task with auditory stimuli was studied in human neurotypical (NT), autism spectrum disorder (ASD) and dyslexic (DYS) subjects. Based on an analysis using a generalized linear model (GLM), a double dissociation between the subject groups was suggested: ASD subjects exhibit a stronger bias towards long-term statistics, compared to NT subjects, while DYS subjects exhibit a stronger bias towards short-term statistics.

We investigated whether our model can show a similar phenomenology and, if so, which parameters control the timescale of the biases in behavior. We identified the adaptation timescale in the PPC as the parameter that affects the extent of the short-term bias, consistent with previous literature [47, 48]. Calculating the mean bias towards the previous trial's stimulus pair (Fig. 8 A), we find that a shorter-than-NT adaptation timescale yields a larger bias towards the previous trial's stimulus. Indeed, a shorter timescale for neuronal adaptation implies a faster process for the extinction of the bump in the PPC – and the formation of a new bump that remains stable for a few trials – producing "jumpier" dynamics that lead to a larger number of one-trial-back errors. In contrast, increasing this timescale with respect to NT gives rise to a bump that is stable for longer, ultimately yielding a smaller short-term bias. This can be seen in the detailed breakdown of the network's behavior on the current trial, conditioned on the stimuli presented in the previous trial (Fig. 8 B; see also Sect. 1.3 for a more detailed explanation of the dynamics). We applied a GLM analysis as in Ref. [16] to the network behavior, with stimuli from four trials back and the mean stimulus as regressors (see Sect. 4.4). This analysis shows that a reduction of the PPC adaptation timescale with respect to NT produces behavioral changes qualitatively compatible with data from DYS subjects; conversely, an increase of this timescale yields results consistent with ASD data (Fig. 8 C).

This GLM analysis suggests that dissociable short- and long-term biases may be present in the network behavior. Having access to the full dynamics of the network, we sought to determine how these translate into such dissociable short- and long-term biases. Given that all the behavior arises from the location of the bump on the attractor, we quantified the fraction of trials in which the bump in the WM network, before the onset of the second stimulus, was present in the vicinity of any of the previous trial's stimuli (Fig. S6 B, right panel, and C), as well as in the vicinity of the mean over the sensory history (Fig. S6 B, left panel, and C). While the bump location correlated well with the GLM weights corresponding to the previous trial's stimulus regressor (compare the right panels of Fig. S6 A and B), surprisingly, it did not correlate with the GLM weights corresponding to the mean stimulus regressor (compare the left panels of Fig. S6 A and B). In fact, we found that the bump was in a location given by the stimuli of the past two trials, as well as by the mean over the stimulus history, in a smaller fraction of trials as the adaptation timescale parameter was made larger (Fig. S6 C).

Given that the weights, four trials into the past, were still non-zero, we extended the GLM regression by including a larger number of past stimuli as regressors. We found that doing so greatly reduced the weight of the mean stimulus regressor (Fig. 8 C, D and E; see Sect. 4.4 for more details). Therefore, we propose an alternative interpretation of the GLM results given in Ref. [16]. In our model, the increased (reduced) weight for the long-term mean in the ASD (DYS) subjects can be explained as an effect of a larger (smaller) window in time of short-term biases, without invoking a double dissociation mechanism (Fig. 8 D and E). In Sect. 4.4, we provide a mathematical argument for this, which is empirically shown by including a large number of individual stimuli from previous trials in the regression analysis.

# 2 Discussion

## 2.1 Contraction bias in the delayed comparison task: simply a statistical effect or more?

Contraction bias is an effect emerging in working memory tasks, where, in the averaged behavior of a subject, the magnitude of the item held in memory appears to be larger than it actually is when it is "small" and, vice versa, appears to be smaller when it is "large" [49, 3, 4, 50, 19, 51, 52]. Recently, Akrami et al [7] found that contraction bias as well as short-term history-dependent effects occur in an auditory delayed comparison task in rats and humans: the comparison performance in a given trial depends on the stimuli shown in preceding trials (up to three trials back) [7], similar to previous findings in human 2AFC paradigms [5]. These findings raise the question: does contraction bias occur independently of short-term history effects, or does it emerge as a result of the latter?

Akrami et al [7] have also found the PPC to be a critical node for the generation of such effects, as its optogenetic inactivation (specifically during the delay interval) greatly attenuated both. WM was found to remain intact, suggesting that its content was perhaps read out in another region. Electrophysiological recordings as well as optogenetic inactivation results in the same study suggest that while sensory history information is provided by the PPC, its integration with the WM content must happen somewhere downstream of the PPC. Different brain areas fit this profile; for instance, there are known projections from the PPC to mPFC in rats [53], where neural correlates of parametric working memory have been found [37]. Building on these findings, we suggest a minimal two-module model aimed at better understanding the interaction between contraction bias and short-term history effects. The two modules capture properties of the PPC (in providing sensory history signals) and of a downstream network holding working memory content. Our WM and PPC networks, despite having different timescales, are both shown to encode information about the marginal distribution of the stimuli (Fig. 4 B). Although both have activity distributions similar to that of the external stimuli, they have different memory properties, due to the different timescales with which they process incoming stimuli. The putative WM network, from which the information to solve the task is read out, receives additional input from the PPC network. The PPC is modelled as integrating inputs more slowly than the WM network, and is also endowed with firing-rate adaptation, the dynamics of which yield short-term history biases and, consequently, contraction bias.

It must be noted, however, that short-term history effects need not be invoked in order to recover contraction bias: as long as errors are made following random samples from a distribution in the same range as that of the stimuli, contraction bias should be observed [54]. Indeed, when we manipulated the parameters of the PPC network such that short-term history effects were eliminated (by removing the firing-rate adaptation), contraction bias persisted. As a result, our model suggests that contraction bias may not simply be given by a regression towards the mean of the stimuli during the inter-stimulus interval [55, 56], but may be brought about by a richer dynamics occurring at the level of individual trials [2], more in line with the idea of random sampling [57]. The model makes predictions as to how the pattern of errors may change when the distribution of stimuli is manipulated, either at the level of the presented stimuli or through the network dynamics. When we tested these predictions experimentally, by manipulating the skewness of the stimulus distribution such that the median and the mean were dissociated (Fig. 5 A), the results from our human psychophysics experiments were in agreement with the model predictions. In further support of this, in a recent tactile categorization study [43], where rats were trained to categorize tactile stimuli according to a boundary set by the experimenter, the authors have shown that rats set their decision boundary according to the statistical structure of the stimulus set to which they are exposed. More studies are needed to fully verify the extent to which the statistical structure of the stimuli affects performance. Finally, we note that in our model, the stimulus distribution is not explicitly learned (but see [58]): instead, the PPC dynamics follows the input, and its marginal distribution of activity is similar to that of the external input. Support for this idea comes from Ref. [43], where the authors used different stimulus ranges across different sessions and noted that rats initiated each session without any residual influence of the previous session's range/boundary on the current session, ruling out long-term learning of the input structure.

## 2.2 Attractor mechanism riding on multiple timescales

Our model assumes that the stimulus is held in working memory through the persistent activity of neurons, building on the discovery of persistent selective activity in a number of cortical areas, including the prefrontal cortex (PFC), during the delay interval [59–65]. To explain this finding, we have used the attractor framework, in which recurrently connected neurons mutually excite one another to form reverberation of activity within populations of neurons coding for a given stimulus [66–68]. However, subsequent work has shown that persistent activity related to the stimulus is not always present during the delay period and that the activity of neurons displays far more heterogeneity than previously thought [69]. It has been proposed that short-term synaptic facilitation may dynamically operate to bring a WM network across a phase transition from a silent to a persistently active state [70, 71]. Such mechanisms may further contribute to short-term biases [72], an alternative possibility that we have not specifically considered in this model.

An important model feature that is crucial in giving rise to all of its behavioral effects is its operation over multiple timescales. Such timescales have been found to govern the processing of information in different areas of the cortex [30–32], and may reflect the heterogeneity of connections across different cortical areas [73].

## 2.3 Relation to other models

In many early studies, groups of neurons whose activity correlates monotonically with the stimulus feature, known as "plus" and "minus" neurons, have been found in the PFC [63, 74]. Such neurons have been used as the starting point in the construction of many models [75, 76, 69, 77]. It is important to note, however, that depending on the area, the fraction of such neurons can be small [37], and that the majority of neurons exhibit firing profiles that vary largely during the delay period [78]. Such heterogeneity of the PFC neurons' temporal firing profiles has prompted the successful construction of models that do not include the basic assumption of plus and minus neurons, but these have largely focused on the plausibility of the observed neuronal dynamics, with little connection to behavior [69].

A separate line of research has addressed behavior by focusing on normative models to account for contraction bias [19, 5, 57, 79]. The abstract mathematical model that we present (Fig. 4) can be made compatible with a Bayesian framework [19] in the limit of a very broad likelihood for the first stimulus and a very narrow one for the second stimulus, and where the prior for the first stimulus is replaced by the distribution of *ŝ*, following the model in Fig. 4 A (see Sect. 4.2 for details). However, it is important to note that our model is conceptually different: subjects do not have access to the full prior distribution, but only to *samples* of the prior. We show that full knowledge of the underlying sensory distribution is not needed to produce contraction bias effects; instead, a point estimate of past events that is updated from trial to trial suffices to yield similar results. This suggests a possible mechanism for the brain to approximate Bayesian inference, and it remains open whether similar mechanisms (based on the interaction of networks with different integration timescales) can approximate other Bayesian computations. It is also important to note the differences between the predictions of the two models. As shown in Fig. 4, depending on the specific sensory distributions, the two models can make qualitatively different, testable predictions. Data from our human psychophysical experiments, utilizing auditory parametric working memory, show better agreement with our model's predictions than with those of the Bayesian model.

Moreover, an ideal Bayesian observer model alone cannot capture the temporal pattern of short-term attraction and long-term repulsion observed in some tasks, and the model has had to be supplemented with efficient encoding and Bayesian decoding of information in order to capture both effects [18]. In our model, both effects emerge naturally as a result of neuronal adaptation, but their amplitudes crucially depend on the time parameters of the task, perhaps explaining the sometimes contradictory effects reported across different tasks.

Finally, while such attractive and repulsive effects in performance may be suboptimal in the context of a task designed in a laboratory setting, this may not be the case in more natural environments. For example, it has been suggested that integrating information over time serves to preserve perceptual continuity in the presence of noisy and discontinuous inputs [6]. This continuity of perception may be necessary to solve more complex tasks or make decisions, particularly in a non-stationary environment, or in a noisy environment.

# 3 Methods

## 3.1 The model

Our model is composed of two populations of *N* neurons, representing the PPC network and the putative WM network. We consider that each population is organized as a continuous line attractor, with recurrent connectivity described by an interaction matrix *J*_{ij}, whose entries represent the strength of the interaction between neurons *i* and *j*. The activation function of the neurons is a logistic function, i.e. the output *r*_{i} of neuron *i*, given the input *h*_{i}, is

$$ r_i = \frac{1}{1 + e^{-\beta h_i}}, \tag{1} $$

where *β* is the neuronal gain. The variables *r*_{i} take continuous values between 0 and 1, and represent the firing rates of the neurons. The input *h*_{i} to a neuron is given by

$$ \tau \frac{dh_i}{dt} = -h_i + \sum_{j} J_{ij}\, r_j + I^{ext}_i, \tag{2} $$

where *τ* is the timescale for the integration of inputs. In the first term on the right hand side, *J*_{ij}*r*_{j} represents the input to neuron *i* from neuron *j*, and *I*^{ext}_{i} corresponds to the external inputs. The recurrent connections are given by

$$ J_{ij} = \frac{1}{N}\left[ J_e\, K(x_i - x_j) - J_0 \right], \tag{3} $$

with

$$ K(x) = \frac{1}{2 d_0}\, e^{-|x|/d_0}. \tag{4} $$

The interaction kernel, *K*, is assumed to be the result of a time-averaged Hebbian plasticity rule: neurons with nearby firing fields will fire concurrently and strengthen their connections, while firing fields far apart will produce weak interactions [80]. Neuron *i* is associated with the firing field *x*_{i} = *i/N*. The form of *K* expresses a connectivity between neurons *i* and *j* that decreases exponentially with the distance between their respective firing fields, proportional to *|i − j|*; the exponential rate of decrease is set by the constant *d*_{0}, i.e. the typical range of interaction. This constant also sets the amplitude of the kernel, in such a way that its integral over *x* equals one. The strength of the excitatory weights is set by *J*_{e}; the normalization of *K*, together with the sigmoid activation function saturating to 1, implies that *J*_{e} is also the maximum possible input received by any neuron due to the recurrent connections. The constant *J*_{0}, instead, contributes a linear global inhibition term. Its value needs to be chosen depending on *J*_{e} and *d*_{0}, so that the balance between excitatory and inhibitory inputs ensures that the activity remains localized along the attractor, i.e. it does not either vanish or equal 1 everywhere; together, these three constants set the width of the bump of activity.

The two networks in our model are coupled through excitatory connections from the PPC to the WM network. Therefore, we introduce two equations analogous to Eq. (2), one for each network. The coupling between the two will enter as a firing-rate dependent input, in addition to *I*^{ext}. The dynamics of the input to a neuron in the WM network writes

$$ \tau^{W} \frac{dh^{W}_i}{dt} = -h^{W}_i + \sum_j J^{W}_{ij}\, r^{W}_j + \sum_j J^{WP}_{ij}\, r^{P}_j + I^{ext}_i, \tag{5} $$

where *τ*^{W} is the timescale for the integration of inputs in the WM network. The first term on the r.h.s. corresponds to inputs from recurrent connections within the WM network. The second term corresponds to inputs from the PPC network. Finally, the last term corresponds to the external inputs used to give stimuli to the network. Similarly, for the PPC network we have

$$ \tau^{P} \frac{dh^{P}_i}{dt} = -h^{P}_i - \theta^{P}_i + \sum_j J^{P}_{ij}\, r^{P}_j + I^{ext}_i, \tag{6} $$

where *τ*^{P} is the timescale for the integration of inputs in the PPC network; importantly, we set this to be longer than the analogous quantity *τ*^{W} for the WM network (see Tab. 1). The first and third terms on the r.h.s. are analogous to the corresponding ones for the WM network: inputs from within the network and from the stimuli. The second term, instead, corresponds to adaptive thresholds with a dynamics specified by

$$ \tau^{P}_{\theta} \frac{d\theta^{P}_i}{dt} = -\theta^{P}_i + D^{P} r^{P}_i, \tag{7} $$

modelling neuronal adaptation, where *τ*^{P}_{θ} and *D*^{P} set its timescale and its amplitude. We are interested in the condition where the timescale of the evolution of the input current is much smaller relative to that of the adaptation (*τ*^{P} ≪ *τ*^{P}_{θ}). For a constant *τ*^{P}_{θ}, we find that depending on the value of *D*^{P}, the bump of activity shows different behaviors. For low values of *D*^{P}, the bump remains relatively stable (Fig. S1 C (1)). Upon increasing *D*^{P}, the bump gradually starts to drift (Fig. S1 C (2-3)). Upon increasing *D*^{P} even further, a phase transition leads to an abrupt dissipation of the bump (Fig. S1 C (4)).

Note that, while the transition from bump stability to drift occurs gradually, the transition from drift to dissipation is abrupt. This abruptness may imply that only one of the two behaviors is possible in our model of the PPC (Sect. 1.3). In fact, our network model of the PPC operates in the "drift" regime (*D*^{P} = 0.3). However, we also observe dissipation of the bump. This occurs due to inputs from incoming external stimuli, which affect the bump via the global inhibition in the model (Fig. S1 A). Therefore, external stimuli can allow the network to temporarily cross the sharp drift/dissipation boundary shown in Fig. S1 B. As a result, the combined effects of adaptation, external inputs and global inhibition result in the drift/jump dynamics described in the main text.

Finally, both networks have a linear geometry with free boundary conditions, i.e. no condition is imposed on the activity profile at neuron 1 or *N*.

## 3.2 Simulation

We performed all the simulations using custom Python code. Differential equations were numerically integrated with a time step of *dt* = 0.001 using the forward Euler method. The activity of neurons in both circuits was initialized to *r* = 0. Each stimulus was presented for 400 ms. A stimulus is introduced as a "box" of unit amplitude and of width 2*δs* around *s* in stimulus space: in a network with *N* neurons, the stimulus is given by setting *I*^{ext}_{i} = 1 in Eq. (5) for neurons with index *i* within (*s* ± *δs*)*N*, and *I*^{ext}_{i} = 0 for all the others. Only the activity in the WM network was used to assess performance. To do so, the activity vector was recorded at two time-points: 200 ms before and after the onset of the second stimulus *s*_{2}. Then, the neurons with maximal activity were identified at both time-points, and their positions were compared to make a decision. This procedure was carried out for 50 different simulations of 1000 consecutive trials each, with a fixed inter-trial interval of 5 seconds separating consecutive trials. The inter-stimulus intervals were set according to two different experimental designs, as explained below.
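As a minimal sketch of this procedure, the code below simulates a single line-attractor network with forward-Euler integration, a unit-amplitude "box" stimulus, and the argmax readout described above. All parameter values are illustrative assumptions rather than the paper's fitted values, and a small activation threshold `h0` (an extra assumption of this sketch, not part of the equations in Sect. 3.1) keeps the network silent in the absence of input.

```python
import numpy as np

# Illustrative sketch of one line-attractor network (parameters assumed).
N = 100
beta, h0 = 10.0, 0.5          # neuronal gain and (assumed) activation threshold
d0 = 0.05                     # kernel range, in units of the [0, 1] attractor
Je, J0 = 2.0, 2.0             # excitatory strength and global inhibition
tau, dt = 0.1, 0.001          # integration timescale (s) and Euler step (s)

x = np.arange(N) / N          # firing fields x_i = i / N
K = np.exp(-np.abs(x[:, None] - x[None, :]) / d0) / (2 * d0)
J = (Je * K - J0) / N         # recurrent weights with linear global inhibition

def rate(h):
    return 1.0 / (1.0 + np.exp(-beta * (h - h0)))

def run(h, I_ext, steps):
    # Forward Euler on tau dh/dt = -h + J r + I_ext
    for _ in range(steps):
        h = h + (dt / tau) * (-h + J @ rate(h) + I_ext)
    return h

s, ds = 0.3, 0.05             # stimulus location and half-width
I_box = ((x > s - ds) & (x < s + ds)).astype(float)  # unit-amplitude "box"

h = np.zeros(N)
h = run(h, I_box, 400)        # 400 ms stimulus presentation
h = run(h, 0.0, 1000)         # 1 s delay with no input: the bump persists
bump = np.argmax(rate(h)) / N # readout: position of the most active neuron
print(bump)                   # stays near the stimulus location s = 0.3
```

Decisions in the full model are then made by comparing two such readouts, taken before and during the second stimulus.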

### 3.2.1 Interleaved design

As in the study in Ref. [7], an inter-stimulus interval of either 2, 6 or 10 seconds was randomly selected. The delay interval is defined as the time elapsed from the end of the first stimulus to the beginning of the second stimulus. This procedure was used to produce Figs. 1, 2, 3, 6, S2, S3.

### 3.2.2 Block design

In order to provide a comparison to the interleaved design, but also to simulate the design in Ref. [16], we also ran simulations with a block design, where the inter-stimulus intervals were kept fixed throughout the trials. Other than this, the procedure and parameters used were exactly the same as in the interleaved case. This procedure was used to produce Figs. 8 and S6.

## 3.3 Human auditory experiment - delayed comparison task

Subjects received, in each trial, a pair of sounds played from ear-surrounding headphones. The subject self-initiated each trial by pressing the space bar on the keyboard. The first sound was then presented together with a blue square on the left side of a computer monitor in front of the subject. This was followed by a delay period, indicated by 'WAIT!' on the screen, after which the second sound was presented together with a red square on the right side of the screen. At the end of the second stimulus, subjects had 2 seconds to decide which sound was louder, and indicated their choice by pressing the 's' key if they thought that the first sound was louder, or the 'l' key if they thought that the second sound was louder. Written feedback about the correctness of their response was provided on the screen for each individual trial. Every ten trials, participants received feedback on their running mean performance calculated up to that trial. Participants then had to press the space bar to go to the next trial (the experiment was hence self-paced).

The two auditory stimuli, *s*_{1} and *s*_{2}, separated by a variable delay (of 2, 4 or 6 seconds), were each played for 400 ms, with short delay periods of 250 ms inserted before *s*_{1} and after *s*_{2}. The stimuli consisted of broadband noise (2,000–20,000 Hz), generated as a series of sound pressure level (SPL) values sampled from a zero-mean normal distribution. The overall mean intensity of the sounds varied from 60 to 92 dB. Participants had to judge which of the two stimuli, *s*_{1} and *s*_{2}, was louder (i.e. had the greater SPL standard deviation).

We recruited 10 subjects for the negatively skewed distribution and 24 subjects for the bimodal distribution. The study was approved by the University College London (UCL) Research Ethics Committee [16159/001] (London, UK). Each participant performed approximately 400 trials for a given distribution. Several participants took part in both distributions.

# 4 Supplementary Material

## 4.1 Computing bump location

In order to check whether the bump is in a target location (Figs. 3 B, S2 B, and S3 D), we check whether the position of the neuron with the maximal firing rate is within a distance of ±5% of the length of the whole line attractor from the target location (Figs. 3 A, S2 A and S3 C). In these figures, we compare the probability that, in a given trial, the activity of the WM network is localized around one of the previous stimuli (estimated from the simulation of the dynamics, histograms) with the probability of this happening by chance (horizontal dashed line). Here we detail the calculation of the chance probability. In general, if we have two discrete independent random variables, *X̂* and *Ŷ*, with probability distributions *p*_{X} and *p*_{Y}, the probability of them having the same value is

$$ P(\hat{X} = \hat{Y}) = \sum_{i,j} p_X(x_i)\, p_Y(y_j)\, \delta_{x_i, y_j}, $$

where *i, j* are the indices for the different values of the two random variables and *δ*_{x_i, y_j} equals 1 where *x*_{i} = *y*_{j} and 0 otherwise. If the two random variables are identically distributed, the above expression writes

$$ P(\hat{X} = \hat{Y}) = \sum_{i} p_X(x_i)^2. $$

In our case, the two random variables are the "bump location at the current trial" and the "target bump location" (the stimuli of the previous trials, or the mean stimulus *⟨s⟩*). With the exception of the mean stimulus *⟨s⟩*, all the other variables are identically distributed, with probability *p*_{m} (that is, the marginal distribution over *s*_{1} or *s*_{2}). We note that the bump location in the WM network follows a very similar distribution to *p*_{m} (Fig. 4 B). We then compute the chance probability with the above relationship, where *p*_{X} ≡ *p*_{m}. For the mean stimulus, instead, we have a distribution which is simply equal to 1 for *s* = 0.5 and 0 elsewhere; therefore, the chance probability for the bump location to be at the mean stimulus is *p*_{m}(0.5).

The excess probability (with respect to chance) for the bump location to equal one of the previous stimuli gives a measure of the correlation between the two; in other terms, of the amount of information retained by the network about previous stimuli.
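As a concrete check, the chance level for a discrete marginal can be computed directly from this formula; the toy distribution below is an illustrative assumption, not the stimulus set used in the task.

```python
# Chance probability that two i.i.d. discrete variables coincide:
# P(X = Y) = sum_i p(x_i)^2. Toy marginal p_m over four stimulus values
# (illustrative numbers only).
p_m = {0.2: 0.1, 0.4: 0.4, 0.6: 0.4, 0.8: 0.1}
chance = sum(p * p for p in p_m.values())
print(chance)  # 0.1^2 + 0.4^2 + 0.4^2 + 0.1^2 = 0.34
```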

## 4.2 The probability to make errors is proportional to the cumulative distribution of the stimuli, giving rise to contraction bias

In order to illustrate the statistical origin of contraction bias consistent with our network model, we consider a simplified mathematical model of its performance (Fig. 4 A). By definition of the delayed comparison task, the optimal decision maker produces a label *y* equal to 1 if *s*^{t}_{1} < *s*^{t}_{2}, and 0 if *s*^{t}_{1} > *s*^{t}_{2}; the impossible cases *s*^{t}_{1} = *s*^{t}_{2} are excluded from the set of stimuli, but would produce a label which is either 0 or 1 with 50% probability. That is

$$ y = \begin{cases} 1 & \text{if } s^t_1 < s^t_2 \\ 0 & \text{if } s^t_1 > s^t_2. \end{cases} $$

In this simplified scheme, at each trial *t*, the two stimuli *s*^{t}_{1} and *s*^{t}_{2} are perfectly perceived with a finite probability 1 − *ϵ*, with *ϵ* < 1. Under the assumption that the decision maker behaves optimally based on the perceived stimuli, a correct perception would necessarily lead to the correct label. However, with probability *ϵ*, the first stimulus is randomly selected from a buffer of stimuli, i.e. it is replaced by a random variable *ŝ*_{1} that has a probability distribution *p̃*(*ŝ*_{1}).

The probability distribution *p̃* is the statistics of previously shown stimuli. The information about the previous stimuli is given by the activity of the "slower" PPC network. As shown above, after the presentation of the first stimulus of the trial, the bump of activity is seen to jump to the position encoding one of the previously presented stimuli, with decreasing probability the further back in the sensory history (Fig. 3 C). Therefore, in calculating the performance in the task, we can take *p̃* to be the marginal distribution of the stimulus *s*_{1} or *s*_{2} across trials, as in the histogram (Fig. 4 B).

The probability of a misclassification is then given by the probability that, given the pair (*s*^{t}_{1}, *s*^{t}_{2}) at trial *t*,

1) the first stimulus is replaced by a random value, which happens with probability *ϵ*, and

2) the replaced value *ŝ*_{1} is larger than *s*^{t}_{2} when *s*^{t}_{1} is smaller, and vice versa (Fig. 4 C).

In summary, the probability of an error at trial *t* is given by

$$ p_{err}(s^t_1, s^t_2) = \begin{cases} \epsilon \left[ 1 - F(s^t_2) \right] & \text{if } s^t_1 < s^t_2 \\ \epsilon\, F(s^t_2) & \text{if } s^t_1 > s^t_2, \end{cases} $$

where *F* is the cumulative distribution function of *p̃*.
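A short Monte-Carlo simulation of this scheme (with an assumed uniform marginal over seven stimulus values; all numbers are illustrative) reproduces the predicted error rates and shows how they track the cumulative distribution beyond *s*_{2}:

```python
import random
random.seed(0)

# Monte-Carlo check of the simplified error model: with probability eps, the
# first stimulus is replaced by a sample s1_hat from the marginal p_m before
# the comparison is made. Values of s2 are chosen off the stimulus grid to
# avoid ties.
values = [0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # uniform marginal p_m
eps = 0.3

def error_rate(s1, s2, n=100_000):
    errors = 0
    for _ in range(n):
        s1_hat = random.choice(values) if random.random() < eps else s1
        errors += (s1_hat < s2) != (s1 < s2)   # mislabelled comparison
    return errors / n

# Prediction: p_err = eps * [1 - F(s2)] if s1 < s2, and eps * F(s2) otherwise,
# where F is the cumulative distribution of p_m.
print(error_rate(0.20, 0.35))   # ≈ 0.214  (= eps * 5/7)
print(error_rate(0.60, 0.75))   # ≈ 0.043  (= eps * 1/7)
```

Pairs with the same ordering and comparable separation thus yield very different error rates depending on where they sit relative to the bulk of the distribution, which is the signature of contraction bias.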

## 4.3 Bayesian description of contraction bias

We reproduce here the theoretical result from [38], which provides a normative model for contraction bias in the Bayesian inference framework, and apply it to the different stimulus distributions described in Sect. 1.5.1.

A stimulus with value *s* is encoded by the agent through a noisy internal representation *r*, drawn from a distribution *ℓ*(*r|s*). Before the presentation of the stimulus, the agent has an expectation of its possible values, described by the probability *π*(*s*). Assuming that it has access to the internal representation *r*, as well as to the probability distributions *ℓ* and *π*, the agent can infer the perceived stimulus *ŝ* through Bayes' rule:

$$ p(\hat{s} \mid r) = \frac{\ell(r \mid \hat{s})\, \pi(\hat{s})}{p(r)}, \tag{10} $$

where *p*(*r*) = ∫ *ds′ ℓ*(*r|s′*) *π*(*s′*). In this Bayesian setting, the probability distributions for the noisy representation and for the expected measurement are interpreted as the *likelihood* and the *prior*, respectively.

In the delayed comparison task, at the time of the decision, the two stimuli *s*_{1} and *s*_{2} are assumed to be encoded independently, although with different uncertainties, due to the different delays leading to the time of decision: *ℓ*(*r*_{1}, *r*_{2}|*s*_{1}, *s*_{2}) = *ℓ*_{1}(*r*_{1}|*s*_{1}) *ℓ*_{2}(*r*_{2}|*s*_{2}), with var[*ℓ*_{1}] > var[*ℓ*_{2}]. Similarly, the expected values of the stimuli are assumed to be independent but also identically distributed: *π*(*s*_{1}, *s*_{2}) = *π*(*s*_{1}) *π*(*s*_{2}).

The optimal Bayesian decision maker uses the inference of the stimuli through Eq. (10) to produce an estimate of the probability that *s*_{1} *< s*_{2}, given the internal representations,

$$ p(s_1 < s_2 \mid r_1, r_2) = \int ds_1\, ds_2\, \Theta(s_2 - s_1)\, p(s_1 \mid r_1)\, p(s_2 \mid r_2), \tag{11} $$

where Θ is the Heaviside function, and yields a label *ŷ* = 1 (truth value of "*s*_{1} *< s*_{2}") when such probability is higher than 1/2, and *ŷ* = 0 otherwise:

$$ \hat{y}(r_1, r_2) = \Theta\!\left[ p(s_1 < s_2 \mid r_1, r_2) - \tfrac{1}{2} \right]. \tag{12} $$

Therefore, the probability that the Bayesian decision maker yields the response "*s*_{1} *< s*_{2}" given the *true* values of the stimuli *s*_{1} and *s*_{2} is the average of the label *ŷ* over the possible values of their representations, i.e. over the likelihood:

$$ p(\text{“}s_1 < s_2\text{”} \mid s_1, s_2) = \int dr_1\, dr_2\, \ell_1(r_1 \mid s_1)\, \ell_2(r_2 \mid s_2)\, \hat{y}(r_1, r_2). \tag{13} $$

### 4.3.1 Application to our study

In modelling our data, we assume that the likelihood functions *ℓ*_{1}(·|*s*_{1}) and *ℓ*_{2}(·|*s*_{2}) are Gaussian with mean equal to the stimulus, but with different standard deviations, *σ*_{1} and *σ*_{2}, respectively, as in [38]. We restrict to the particular case where *σ*_{2} = 0, i.e. there is no uncertainty in the representation of the second stimulus, since there is negligible delay between its presentation and the decision. We instead assume a finite standard deviation *σ*_{1} = *σ*, which we use as the only free parameter of this model to produce Figs.S4 A-D, panels 2 and 4.

The prior *π* is chosen to be the marginal distribution of the first stimulus – identical to the marginal of the second stimulus, because of symmetry.

When *σ*_{2} = 0, *ℓ*_{2}(*r*|*s*) = *δ*(*r* − *s*) (Dirac delta), and the predicted response probability, Eq. (13), reduces to

$$ p(\text{“}s_1 < s_2\text{”} \mid s_1, s_2) = \int dr_1\, \ell_1(r_1 \mid s_1)\, \hat{y}(r_1, s_2). $$
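This reduced expression can be evaluated numerically. The sketch below assumes a discrete uniform prior over four values and a Gaussian likelihood for the first stimulus, and averages the Bayesian label over samples of *r*_{1}; these choices are illustrative, not the distributions used in our fits.

```python
import math, random

# Numeric sketch of the reduced response probability (sigma_2 = 0): average
# the Bayesian label over r1 ~ N(s1, sigma). Prior and numbers are assumed.
prior = {0.2: 0.25, 0.4: 0.25, 0.6: 0.25, 0.8: 0.25}
sigma = 0.15

def label(r1, s2):
    """y_hat = 1 iff the posterior mass of s1 below s2 exceeds 1/2."""
    post = {s: math.exp(-0.5 * ((r1 - s) / sigma) ** 2) * p
            for s, p in prior.items()}
    p_less = sum(v for s, v in post.items() if s < s2) / sum(post.values())
    return 1.0 if p_less > 0.5 else 0.0

def p_response(s1, s2, n=50_000):
    """Probability of responding "s1 < s2" given the true stimulus values."""
    random.seed(1)
    return sum(label(random.gauss(s1, sigma), s2) for _ in range(n)) / n

# Contraction towards the prior: for two equal stimuli at the high end the
# model often responds "s1 < s2", while at the low end it never does.
print(p_response(0.8, 0.8), p_response(0.2, 0.2))
```

The asymmetry between the two printed values illustrates how the prior pulls the noisy estimate of the first stimulus towards the center of the range.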

## 4.4 Generalized Linear Model (GLM)

### GLM as in Lieder et al

Similarly to Ref. [16], we fitted a multivariate logistic regression (an instance of a generalized linear model, GLM) to the output of the network in the delayed discrimination task, with recent stimulus values as covariates:

$$ p^t(\text{“}s_1 > s_2\text{”}) = \sigma\!\left( \alpha\,(s^t_1 - s^t_2) + \sum_{\tau=1}^{h} w_\tau\,\big(\bar{s}^{\,t-\tau} - s^t_1\big) + w_{mean}\,\big(\langle s \rangle - s^t_1\big) \right), \tag{14} $$

where *σ* is the sigmoidal function *σ*(*z*) = 1/(1 + *e*^{−z}), *s̄*^{t−τ} is the mean of the stimuli presented at trial *t* − *τ*, *h* is the number of "history" terms in the regression, and *⟨s⟩* is the mean of the stimuli within and across trials up to the current one. As in Ref. [16], we choose *h* = 4, i.e. we include in the short-term history the four trials prior to the current one. The first term in Eq. (14), with weight *α*, controls the slope of the psychometric curve. The remaining terms, combined linearly with weights *w*, contribute to biases expressing the long- and short-term memory. In Ref. [16], it is shown that subjects on the autism spectrum (ASD) conserve the higher long-term weight, *w*_{mean}, while losing the short-term weights expressed by neurotypical (NT) subjects. In contrast, dyslexic (DYS) subjects conserve a higher bias from the recent stimuli, *w*_{1}, while losing the higher long-term weight also expressed by neurotypical subjects.

In order to gain insight into this regression model in terms of our network, we also performed a linear regression of the bump location just before the onset of the second stimulus, denoted *ŝ*^{t}_{1}, against the same variables:

$$ \hat{s}^t_1 = \alpha\, s^t_1 + \sum_{\tau=1}^{h} w_\tau\,\big(\bar{s}^{\,t-\tau} - s^t_1\big) + w_{mean}\,\big(\langle s \rangle - s^t_1\big). \tag{15} $$

In this case, we see that the weights *w* in the linear regression for *ŝ*^{t}_{1} have the same qualitative behavior as the weights for the bias terms in the GLM regression for the performance (not shown). This is expected, since the decision-making rule in the network – based on the bump locations just before and during the second stimulus, *ŝ*_{1} and *ŝ*_{2}, respectively – is deterministic. Therefore, the bias terms in the GLM performed in Ref. [16], Eq. (14), correspond to the displacement of the bump location with respect to the actual stimulus *s*^{t}_{1}, modelled to be linearly dependent on the displacement of previous stimuli from *s*^{t}_{1}.
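As an illustration of this kind of regression, the sketch below simulates choices from a logistic model with a current-trial evidence term and *h* = 4 history regressors with known weights, and recovers those weights by maximum likelihood; the exact covariate structure and all numerical values here are our assumptions for the illustration, not those of Ref. [16].

```python
import numpy as np
rng = np.random.default_rng(0)

# Simulate choices from a history-GLM with known weights, then recover the
# weights by gradient ascent on the log-likelihood. All numbers illustrative.
T, h = 50_000, 4
s = rng.uniform(0.2, 0.8, size=(T, 2))        # (s1, s2) on each trial
sbar = s.mean(axis=1)                         # per-trial stimulus mean
X = np.zeros((T, 1 + h))
X[:, 0] = s[:, 0] - s[:, 1]                   # current-trial evidence s1 - s2
for tau in range(1, h + 1):
    X[tau:, tau] = sbar[:-tau] - 0.5          # centred history regressors
w_true = np.array([4.0, 1.0, 0.5, 0.25, 0.1]) # slope alpha and w_1..w_4
y = rng.random(T) < 1 / (1 + np.exp(-X @ w_true))   # simulated choices

w = np.zeros(1 + h)
for _ in range(1500):                         # gradient ascent on log-likelihood
    w += 8.0 * X.T @ (y - 1 / (1 + np.exp(-X @ w))) / T
print(np.round(w, 2))                         # close to w_true
```

With enough trials, the fitted weights recover the generative ones, showing that a decaying history kernel is identifiable from binary choices alone.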

### Regression model with infinite history

In the regression formulas in Eqs. (14) and (15), it is possible to give an interpretation of the parameter *w*_{mean}, the weight of the covariate corresponding to the mean of the past stimuli. Let us consider two regression models: one in which, in addition to the regressor corresponding to the mean stimulus, regressors corresponding to the stimulus history are included up to trial *h*, and another in which *h* = ∞, i.e. infinitely many past stimuli are included as regressors. In the latter case, Eq. (15) rewrites as

$$\hat{s}_1^t = s_1^t + \sum_{\tau=1}^{\infty} w_\tau \left(\bar{s}^{\,t-\tau} - s_1^t\right). \tag{16}$$

If we assume that the weights obtained from the regression have roughly an exponential dependence on time (Fig. 8 C and D), we can write

$$w_\tau \approx w_{h+1}\,\gamma^{\,\tau-(h+1)} \quad \text{for } \tau \geq h+1. \tag{17}$$

By equating Eqs. (15) and (16), we would find that

$$w_{\mathrm{mean}} \left(\langle s \rangle - s_1^t\right) = \sum_{\tau=h+1}^{\infty} w_\tau \left(\bar{s}^{\,t-\tau} - s_1^t\right) = \frac{w_{h+1}}{1-\gamma}\left(\langle s \rangle_\gamma - s_1^t\right), \tag{18}$$

where

$$\langle s \rangle_\gamma = \sum_{j=0}^{\infty} (1-\gamma)\,\gamma^{\,j}\,\bar{s}^{\,t-(h+1)-j}, \tag{19}$$

that is, an average over the geometric distribution *g*_{j} = (1 − *γ*)*γ*^{j}, from time *t* − (*h* + 1) backward. Since for *γ* large enough we have ⟨*s*⟩_{γ} ≈ ⟨*s*⟩, we can identify

$$w_{\mathrm{mean}} = \frac{w_{h+1}}{1-\gamma}. \tag{20}$$

This derivation indicates that the magnitude *w*_{mean} in Eq. (15), which stands in for the infinite-history tail, is a function of the discount factor *γ* as well as of the weight of the first trial left out of the finite-history regression (*w*_{*h*+1}). A higher *γ* value, i.e. a longer timescale for the damping of the weights extending into the stimulus history, yields a higher *w*_{mean}. We can obtain *γ* for each condition (NT, ASD and DYS) by fitting the weights obtained as a function of trials extending into the history (Fig. 8 C and D). As predicted by Eq. (20), a larger window for short-term history effects (as in the ASD case relative to NT) yields a larger weight for the covariate corresponding to the mean stimulus. Finally, Eq. (20) also predicts that *w*_{mean} is proportional to *w*_{*h*+1}, which itself depends on *h*, the number of trials back we consider in the regression, implying that the number of covariates that we choose to include in the model may greatly affect the results. Both of these predictions are corroborated by plotting directly the value of *w*_{mean} obtained from the regression (Fig. 8 E).
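The identity behind Eq. (20) can be checked numerically: with exponentially discounted weights, the tail of the infinite-history sum collapses onto a single mean-stimulus covariate. A minimal sketch, with arbitrary values for *γ*, *w*_{*h*+1} and the stimuli (the truncation point `J` is chosen so that the neglected tail is negligible):

```python
import numpy as np

rng = np.random.default_rng(2)

gamma, w_h1 = 0.8, 0.3           # arbitrary discount factor and weight w_{h+1}
J = 400                          # truncation point: gamma**400 is negligible
sbar = rng.uniform(0.2, 0.8, J)  # trial means s̄ from trial t-(h+1) backward
s1 = 0.55                        # current stimulus s_1^t

# Tail of the infinite-history regression, Eq. (16), with weights of Eq. (17)
tail = sum(w_h1 * gamma**j * (sbar[j] - s1) for j in range(J))

# Equivalent single term: w_mean (<s>_gamma - s1), using Eqs. (19) and (20)
s_gamma = sum((1 - gamma) * gamma**j * sbar[j] for j in range(J))
w_mean = w_h1 / (1 - gamma)      # Eq. (20)
single = w_mean * (s_gamma - s1)

print(np.isclose(tail, single))  # → True
```

The two quantities agree to within the truncation error of the geometric series, regardless of the particular stimulus sequence drawn.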

## 4.5 Supplementary Figures

## 4.6 Parameters

# Acknowledgements

We are grateful to Loreen Hertäg for helpful comments on a previous version of this manuscript, and Arash Fassihi for helpful discussions. This work was supported by BBSRC BB/N013956/1, BB/N019008/1, Wellcome Trust 200790/Z/16/Z, Simons Foundation 564408, EPSRC EP/R035806/1, Gatsby Charitable Foundation 562980 and Wellcome Trust 562763.

# References

- [1]The central tendency of judgment
*The Journal of Philosophy, Psychology and Scientific Methods***7**:461–469 - [2]Contraction bias in memorial quantifying judgment: Does it come from a stable compressed memory representation or a dynamic adaptation process?
*The American journal of psychology*:543–564 - [3]Intensity perception. VII. Further data on roving-level discrimination and the resolution and bias edge effects
*The Journal of the Acoustical Society of America***61**:1577–1585 - [4]The time-order error and its relatives: Mirrors of cognitive processes in comparing
*Psychological Bulletin***97** - [5]How recent history affects perception: the normative approach and its heuristic approximation
*PLoS Comput Biol***8** - [6]Serial dependence in visual perception
*Nature neuroscience***17**:738–743 - [7]Posterior parietal cortex represents sensory history and mediates its effects on behaviour
*Nature***554**:368–372 - [8]Serial dependence across perception, attention, and memory
*Trends in Cognitive Sciences***21**:493–497 - [9]Serial dependencies act directly on perception
*Journal of vision***17**:6–6 - [10]Two types of serial dependence in visual working memory
*British Journal of Psychology***110**:256–267 - [11]Eye gaze direction shows a positive serial dependency
*Journal of vision***18**:11–11 - [12]Serial dependence in position occurs at the time of perception
*Psychonomic Bulletin & Review***25**:2245–2253 - [13]The perceived stability of scenes: serial dependence in ensemble representations
*Scientific reports***7**:1–9 - [14]Serial dependence in the perception of visual variance
*Journal of Vision***18**:4–4 - [15]Ghosts in the machine: memory interference from the previous trial
*Journal of neurophysiology***113**:567–577 - [16]Perceptual bias reveals slow-updating in autism and fast-forgetting in dyslexia
*Nature neuroscience***22**:256–264 - [17]Build-up of serial dependence in color working memory
*Scientific reports***10**:1–7 - [18]A bayesian and efficient observer model explains concurrent attractive and repulsive history biases in visual perception
*Elife***9** - [19]Bayesian inference underlies the contraction bias in delayed comparison tasks
*PloS one***6** - [20]Flutter discrimination: neural codes, perception, memory and decision making
*Nature Reviews Neuroscience***4**:203–218 - [21]Continuous attractors and oculomotor control
*Neural Networks***11**:1253–1258 - [22]Synaptic reverberation underlying mnemonic persistent activity
*Trends in neurosciences***24**:455–463 - [23]Nonequilibrium statistical mechanics of continuous attractors
*Neural Computation***32**:1033–1068 - [24]Continuous attractors for dynamic memories
*Elife***10** - [25]Computing with continuous attractors: stability and online aspects
*Neural computation***17**:2215–2239 - [26]Continuous attractor neural networks: candidate of a canonical model for neural information representation
*F1000Research* - [27]Dynamics of neural networks with continuous attractors
*EPL (Europhysics Letters)***84** - [28]A moving bump in a continuous manifold: a comprehensive study of the tracking dynamics of continuous attractor neural networks
*Neural Computation***22**:752–792 - [29]Continuous attractor neural networks
*Recent developments in biologically inspired computing*:398–425 - [30]A hierarchy of intrinsic timescales across primate cortex
*Nature neuroscience***17**:1661–1663 - [31]Survey of spiking in the mouse visual system reveals functional hierarchy
*Nature***592**:86–92 - [32]Neuronal timescales are functionally dynamic and shaped by cortical microarchitecture
*Elife***9** - [33]Discrimination in the sense of flutter: new psychophysical measurements in monkeys
*Journal of Neuroscience***17**:6391–6400 - [34]Discrimination of vibrotactile frequencies in a delayed pair comparison task
*Perception & psychophysics***58**:680–692 - [35]Tactile perception and working memory in rats and humans
*Proceedings of the National Academy of Sciences***111**:2331–2336 - [36]Transformation of perception from sensory to motor cortex
*Current Biology***27**:1585–1596 - [37]Neuronal correlates of tactile working memory in prefrontal and vibrissal somatosensory cortex
*Cell reports***27**:3167–3181 - [38]Dissecting the roles of supervised and unsupervised learning in perceptual discrimination judgments
*Journal of Neuroscience***41**:757–765 - [39]Modeling feature selectivity in local cortical circuits
- [40]Bump attractor dynamics underlying stimulus integration in perceptual estimation tasks
*BioRxiv* - [41]An adaptation-induced repulsion illusion in tactile spatial perception
*Frontiers in human neuroscience***11** - [42]Opposite effects of recent history on perception and decision
*Current Biology***27**:590–595 - [43]Dynamics of history-dependent perceptual judgment
*Nature communications***12**:1–15 - [44]Motion and tilt aftereffects occur largely in retinal, not in object, coordinates in the ternus–pikler display
*Journal of Vision***11**:7–7 - [45]The reference frame of the tilt aftereffect
*Journal of Vision***10**:8–8 - [46]A reinvestigation of the reference frame of the tilt-adaptation aftereffect
*Scientific reports***3**:1–7 - [47]Shorter cortical adaptation in dyslexia is broadly distributed in the superior temporal lobe and includes the primary auditory cortex
*ELife***7** - [48]Dyslexics’ faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation
*Elife***6** - [49]Memory psychophysics: An examination of its perceptual and cognitive prospects
*Advances in psychology*:441–513 - [50]Bias in quantifying judgements
- [51]Prior information biases stimulus representations during vibrotactile decision making
*Journal of Cognitive Neuroscience***22**:875–887 - [52]The central tendency bias in color perception: Effects of internal and external noise
*Journal of vision***14**:5–5 - [53]Organization of posterior parietal–frontal connections in the rat
*Frontiers in systems neuroscience* - [54]A tale of two literatures: A fidelity-based integration account of central tendency bias and serial dependency
*Computational Brain & Behavior*:1–21 - [55]The influence of prior experience and expected timing on vibrotactile discrimination
*Frontiers in neuroscience***7** - [56]Memory psychophysics for visual area and length
*Memory & Cognition***6**:327–335 - [57]Suboptimality in perceptual decision making
*Behavioral and Brain Sciences***41** - [58]Long- and short-term history effects in a spiking network model of statistical learning
*bioRxiv* - [59]Neuron activity related to short-term memory
*Science***173**:652–654 - [60]Neuronal correlate of pictorial short-term memory in the primate temporal cortex
*Nature***331**:68–70 - [61]Mnemonic coding of visual space in the monkey’s dorsolateral prefrontal cortex
*Journal of neurophysiology***61**:331–349 - [62]Visuospatial coding in primate prefrontal neurons revealed by oculomotor paradigms
*Journal of neurophysiology***63**:814–831 - [63]Neuronal correlates of parametric working memory in the prefrontal cortex
*Nature***399**:470–473 - [64]Periodicity and firing rate as candidate neural codes for the frequency of vibrotactile stimuli
*Journal of neuroscience***20**:5503–5515 - [65]Active information maintenance in working memory by a sensory cortex
*Elife***8** - [66]Neural networks and physical systems with emergent collective computational abilities
*Proceedings of the national academy of sciences***79**:2554–2558 - [67]Modeling brain function: The world of attractor neural networks
- [68]Stable and rapid recurrent processing in realistic autoassociative memories
*Neural Computation***10**:431–450 - [69]From fixed points to chaos: three models of delayed discrimination
*Progress in neurobiology***103**:214–222 - [70]Synaptic theory of working memory
*Science***319**:1543–1546 - [71]Persistent activity in neural networks with dynamic synapses
*PLoS Comput Biol***3** - [72]Interplay between persistent activity and activity-silent dynamics in the prefrontal cortex underlies serial biases in working memory
*Nature neuroscience***23**:1016–1024 - [73]A reservoir of timescales in random neural networks
*bioRxiv* - [74]Neuronal population coding of parametric working memory
*Journal of Neuroscience***30**:9424–9430 - [75]A recurrent network model of somatosensory parametric working memory in the prefrontal cortex
*Cerebral Cortex***13**:1208–1218 - [76]Flexible control of mutual inhibition: a neural model of two-interval discrimination
*Science***307**:1121–1124 - [77]Working models of working memory
*Current opinion in neurobiology***25**:20–24 - [78]Functional, but not anatomical, separation of “what” and “when” in prefrontal cortex
*Journal of Neuroscience***30**:350–360 - [79]Prior and prejudice
*Nature neuroscience***14**:943–945 - [80]How many neurons are sufficient for perception of cortical activity?
*Elife***9**