# Abstract

Biological and living systems process information across spatiotemporal scales, exhibiting the hallmark ability to constantly adapt their behavior to ever-changing and complex environments. In the presence of repeated stimuli, a distinctive response is the progressive reduction of activity at both sensory and molecular levels, known as habituation. Here, we solve a minimal microscopic model devoid of biological details to show that habituation is driven by negative feedback provided by a slow storage mechanism. Crucially, an intermediate level of habituation is associated with a steep increase in the information that the system collects on the external input over time. We find that the region characterized both by maximal information gain and by the onset of habituation can be retrieved if the system tunes its parameters to minimize dissipation and maximize information at the same time. We test our dynamical predictions against experimentally recorded neural responses in a zebrafish larva subjected to repeated looming stimulation. Our work makes a fundamental step towards uncovering the core mechanisms that shape habituation in biological systems, elucidating its information-theoretic and functional role.

Sensing, adaptation, and habituation mechanisms in biological systems span a wide range of temporal and spatial scales, from cellular to multi-cellular level, forming the basis for decision-making and the optimization of limited resources [1–8]. Prominent examples include the modulation of flagellar motion operated by bacteria according to changes in the local nutrient concentration [9–11], the regulation of immune responses through feedback mechanisms [12, 13], the progressive reduction of neural activity in response to repeated looming stimulation [14, 15], and the maintenance of high sensitivity in varying environments for olfactory or visual sensing in mammalian neurons [16–20].

In the last decade, advances in experimental techniques fostered the quest for the core biochemical mechanisms governing information processing. Simultaneous recordings of hundreds of biological signals made it possible to infer distinctive features directly from data [21–24]. However, many of these approaches fall short of describing the connection between the underlying chemical processes and the observed behaviors [25–28]. To fill this gap, several works focused on the architecture of specific signaling networks, from tumor necrosis factor [12, 13] to chemotaxis [9, 29], highlighting the essential structural ingredients for their efficient functioning. An observation shared by most of these studies is the key role of a negative feedback mechanism in inducing emergent adaptive responses [30–33]. Moreover, any information-processing system, biological or not, must obey information-thermodynamic laws that prescribe the necessity of a storage mechanism [34]. This is an unavoidable feature of numerous chemical signaling networks [9, 30] and biochemical realizations of Maxwell demons [35, 36]. The storage of information consumes energy during processing [37, 38], and thus general sensing mechanisms have to take place out of equilibrium [3, 39–41]. Recently, the discovery of memory molecules [42–44] hinted at the implementation of storing mechanisms directly at the molecular scale. Overall, negative feedback, storage, and out-of-equilibrium conditions seem to be necessary requirements for a system to process environmental information and act accordingly. To quantify the performance of a biological information-processing system, theoretical developments made substantial progress in highlighting thermodynamic limitations and advantages [16, 45, 46], making a step towards linking information and dissipation from a molecular perspective [35, 47, 48].

Here, we consider an archetypal model for sensing that encapsulates all these key ingredients, i.e., negative feedback, storage, and energy dissipation, and study its response to repeated stimuli. Indeed, in the presence of dynamic environments, it is common for a biological system to keep encountering the same stimulus. Under these conditions, a progressive decay in the amplitude of the response is often observed, both at sensory and molecular levels. In general terms, such adaptive behavior is usually termed *habituation*, and it is a common phenomenon, from biochemical concentrations [49–51] to populations of neurons [14, 15, 52, 53]. In particular, habituation characterizes many neuronal circuits along the sensory-motor processing pathways in most living organisms, both invertebrates and vertebrates [52, 53]. While it has been proposed that inhibitory feedback mechanisms modulate the stimulus weight [15, 54], there are different hypotheses about the actual functional role of habituation in regulating information flow, optimal processing, and sensitivity calibration [55], and in controlling behavior and prediction during complex tasks [56–58]. Despite its ubiquity, the onset of habituation from general microscopic models remains unexplored, along with its functional advantages in terms of information gain and energy dissipation.

In this work, we tackle these questions. Our architecture is inspired by those found in real biological systems operating at different scales [12, 16] and resembles the topologies of minimal networks exhibiting adaptive features in different contexts [49, 50, 59]. By deriving the exact solution of this prototypical model, we identify the negative feedback provided by slow information storage as the key mechanism driving habituation. We find that the information gain over time peaks at intermediate levels of habituation, uncovering that optimal processing performance is not necessarily entangled with maximal activity reduction. This optimal region can be retrieved by simultaneously minimizing dissipation and maximizing information, hinting at an a priori optimal region of operation for biological systems. Our results open the avenue to understanding the emergence of habituation, along with its information-theoretic advantage.

# Results

## Archetypal model for sensing in biological systems

We describe a model with three fundamental units: a receptor (R), a readout population (U), and a storage population (S) (Figure 1a). The presence of these three distinct components is a feature shared by several topologies exhibiting adaptive responses of various kinds [29, 49, 50, 59]. The role of the receptor is to sense external inputs, which we represent as a time-varying environment (*H*) described by the probability distribution *p _{H}*(*h, t*). The receptor can be either active (*r* = 1) or passive (*r* = 0), with the two states separated by an energetic barrier, Δ*E*. A strong external signal favors activation of the receptor, while inhibition takes place through a negative feedback process mediated by the concentration of the storage, [*S*]. The negative feedback acts to reduce the level of activity of the system, and its effect on the receptor resembles known motifs found in biochemical systems (see Figure 1b-e) [12, 16]. We model the activation of the receptor by the environmental signal through a “sensing pathway” (superscript *H*). Instead, the inhibition mechanism affects an “internal pathway” of reactions (superscript *I*). By assuming that the rates follow an effective Arrhenius law, we end up with:

where *τ _{R}* sets the timescale of the receptor. For simplicity, the driving induced by inhibition along the internal pathway depends linearly on the concentration of S at a given time through a proportionality constant *κ*. Here, the inverse temperature *β* encodes the thermal noise, as lower values of *β* are associated with faster reactions (see Methods for a detailed discussion on model parameters). Crucially, the presence of two different transition pathways, motivated by molecular considerations and pivotal in many energy-consuming biochemical systems [35, 60, 61], creates an internal non-equilibrium cycle in the receptor dynamics.

Whenever active, the receptor drives the production of the readout population *U*, which represents the direct response of the system to environmental signals. As such, we consider it to be the observable that characterizes habituation. It may describe, for example, photo-receptors or calcium concentration for olfactory or visual sensing mechanisms [14, 15, 17–20, 55]. We model its dynamics with a stochastic birth-and-death process:

where *u* denotes the number of molecules, *τ _{U}* sets the timescale of readout production, and *V* is the energy needed to produce a readout unit. When the receptor is active, *r* = 1, this effective energetic cost is reduced by an effective additional driving, *c*. Thus, active receptors transduce the environmental energy into an active pumping on the readout node, allowing readout units to encode information on the external signal.

Finally, readout units stimulate the production of the storage population *S*. Its number of molecules *s* follows a controlled birth-and-death process [62–64]:

where *σ* is the energetic cost of a storage unit and *τ _{S}* sets the timescale. For simplicity, we set a first-order catalytic form for Γ_{s→s+1}(*u*) and allow for a maximum number of storage units, *N _{S}*, so that [*S*] = *s*/*N _{S}*. The storage may represent different molecular mechanisms at a coarse-grained level as, for example, memory molecules sensitive to calcium activity [42], synaptic depotentiation, and neural populations that regulate neuronal response [14, 15]. Storage units, as we will see, are responsible for encoding the readout response and play the role of a finite-time memory.
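The three coupled processes described above can be explored qualitatively with a standard Gillespie simulation. The sketch below is a minimal illustration: the specific rate expressions, parameter values, and function names are our own assumptions, chosen only to respect the topology described in the text (the signal activates R, an active R pumps U, U produces S, and S inhibits R).

```python
import math
import random

def gillespie(T=200.0, beta=1.0, V=2.0, c=3.0, sigma=0.5, kappa=3.0,
              N_S=25, h=1.0, tau_u=0.01, tau_r=0.1, tau_s=5.0, seed=0):
    """Minimal Gillespie sketch of receptor r, readout u, storage s.

    The rate forms below are illustrative stand-ins, not the model's
    exact Arrhenius rates; only the network topology is taken from
    the text."""
    rng = random.Random(seed)
    t, r, u, s = 0.0, 0, 0, 0
    traj = []
    while t < T:
        rates = [
            # receptor activation (sensing pathway, driven by the signal h)
            (math.exp(beta * h) if r == 0 else 0.0) / tau_r,
            # receptor deactivation (internal pathway, driven by [S] = s/N_S)
            (math.exp(beta * kappa * s / N_S) if r == 1 else 0.0) / tau_r,
            # readout birth: barrier V lowered by c when the receptor is active
            math.exp(-beta * (V - c * r)) / tau_u,
            # readout death
            u / tau_u,
            # storage birth: first-order in u, capped at N_S units
            u * (1.0 - s / N_S) / tau_s,
            # storage death, suppressed by the energetic cost sigma
            s * math.exp(-beta * sigma) / tau_s,
        ]
        total = sum(rates)
        t += rng.expovariate(total)
        x, cum, i = rng.random() * total, 0.0, 0
        for i, k in enumerate(rates):
            cum += k
            if x < cum:
                break
        if i == 0:
            r = 1
        elif i == 1:
            r = 0
        elif i == 2:
            u += 1
        elif i == 3:
            u -= 1
        elif i == 4:
            s += 1
        else:
            s -= 1
        traj.append((t, r, u, s))
    return traj
```

Averaging many such trajectories gives access to 〈*U*〉(*t*) and 〈*S*〉(*t*); with these placeholder rates, the storage builds up during stimulation and progressively suppresses receptor activation, mimicking the negative feedback loop.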

Our model, being devoid of specific biological details, can be adapted to describe systems at very different scales (Figure 1b-d). We do not expect any detailed biochemical implementation to qualitatively change our results. However, we expect from previous studies [64] that the presence of multiple timescales in the system will be fundamental in shaping information between the different components. Thus, we employ the biologically plausible assumption that *U* undergoes the fastest evolution, while *S* and *H* are the slowest degrees of freedom [29, 65]. We have that *τ _{U}* ≪ *τ _{R}* ≪ *τ _{S}* ≈ *τ _{H}*, where *τ _{H}* is the timescale of the environment.

## The onset of habituation and its functional role

Habituation occurs when the response of the system, represented by the number of active readout units, decreases upon repeated stimulation. In our architecture, we expect it to emerge due to the increase in the storage population, which in turn provides an increasing negative feedback to the receptor. To study the onset and the features of habituation, we consider a switching exponential signal, *p _{H}*(*h, t*) ~ exp[−*h*/〈*H*〉(*t*)]. The time-dependent average 〈*H*〉 periodically switches between two values, 〈*H*〉_{min} and 〈*H*〉_{max}, corresponding to a vanishing signal and a strong stimulation of the receptor, respectively. Overall, the system dynamics is governed by four different operators, *Ŵ _{X}*, with *X* = *R, U, S, H*, one for each population and one for the environment. The resulting master equation is:

$$\partial_t P = \left( \hat{W}_R + \hat{W}_U + \hat{W}_S + \hat{W}_H \right) P$$

where *P* denotes, in general, the joint propagator *P (u, r, s, h, t*|*u*_{0}, *r*_{0}, *s*_{0}, *h*_{0}, *t*_{0}), with *u*_{0}, *r*_{0}, *s*_{0} and *h*_{0} initial conditions at time *t*_{0}. By taking advantage of the timescale separation, we can write an exact self-consistent solution to Eq. (4) at all times *t* (see Methods and Supplementary Information).

We assume that 〈*H*〉 switches to 〈*H*〉_{max} at equally spaced intervals *t*_{1}, …, *t _{N}*, each with the same duration Δ*T*. After a large number of inputs, the system reaches a time-periodic steady state (see Fig. 2d-e). Thus, habituation is quantified by the change in the average response of the system:

$$\Delta \langle U \rangle = \langle U \rangle (t_\infty) - \langle U \rangle (t_1)$$

where *t*_{1} is the time of the first signal, and *t*_{∞} is the time of a signal at the steady state. Whenever Δ〈*U*〉 < 0, the system is habituating to the external inputs. In Figure 2a, we study habituation as a function of the inverse temperature *β* and the energetic cost of storage, *σ* (see Methods). As expected, habituation is stronger at small *σ*, where a large storage production provides a strong negative feedback to the receptor, sharply decreasing 〈*U*〉.
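As a small worked example of this definition, the following computes Δ〈*U*〉 from a response trace by comparing the mean readout during the first and the last (steady-state) stimulus. The synthetic trace and the windowing convention are hypothetical:

```python
import numpy as np

def habituation_index(mean_u, stim_starts, window):
    """Delta<U>: mean response during the steady-state stimulus minus the
    mean response during the first one; negative values mean habituation.

    mean_u      : <U>(t) sampled on a uniform time grid (1D array)
    stim_starts : sample indices at which each stimulus begins
    window      : number of samples per stimulus"""
    responses = [mean_u[t0:t0 + window].mean() for t0 in stim_starts]
    return responses[-1] - responses[0]

# Toy trace: five repeated stimuli with geometrically decaying amplitude.
u = np.zeros(500)
starts = list(range(0, 500, 100))
for k, t0 in enumerate(starts):
    u[t0:t0 + 50] = 10.0 * 0.7 ** k
delta_u = habituation_index(u, starts, 50)
print(delta_u < 0)  # True: the response amplitude has decreased
```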

During its dynamical evolution, the system encodes information on the environment *H*. We are particularly interested in how much information is captured by the readout population, which is measured by the mutual information between *U* and *H* at time *t* (see Methods):

$$I_{U,H}(t) = H[p_U](t) - \sum_h p_H(h, t)\, H[p_{U|H=h}](t)$$

where *H*[*p*](*t*) is the Shannon entropy of the probability distribution *p*, and *p _{U|H}* denotes the conditional probability distribution of *U* given *H*. *I _{U,H}* quantifies the system performance in terms of the information that the readout population captures on the external input at each time. Furthermore, it coincides with the entropy increase of the readout distribution.
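This decomposition, the marginal readout entropy minus the average conditional entropy, translates directly into a short numerical routine. The joint distributions below are toy examples, not model outputs:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a 1D probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def mutual_information(p_uh):
    """I_{U,H} = H[p_U] - sum_h p_H(h) H[p_{U|H=h}] for a joint
    distribution stored as a 2D array p_uh[u, h]."""
    p_u = p_uh.sum(axis=1)
    p_h = p_uh.sum(axis=0)
    h_cond = sum(p_h[j] * entropy(p_uh[:, j] / p_h[j])
                 for j in range(len(p_h)) if p_h[j] > 0)
    return entropy(p_u) - h_cond

# Independent U and H carry no mutual information...
p_indep = np.outer([0.5, 0.5], [0.3, 0.7])
# ...while a perfectly correlated pair carries H[p_U] = log 2 nats.
p_corr = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(p_indep) < 1e-12)   # True
print(round(mutual_information(p_corr), 6))  # 0.693147 = log 2
```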

In Figure 2b, we show how the corresponding information gain Δ*I _{U,H}*, defined in analogy to Eq. (5), changes with *β* and *σ*. We find a region where the information gain is maximal. Surprisingly, this region corresponds to intermediate values of Δ〈*U*〉, suggesting that strong habituation driven by a low energetic cost of storage is ultimately detrimental to the system.

We can understand this feature by introducing the feedback information

$$\Delta I_f = I_{(U,S),H} - I_{U,H}$$

which quantifies how much the simultaneous knowledge of *U* and *S* increases *I _{U,H}* with respect to knowing solely *U*. We find that, during repeated external stimulation, the change in feedback information, ΔΔ*I _{f}*, again defined in analogy to Eq. (5), may be negative (Figure 2c). This indicates that the negative feedback on the receptor is impeding the information-theoretic performance of the system, independently of the habituation strength. Crucially, ΔΔ*I _{f}* sharply increases in the region of maximal information gain, hinting that, at intermediate values of habituation, the information gain in the readout is driven by the storage mechanism. For the sake of simplicity, and to emphasize the information-theoretic advantage, we refer to this region of maximal information gain and intermediate habituation as the “onset” of habituation.
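The same machinery extends to the feedback information. Reading the verbal definition above as the difference between the information carried by (*U*, *S*) jointly and by *U* alone (our interpretation), it can be computed from the three-variable joint distribution. The joint arrays below are toy examples:

```python
import numpy as np

def mutual_info(p_xy):
    """Mutual information (nats) of a joint 2D distribution p_xy[x, y]."""
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    m = p_xy > 0
    return float(np.sum(p_xy[m] * np.log(p_xy[m] / (p_x * p_y)[m])))

def feedback_information(p_ush):
    """I_{(U,S),H} - I_{U,H} for a joint array p_ush[u, s, h]: the extra
    information on H gained by observing S alongside U."""
    nu, ns, nh = p_ush.shape
    i_us_h = mutual_info(p_ush.reshape(nu * ns, nh))  # (U, S) as one variable
    i_u_h = mutual_info(p_ush.sum(axis=1))            # S marginalized out
    return i_us_h - i_u_h

# If S merely duplicates U, it adds nothing: the feedback information is zero.
p_dup = np.zeros((2, 2, 2))
p_dup[0, 0, 0] = p_dup[1, 1, 1] = 0.5
print(feedback_information(p_dup) < 1e-12)   # True

# If H = U XOR S, U alone is uninformative but (U, S) determines H.
p_xor = np.zeros((2, 2, 2))
for u in range(2):
    for s in range(2):
        p_xor[u, s, u ^ s] = 0.25
print(round(feedback_information(p_xor), 6))  # 0.693147 = log 2
```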

In Figure 2(d-g), we show the evolution of the system for values of (*β, σ*) that lie in the region of maximal information gain. The readout activity decreases in time, modulating the response of the system to the repeated input (Figure 2d). This behavior resembles recent observations on habituation under analogous external conditions in various experimental systems [14, 15, 49–51]. We emphasize that the readout population is the fastest species at play, hence each point of the trajectory 〈*U*〉(*t*) corresponds to a steady-state solution. As expected, the reduction of 〈*U*〉 is a direct consequence of the increase of the average storage population, 〈*S*〉 (Figure 2e). In this region, the increase of both *I _{U,H}* and Δ*I _{f}* over time during habituation is optimal (Figure 2f). This behavior may seem surprising, since the increase in *I _{U,H}* is concomitant with a reduction of the population that is encoding the signal. However, let us note that the mean of *U* is not directly related to the factorizability of the joint distribution *p _{U,H}*, i.e., to how much information on the signal is encoded in the readout. Furthermore, the inhibitory effect provided by *S* is enhanced by repeated stimuli, generating a stronger dependency between *H* and *U* over time.

The increase of *I _{U,H}* comes along with another intriguing result. Since, during habituation, the concentrations of the internal populations *U* and *S* change in time, we can quantify how much energy is required to support these processes. The rate of dissipation into the environment due to these internal mechanisms is (see Methods):

We refer to this quantity as the internal dissipation of the system and, in Figure 2h, we show that it decreases over time, hinting at a synergistic thermodynamic advantage at the onset of habituation.

## Maximal information gain from an optimization principle

We now investigate whether the region of maximal information gain can be retrieved by means of an a priori optimization principle. To do so, we focus on the case of a constant environment. In this scenario, the system can tune its internal parameters to optimally respond to the statistics of an external input during a prolonged stimulation, i.e., the system “learns” the parameters while measuring an input over a large (infinite) observation time. Thus, the input statistics is given by the stationary distribution of the signal.

When the system reaches its steady state, the information that the readout has on the signal, *I _{U,H}*, is estimated from the joint probability *p _{U,H}* (Figure 3a). At the same time, however, the system is consuming energy to maintain the receptor active. The receptor dissipation per unit temperature is given by

as we show in Figure 3b. Large values of the mutual information compatible with minimal dissipation in the receptor can be obtained by maximizing the Pareto functional [66]:

$$\mathcal{L}_\gamma = \gamma\, I_{U,H} - (1 - \gamma)\, \delta Q_R$$

where *γ* ∈ [0, 1] sets the strategy implemented by the system. If *γ* ≪ 1, the system prioritizes minimizing dissipation, whereas if *γ* ≈ 1 it acts to preferentially maximize information. The set of (*β, σ*) that maximize Eq. (10) defines a Pareto-optimal front in the (*δQ _{R}, I _{U,H}*) space (Figure 3c). At fixed receptor dissipation, this front represents the maximum information between the readout and the external input that can be achieved. The region below the front is therefore suboptimal. Instead, the points above the front are inaccessible, as higher values of *I _{U,H}* cannot be attained without increasing *δQ _{R}*. We note that, since *β* usually cannot be directly controlled by the system, the Pareto front indicates the optimal *σ* to which the system tunes at fixed *β* (see Methods and Supplementary Information for details).

Along this optimal front, we find that the system displays habituation (see Figure 3d). Furthermore, when plotted in the (*β, σ*) plane in the presence of a switching dynamical input, the front qualitatively corresponds to the region of maximal information gain and the onset of habituation, as we see in Figure 3e. Remarkably, this implies that once the system tunes its internal parameters to respond to a constant signal by maximizing information and minimizing dissipation, it also learns to respond optimally to the time-varying input in terms of information gain. In Figure 3f, we show that at fixed *β*, the Pareto front (gray area) represents the region around the peak of Δ*I _{U,H}*, where Δ〈*U*〉 attains intermediate values. In other words, the onset of habituation emerges spontaneously when the system attempts to activate its receptor as little as possible, while retaining information about the external environment.
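Numerically, a front of this kind can be extracted from any sampled set of (dissipation, information) pairs, e.g., from a scan over (*β*, *σ*), by keeping only the non-dominated points. The cloud below is synthetic data, not output of the model:

```python
import numpy as np

def pareto_front(dissipation, information):
    """Indices of Pareto-optimal points: those for which no other point
    has both lower dissipation and higher information. Sweeping by
    increasing cost and keeping strict information records gives the front."""
    order = np.argsort(dissipation)
    front, best = [], -np.inf
    for i in order:
        if information[i] > best:
            front.append(int(i))
            best = information[i]
    return front

# Synthetic cloud: information saturates with dissipation, plus noise.
rng = np.random.default_rng(1)
q = rng.uniform(0.0, 5.0, 400)                               # stands for dQ_R
info = 1.0 - np.exp(-q) - 0.3 * rng.uniform(0.0, 1.0, 400)   # stands for I_{U,H}
front = pareto_front(q, info)
```

Points in `front` trace the boundary of achievable information at fixed dissipation; maximizing a linear trade-off of the form γ·information − (1 − γ)·dissipation selects, as γ varies, points on the convex hull of this boundary.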

## The role of information storage

The presence of a storage mechanism is fundamental in our model. Furthermore, its role in mediating the negative feedback is suggested by several experimental and theoretical observations [9, 29–33]. Crucially, whenever the storage is eliminated from our model, habituation cannot take place, highlighting its key role in driving the observed dynamics.

In Figure 4a, we show that habituation and the change in the storage, Δ〈*S*〉, are deeply related to one another. The more 〈*S*〉 relaxes between two consecutive signals, the less the readout population reduces its activity. This ascribes to the storage population the role of an effective memory and highlights its dynamical importance for habituation. Moreover, the dependence of the storage dynamics on the interval between consecutive signals, Δ*T*, influences the information gain as well. Indeed, upon increasing Δ*T*, we observe a decrease of the mutual information on the next stimulus (Figure 4b), and the system needs to produce a larger number of readout units upon the new input. In the Supplementary Information, we further analyze the impact of different signal and pause durations.

We remark here that the proposed model is fully Markovian in its microscopic components, and the memory that governs readout habituation spontaneously emerges from the interplay among the internal timescales. In particular, recent works have highlighted that the storage needs to evolve on a slower timescale, comparable to that of the external input, in order to generate information in the receptor and in the readout [64]. In our model, instantaneous negative feedback implemented directly by *U* (bypassing the storage mechanism) leads to no time-dependent modulations (see Supplementary Information). Conversely, a readout population evolving on a timescale comparable to that of the signal cannot effectively mediate the negative feedback on the receptor since its population increase (as for the storage in the complete model) would not lead to habituation (see Supplementary Information). Thus, negative feedback has to be implemented by a separate degree of freedom evolving on a slow timescale.

## Minimal features of neural habituation

In neural systems, habituation is typically measured as a progressive reduction of the stimulus-driven neuronal firing rate [14, 15, 52, 53, 55]. To test whether our minimal model can be used to capture the typical neural habituation dynamics, we measured the response of zebrafish larvae to repeated looming stimulations via volumetric multiphoton imaging [67]. From a whole-brain recording of ≈ 55000 neurons, we extracted a subpopulation of ≈ 2400 neurons in the optic tectum with a temporal activity profile that is most correlated with the stimulation protocol (see Methods).

Our model can be extended to qualitatively reproduce some features of the progressive decrease in neuronal response amplitudes. We identify each single readout with a subpopulation of binary neurons. A fraction of neurons are randomly turned on each time a readout unit is activated (see Methods). We tune the model parameters to have a comparable number of total active neurons at the first stimulus with respect to the experimental setting. Moreover, we set the pause and stimulus durations in line with the typical timescales of the looming stimulation. We choose the model parameters *β* and *σ* in such a way that the system operates close to the peak of information gain and the activity decrease over time is comparable to the activity decrease in experimental data (see Supplementary Information). In this way, we can focus on the effects of storage and feedback mechanisms without modeling further biological details. The patterns of the model-generated activity are remarkably similar to the experimental ones (see Figure 5a). A 2-dimensional embedding of the data via PCA (explained variance ≈ 70%) reveals that the evoked neural response is described by the first principal direction, while habituation is reflected in the second (Figure 5b). Remarkably, as we see in Figure 5c, this is the case in our minimal model as well, although the neural response is replaced by the switching on/off dynamics of the input.

# Discussion

In this work, we considered a minimal architecture that serves as a microscopic and archetypal description of sensing processes across biological scales. Informed by theoretical and experimental observations, our model includes three fundamental mechanisms: a receptor, a readout population, and a storage mechanism that drives negative feedback. We have shown that our model robustly displays habituation under repeated external inputs, a widespread phenomenon in both biochemical and neural systems. We find a regime of parameters of maximal information gain, where habituation drives an increase in the mutual information between external input and the system’s response. Remarkably, the system can spontaneously tune to this region of parameters if it enforces an information-dissipation trade-off. In particular, optimal systems lie at the onset of habituation, characterized by intermediate levels of activity reduction, as both too-strong and too-weak negative feedback are detrimental to information gain. Our results suggest that the functional advantages of the onset of habituation are rooted in the interplay between energy dissipation and information gain.

Although minimal, our model can capture basic features of neural habituation, where it is generally accepted that inhibitory feedback mechanisms modulate the stimulus weight [54]. Remarkably, recent works reported the existence of a separate inhibitory neuronal population whose activity increases during habituation [15]. Our model suggests that this population might play the role of a storage mechanism, allowing the system to habituate to repeated signals. However, in neural systems, a prominent role in encoding both short- and long-term information is also played by synaptic plasticity [68, 69] as well as by memory molecules [42–44], at a biochemical level. A comprehensive analysis of how information is encoded and retrieved will most likely require all these mechanisms at once. Including an explicit connectivity structure with synaptic updates in our model may help in this direction, at the price of analytical tractability. Further, recent experiments also showed that by increasing the pause between two consecutive stimuli, the readout starts responding again, as theoretically predicted by our model [15]. Importantly, our framework allows us to formulate quantitative predictions of the system’s response to subsequent stimulation. In particular, the increase in pause durations will decrease the habituation strength, until a typical time at which habituation should disappear. Comparison with experiments by modulating the frequency and intensity of stimulation will help identify the model parameters characterizing the system under investigation and, as such, its information-theoretic performance. Overall, these results hint at the fact that our minimal architecture may lay the foundation of habituation dynamics across vastly different biological scales.

Extensions of these ideas are manifold. Other *a priori* optimization principles for the system should be considered, focusing in particular on more detailed and realistic molecular schemes. Upon these premises, the possibility of inferring the underlying biochemical structure from observed behaviors is a fascinating direction [49]. Furthermore, since we focused on repetitions of statistically identical signals, it will be fundamental to characterize the system’s response to diverse environments [70]. To this end, incorporating multiple receptors or storage populations may be needed to harvest information in complex conditions. In such scenarios, correlations between external signals may help reduce the encoding effort as, intuitively, S is acting as an information reservoir for the system. Moreover, such stored information might be used to make predictions on future stimuli and behavior, even if the detailed biological implementation of this complex task is still to be explored [56–58]. Indeed, living systems do not passively read external signals but often act upon the environment. Both storage mechanisms and their associated negative feedback will remain core modeling ingredients, paving the way to understanding how this encoded information guides learning, predictions, and decision-making, a paramount question in many fields.

Our work serves as a fundamental framework for these ideas. On the one hand, it encapsulates key ingredients to support habituation while still being minimal enough to allow for analytical treatment. On the other hand, it may help the experimental quest for signatures of these physical ingredients in a variety of systems. Ultimately, our results show how habituation - a ubiquitous phenomenon taking place at strikingly different biological scales - may stem from an information-based advantage, shedding light on the optimization principle underlying its emergence and relevance for any biological system.

# Acknowledgements

G.N., S.S., and D.M.B. acknowledge Amos Maritan for fruitful discussions. D.M.B. thanks Paolo De Los Rios for insightful comments. G.N. and D.M.B. acknowledge the Max Planck Institute for the Physics of Complex Systems for hosting G.N. during the initial stage of this work.

# Methods

### Model parameters.

The system is driven out of equilibrium by both the external field and the storage inhibition through the receptor dynamics, whose dissipation per unit temperature is *δQ _{R}*. The energetic barrier (*V − cr*) fixes the average values of the readout population both in the passive and in the active state, namely 〈*U*〉_{P} = *e*^{−βV} and 〈*U*〉_{A} = *e*^{−β(V−c)} (see Eq. (2)), and *κ* controls the effectiveness of the storage in inhibiting the receptor’s activation. We assume that, on average, the activation rate due to the field is balanced by the feedback of a fraction *α* = 〈*S*〉/*N _{S}* of the storage population, so that we only need to fix *α*. Moreover, Δ*E* = 1, 〈*U*〉_{A} = 150, 〈*U*〉_{P} = 0.5, *N _{S}* = 25, and *α* = 2/3. We remark that the emerging features of the model are independent of the specific choice of these parameters. They affect the number of active units at each time step, but all the results presented here on the information gain during habituation remain valid. Furthermore, we typically consider the average of the exponentially distributed signal to be 〈*H*〉_{max} = 10 and 〈*H*〉_{min} = 0.1 (see Supplementary Information for details).

Overall, we are left with *β* and *σ* as free parameters. *β* quantifies the amount of thermal noise in the system, and at small *β* the thermal activation of the receptor hinders the effect of the field and makes the system almost unable to process information. Conversely, if *β* is high, the system must overcome large thermal inertia, increasing the dissipative cost. In this regime of weak thermal noise, we expect that, given a sufficient amount of energy, the system can effectively process information.

### Timescale separation.

We solve our system in a timescale separation framework [64, 71, 72], where the storage evolves on a timescale that is much slower than all the other internal ones, i.e.,

$$\tau_U \ll \tau_R \ll \tau_S \approx \tau_H$$

The fact that *τ _{S}* is the slowest timescale at play is crucial to making the storage act as an information reservoir. This assumption is also compatible with biological examples. The main difficulty arises from the presence of the feedback: the field influences the receptor and thus the readout population, which in turn impacts the storage population and finally changes the deactivation rate of the receptor. Schematically, *H* → *R* → *U* → *S* → *R*, but the causal order does not reflect the temporal one.

We start with the master equation for the propagator *P(u, r, s, h, t*|*u*_{0}, *r*_{0}, *s*_{0}, *h*_{0}, *t*_{0}),

We rescale time by *τ _{S}* and introduce two small parameters to control the timescale separation analysis, *ε* = *τ _{U}*/*τ _{R}* and *δ* = *τ _{R}*/*τ _{H}*. Since *τ _{S}*/*τ _{H}* = *O*(1), we set this ratio to 1 without loss of generality. We then write *P* = *P*^{(0)} + *εP*^{(1)} and expand the master equation order by order in *ε*; at leading order, the readout relaxes to its steady state conditioned on the slow variables, while the remaining degrees of freedom are described by a propagator Π. We obtain that Π obeys the following equation:

Yet again, Π = Π^{(0)} + *δ*Π^{(1)} allows us to write Π^{(0)} = at order *O*(*δ*^{−1}), where . Expanding first in *ε* and then in *δ* sets a hierarchy among timescales. Crucially, due to the feedback present in the system we cannot solve the next order explicitly to find *F*. Indeed, after a marginalization over*r*, we find *F*, at order **O**(1), where . Hence, the evolution operator for *F* depends manifestly on *s*, and the equation cannot be self-consistently solved. To tackle the problem, we first discretize time, considering a small interval, i.e., *t* = *t*_{0} + Δ*t* with Δ*t* ≪ *τ _{U}* and thus . We thus find

*F*(*s, h, t*|*s*_{0}, *h*_{0}, *t*_{0}) = *P*(*s, t*|*s*_{0}, *t*_{0}) *P _{H}*(*h, t*|*h*_{0}, *t*_{0}) in the domain *t* ∈ [*t*_{0}, *t*_{0} + Δ*t*], since *H* evolves independently from the system (see also Supplementary Information for analytical steps).

Iterating the procedure for multiple time steps, we end up with a recursive equation for the joint probability *p _{U,R,S,H}*(*u, r, s, h, t*_{0} + Δ*t*). We are interested in the following marginalization

where *P*(*s*′, *t* → *s, t* + Δ*t*) is the propagator of the storage at fixed readout. This is the Chapman-Kolmogorov equation in the timescale separation approximation. Notice that this solution requires the knowledge of *p _{U,S}* at the previous time-step and it has to be solved iteratively.

### Explicit solution for the storage propagator.

To find a numerical solution to our system, we first need to compute the propagator *P*(*s _{0}, t_{0} → s,t*). Formally, we have to solve the master equation

where we used the shorthand notation *P*(*s*_{0} → *s*) = *P*(*s*_{0}, *t*_{0} → *s, t*). Since our formula has to be iterated over small time-steps, i.e., *t* − *t*_{0} = Δ*t* ≪ 1, we can write the propagator as follows

where *w _{ν}* and *λ _{ν}* are respectively the eigenvectors and eigenvalues of the transition matrix *Ŵ _{S}*(*u*_{0}), and the coefficients *a*^{(ν)} are such that .

Since the eigenvalues and eigenvectors of *Ŵ _{S}*(*u*_{0}) might be computationally expensive to find, we employ another simplification. As Δ*t* → 0, we can restrict the matrix to jumps to the *n*-th nearest neighbors of the initial state (*s*_{0}, *t*_{0}), assuming that all other states are left unchanged in small time intervals. We take *n* = 2 and check the accuracy of this approximation against the full simulation for a limited number of time-steps.
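As a concrete illustration of this truncation, the sketch below builds the restricted transition matrix around *s*_{0} and propagates it over a small Δ*t* via its eigendecomposition. The birth and death rates are hypothetical placeholders, not the model's actual rates, which depend on *u*_{0} and the model parameters:

```python
import numpy as np

def short_time_propagator(s0, dt, birth, death, n=2, n_states=50):
    """Approximate P(s0 -> s) over a small interval dt by restricting
    the birth-and-death transition matrix to the n-th nearest
    neighbors of the initial state s0 (all other states are frozen)."""
    lo, hi = max(0, s0 - n), min(n_states - 1, s0 + n)
    states = np.arange(lo, hi + 1)
    m = len(states)
    W = np.zeros((m, m))
    for i, s in enumerate(states):
        if i + 1 < m:
            W[i + 1, i] = birth(s)       # jump s -> s + 1
        if i - 1 >= 0:
            W[i - 1, i] = death(s)       # jump s -> s - 1
    np.fill_diagonal(W, -W.sum(axis=0))  # probability conservation
    # eigendecomposition: P(dt) = sum_v a_v w_v exp(lambda_v dt)
    lam, V = np.linalg.eig(W)
    e0 = np.eye(m)[:, states.tolist().index(s0)]   # initial condition
    a = np.linalg.solve(V, e0)                     # coefficients a^(v)
    p = (V * np.exp(lam * dt)) @ a
    return states, np.real(p)

# hypothetical rates, for illustration only
states, p = short_time_propagator(s0=10, dt=0.05,
                                  birth=lambda s: 1.0,
                                  death=lambda s: 0.2 * s)
```

The returned vector is normalized by construction, since the columns of the generator sum to zero.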

### Mean-field relationship.

We note that 〈*U*〉 and 〈*S*〉 satisfy the following mean-field relationship:

where *f*_{0}(*x*) is an analytical function of its argument (see Supplementary Information). Eq. (S1) clearly states that only the fraction of active storage units is relevant to determining the habituation dynamics.

### Mutual information.

Once we have *p _{U}*(*u, t*) (obtained by marginalizing *p _{U,S}* over *s*) for a given *p _{H}*(*h, t*), we can compute the mutual information

where ℍ is the Shannon entropy. For the sake of simplicity, we consider that the external field follows an exponential distribution, *p _{H}*(*h, t*) = *λ*(*t*)*e*^{−λ(t)h}. Notice that, in order to determine such a quantity, we need the conditional probability *p _{U|H}*(*u, t*). In the Supplementary Information, we show how all the necessary joint and conditional probability distributions can be computed from the dynamical evolution derived above.

We also highlight here that the timescale separation implies *I _{S,H}* = 0, since

Although it may seem surprising, this is a direct consequence of the fact that *S* is only influenced by *H* through the stationary state of *U*. Crucially, the presence of the feedback is still fundamental to promote habituation. Indeed, we can always write the mutual information between the field *H* and both the readout *U* and the storage *S* together as *I _{(U,S),H}* = Δ*I _{f}* + *I _{U,H}*, where Δ*I _{f}* = *I _{(U,S),H}* − *I _{U,H}* = *I _{(U,H),S}* − *I _{U,S}*. Since Δ*I _{f}* > 0 (by standard information-theoretic inequalities), the storage increases the information that the two populations together have on the external field. Overall, although *S* and *H* are independent in this limit, the feedback is paramount in shaping how the system responds to the external field and stores information about it.
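Numerically, these quantities reduce to entropies of marginals of a joint distribution. The sketch below, with an arbitrary random array standing in for the model's joint distribution *p*(*u, s, h*), computes *I _{U,H}* and the feedback information Δ*I _{f}*, and illustrates that Δ*I _{f}* is non-negative:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a (flattened) probability vector."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(p_xy):
    """I(X;Y) = H[X] + H[Y] - H[X,Y] for a joint distribution p_xy."""
    return (entropy(p_xy.sum(axis=1)) + entropy(p_xy.sum(axis=0))
            - entropy(p_xy.ravel()))

# random stand-in for the joint p(u, s, h); in the model this comes
# from the iterative solution of the master equation
rng = np.random.default_rng(0)
p_ush = rng.random((4, 3, 5))
p_ush /= p_ush.sum()

I_uh = mutual_information(p_ush.sum(axis=1))   # I_{U,H}
p_us_h = p_ush.reshape(4 * 3, 5)               # treat (U, S) as one variable
I_us_h = mutual_information(p_us_h)            # I_{(U,S),H}
dI_f = I_us_h - I_uh                           # feedback information
```

By the chain rule of mutual information, Δ*I _{f}* equals the conditional mutual information *I*(*S*;*H*|*U*) and is therefore non-negative for any joint distribution.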

### Dissipation of internal processes.

The production of readout, *u*, and storage, *s*, molecules requires energy. From the modeling of their dynamics as controlled stochastic birth-and-death processes, we quantify the dissipation into the environment using the environmental contribution of the Schnakenberg entropy production, which is also the only one that survives at stationarity [73]. We have:

where we indicated all possible dependencies in the joint probability distribution. By employing the timescale separation [71], and noting that Γ_{u→u±1} do not depend on s, we finally have:

As this quantity decreases during habituation, the system tends to dissipate less and less into the environment to produce the internal populations that are required to encode and store the external signal.
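For a one-dimensional birth-and-death process, the environmental Schnakenberg term can be sketched as a sum over edges of the net probability current times the log-ratio of forward and backward rates. The rates below are hypothetical placeholders; note that, unlike the driven model, this standalone chain satisfies detailed balance, so the term vanishes at its stationary state and the sketch only illustrates the formula, not the model's driven dissipation:

```python
import math
import numpy as np

def env_entropy_flow(p, gplus, gminus):
    """Environmental (Schnakenberg) term for a 1-d birth-and-death
    process: net current on each edge u <-> u+1 times
    ln(forward rate / backward rate)."""
    sigma = 0.0
    for u in range(len(p) - 1):
        J = p[u] * gplus(u) - p[u + 1] * gminus(u + 1)  # net edge current
        sigma += J * math.log(gplus(u) / gminus(u + 1))
    return sigma

# hypothetical constant production and linear degradation rates
lam, mu = 2.0, 0.5
gplus = lambda u: lam
gminus = lambda u: mu * u

# stationary state is Poisson with mean lam/mu: every edge current
# vanishes, hence the environmental term vanishes as well
mean = lam / mu
p_st = np.array([math.exp(-mean) * mean**u / math.factorial(u)
                 for u in range(60)])
p_st /= p_st.sum()

# away from stationarity the term is generally nonzero
p_flat = np.ones(60) / 60
```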

### Pareto optimization.

We perform a Pareto optimization at stationarity in the presence of a prolonged stimulation. We seek the optimal values of (*β, σ*) by maximizing the functional

where *γ* ∈ [0, 1]. Hence, we maximize the information between the readout and the field while simultaneously minimizing the dissipation of the receptor induced by both the signal and the feedback process, as discussed in the main text. The values are normalized since, in principle, they can span different orders of magnitude. In the Supplementary Information, we detail the derivation of the Pareto front and show that qualitatively similar results can be obtained for a 3-d Pareto-like surface obtained by also maximizing the feedback information, Δ*I _{f}*.
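The optimization itself can be sketched as a grid search: normalize the two objectives, then maximize the weighted functional for each *γ* to trace the front. The objective surfaces below are hypothetical stand-ins for the model's *I _{U,H}* and *δQ _{R}*:

```python
import numpy as np

# stand-in objective surfaces on a (beta, sigma) grid; in the actual
# model these come from the stationary solution of the master equation
betas = np.linspace(0.1, 5.0, 40)
sigmas = np.linspace(0.1, 5.0, 40)
B, S = np.meshgrid(betas, sigmas, indexing="ij")
info = np.exp(-((B - 3.0)**2 + (S - 1.5)**2) / 4.0)  # hypothetical I_{U,H}
diss = B * S                                         # hypothetical dQ_R

# normalize both objectives to [0, 1] since they may span
# different orders of magnitude
info_n = (info - info.min()) / (info.max() - info.min())
diss_n = (diss - diss.min()) / (diss.max() - diss.min())

front = []
for gamma in np.linspace(0.0, 1.0, 21):
    F = gamma * info_n - (1.0 - gamma) * diss_n  # Pareto functional
    i, j = np.unravel_index(np.argmax(F), F.shape)
    front.append((betas[i], sigmas[j]))          # optimal (beta, sigma)
```

Sweeping *γ* from 0 to 1 interpolates between pure dissipation minimization and pure information maximization, and the collected optima trace the front in parameter space.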

### Recording of whole brain neuronal activity in zebrafish larvae.

Acquisitions of the zebrafish brain activity were carried out in one Elavl3:H2BGCaMP6s larva at 5 days post fertilization, raised at 28 °C on a 12 h light/12 h dark cycle, according to the approval by the Ethical Committee of the University of Padua (61/2020 dal Maschio). The subject was embedded in 2 percent agarose gel, and brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Data were acquired at 30 frames per second, covering an effective field of view of about 450 × 900 μm with a resolution of 512 × 1024 pixels. The volumetric module acquires a volume of about 180 − 200 μm in thickness, encompassing 30 planes separated by about 7 μm, at a rate of 1 volume per second, sufficient to track the slow dynamics associated with the fluorescence-based activity reporter GCaMP6s. Visual stimulation was presented in the form of a looming stimulus at 150 s intervals, centered on the fish eye (see Supplementary Information). Neuron identification and anatomical registration were performed as described in [67].

### Data analysis.

The acquired temporal series were first processed using an automatic pipeline, including motion artifact correction, temporal filtering with a 3 s rectangular window, and automatic segmentation. The obtained dataset was manually curated to resolve segmentation errors or to integrate cells not detected automatically. We fit the activity profiles of about 55,000 cells with a linear regression model using a set of base functions representing the expected responses to each stimulation event. These base functions were obtained by convolving the exponentially decaying kernel of the GCaMP signal lifetime with square waveforms characterizing the presentation of the corresponding visual stimulus. The resulting score coefficients of the fit were used to extract the cells whose score fell within the top 5% of the distribution, resulting in a population of ≈ 2400 neurons whose temporal activity profiles correlate most with the stimulation protocol. The resulting fluorescence signals *F*^{(i)} were processed by removing a moving baseline to account for baseline drifting and fast oscillatory noise [74]. See Supplementary Information.

### Model for neural activity.

Here, we describe how our framework is modified to mimic neural activity. Each readout unit, *u*, is interpreted as a population of *N* neurons, i.e., a region dedicated to the sensing of a specific input. Storage can be implemented, for example, as an inhibitory neural population, in line with recent observations in [15]. When a readout population is activated at time *t*, each of its *N* neurons fires with a probability *p*. We set *N* = 20 and *p* = 0.5. *N* has been set so as to have the same number of observed neurons in data and simulations, while *p* only controls the dispersion of the points in Fig. 5c, thus not altering the main message. The dynamics of each readout unit follows our dynamical model. Due to habituation, some of the readout units activated by the first stimulus will not be activated by subsequent stimuli. Although the evoked neural response cannot be fully captured by this extremely simple model, its archetypal ingredients (dissipation, storage, and feedback) are informative enough to reproduce the low-dimensional habituation dynamics found in experimental data.
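A minimal sketch of this readout-to-spikes mapping, with a hypothetical decaying activation probability standing in for the model's habituation dynamics, is:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p = 20, 0.5            # neurons per readout unit, firing probability
n_units, n_stimuli = 120, 10

# schematic habituation: the fraction of readout units activated by
# each stimulus decays over repetitions (stand-in for the model dynamics)
activation_prob = 0.9 * np.exp(-0.3 * np.arange(n_stimuli)) + 0.1

spikes = np.zeros((n_stimuli, n_units))
for t, q in enumerate(activation_prob):
    active = rng.random(n_units) < q            # which units respond
    # each neuron of an active unit fires independently with prob. p
    spikes[t, active] = rng.binomial(N, p, size=active.sum())

# total evoked activity per stimulus: decreases across repetitions
mean_response = spikes.sum(axis=1)
```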

# S1. Detailed Solution of the Master Equation

Consider the transition rates introduced in the main text:

We set a reflective boundary for the storage at *s* = *N _{S}*, corresponding to the maximum number of storage molecules in the system. Moreover, for the sake of simplicity, we take . Retracing the steps of the Methods, the Master Equation governing the evolution of the propagator of all variables, *P*(*u, r, s, h, t*|*u*_{0}, *r*_{0}, *s*_{0}, *h*_{0}, *t*_{0}), is:

We solve this equation employing a timescale separation, i.e., *τ _{U}* ≪ *τ _{R}* ≪ *τ _{S}* ~ *τ _{H}*, where for *X* = *U, R, S*, and *τ _{H}* is the typical timescale of the signal dynamics. Motivated by several biological examples, we assumed that the readout population undergoes the fastest dynamics, while the storage and signal evolutions are the slowest ones. Defining *ε* = *τ _{U}*/*τ _{R}* and *δ* = *τ _{R}*/*τ _{H}*, and setting *τ _{S}*/*τ _{H}* = 1 without loss of generality, we have:

_{H}We propose a solution in the following form, *P* = *P*^{(0)} + *εP*^{(1)}. By inserting this expression in the equation above, and solving order by order in *ε*, at order *ε*^{−1}, we have that:

where *p*^{st} solves the master equation for the readout evolution at a fixed *r*:

with *α(r)* = *e ^{−β(V −cr)}*. Hence,

At order *ε*^{0}, we find the equation for Π, also reported in the Methods:

To solve this equation, we propose a solution of the form Π = Π^{(0)} + *δ*Π^{(1)}. Hence, again, at order *δ*^{−1}, we have that , where satisfies the steady-state equation for the fastest degree of freedom, with all the others fixed. In this case, it is just the solution of the rate equation for the receptor:

where , and the same holds for the reverse reaction. At order *δ*^{0}, we have an equation for *F*:

As already explained in the Methods, due to the feedback, this equation cannot be solved explicitly. Indeed, the operator governing the evolution of F is:

with and using the linearity of *Ŵ _{S}*(*u*). In order to solve this equation, we shall assume that *ū*(*s, h*) = *u*_{0}, bearing in mind that this approximation holds if *t* is small enough, i.e., *t* = *t*_{0} + Δ*t* with Δ*t* ≪ *τ _{U}*. Therefore, for a small interval, we have:

Overall, we end up with the following joint probability of the model at time *t*_{0} + Δ*t*:

where ∫ *dh*_{0} *P _{H}*(*h, t*_{0} + Δ*t*|*h*_{0}, *t*_{0}) *p _{U,S,H}*(*u*_{0}, *s*_{0}, *h*_{0}, *t*_{0}) = *p _{U,S}*(*u*_{0}, *s*_{0}, *t*_{0}) *p _{H}*(*h, t*_{0} + Δ*t*), since *H* at time *t*_{0} + Δ*t* is independent of *S* and *U*. When propagating the evolution through intervals of duration Δ*t*, we also assume that *H* evolves independently, since it is an external variable, while affecting the evolution of the other degrees of freedom. This structure is reflected in the equation above. For simplicity, we prescribe *p _{H}*(*h, t*) to be an exponential distribution, *p _{H}*(*h, t*) = *λ*(*t*)*e*^{−λ(t)h}, and solve Eq. (S12) iteratively from *t*_{0} to a given *T* in steps of duration Δ*t*, as indicated above. This complex iterative solution arises from the timescale separation because of the cyclic feedback structure: {*S, H*} → *R* → *U* → *S*. This solution corresponds explicitly to

where *P*(*s*′, *t* → *s, t* + Δ*t*) is the propagator of the storage at fixed readout. This is the Chapman-Kolmogorov equation in the timescale separation approximation. Notice that this solution requires the knowledge of *p _{U,S}* at the previous time-step and it has to be solved iteratively. Both *p _{U}* and *p _{S}* can be obtained by an immediate marginalization.

As detailed in the Methods, the propagator *P*(*s*_{0}, *t*_{0} → *s, t*), when restricted to small time intervals, can be obtained by solving the birth-and-death process for storage molecules at fixed readout, limiting the state space to the *n* nearest neighbors (we checked that our results are robust upon increasing *n* for the selected simulation time step).

# S2. Information-Theoretic Quantities

By direct marginalization of Eq. (S13), we obtain the evolution of *p _{U}*(*u, t*) and *p _{S}*(*s, t*) for a given *p _{H}*(*h, t*). Hence, we can compute the mutual information as follows:

where ℍ[*p _{X}*] is the Shannon entropy of *X*, and Δ𝕊_{U} is the reduction in the entropy of *U* due to repeated measurements (see main text). Notice that, in order to determine such a quantity, we need the conditional probability *p _{U|H}*(*u, t*). This distribution represents the probability that, at a given time, the system jumps to a value *u* in the presence of a given field *h*. In order to compute it, we can write

by definition. The only dependence on *h* enters through the *e*^{βh} dependence in the rates.

Analogously, all the other mutual informations can be obtained. Notably, as we showed in the Methods, *I _{S,H}* = 0 as a consequence of the timescale separation. Crucially, the presence of the feedback is still fundamental to effectively process information about the signal. This effect can be quantified through Δ*I _{f}* = *I _{(U,S),H}* − *I _{U,H}* > 0, which we name the feedback information, as it captures how much the knowledge of *S* and *U* together helps encoding information about the signal with respect to *U* alone. In terms of system entropy, we equivalently have:

which highlights how much the effect of *S* (feedback) reduces the entropy of the system due to repeated measurements.

Practically speaking, in order to evaluate *I _{(U,S),H}*, we exploit the following equality:

for which we need *p _{U,S|H}*, which can be found by noting that

from which we immediately see that

which we can easily compute at any given time *t*.

# S3. Mean-Field Relation Between Average Readout and Storage

Fixing all model parameters, the average values of the storage, 〈*S*〉, and the readout, 〈*U*〉, are numerically determined by solving the system iteratively, as shown above. However, an analytical relation between these two quantities can be found starting from the definition of 〈*U*〉:

Then, inserting the expression for the stationary probability that we know analytically:

where has a complicated expression in terms of the model parameters, involving the hypergeometric function _{2}*F*_{1}, and depends only on the concentration of *S*, *ρ _{S}* = *s*/*N _{S}* (the explicit derivation of this formula is not shown here). Then, we have:

Since we do not have an analytical expression for , we employ the mean-field approximation, reducing all the correlation functions to products of averages:

where . This clearly shows that, given a set of model parameters, 〈*U*〉 and the average concentration of storage are related. In particular, introducing the change of parameters presented in the Methods, we have the following collapse:

where 〈*U*〉_{A} and 〈*U*〉_{P} are respectively the averages of *U* fixing *r* = 1 (active receptor) and *r* = 0 (passive receptor). It is also possible to perform an expansion of *f*_{0}, which numerically turns out to be very precise:

where . Since all these relations depend only on the average concentration of the storage, it is natural to ask what happens when . Fixing all the remaining parameters, both 〈*U*〉 and will change, still satisfying the mutual relation presented above. Let us consider, for , the stationary solution that has the same concentration of *S*, i.e., . As a consequence of the scaling relation, . Considering 〈*U*〉_{P} ≈ 0 in both settings, we can ask what is the factor *γ* such that . Since *u* enters only linearly in the dynamics of the storage, and the mutual relation depends only on the concentration of *S*, we guess that *γ* = 1/*n*, as numerically observed. As stated in the main text, we can finally conclude that the storage concentration is the most relevant quantity in our model to determine the effect of the feedback and characterize the dynamical evolution. This observation makes our conclusions more robust, as they do not depend on the specific choice for the storage reservoir, since there always exists a scaling relation connecting 〈*U*〉 and . As such, changing the values of the model parameters we fixed will only affect the number of active molecules without modifying the main results presented in this work.

# S4. The Necessity of Storage

Here, we discuss in detail the necessity of a slow storage implementing the negative feedback in order to have habituation. We first investigate the possibility that the negative feedback, necessary for any kind of habituative behavior, is implemented directly through the readout population undergoing a fast dynamics. We analytically show that this limit leads to the absence of habituation, hinting at the necessity of a slow dynamical feedback in the system (Sec. S4 1). Then, we study the scenario in which *U* applies the feedback, bypassing the storage *S*, but acts as a slow variable. Solving the Master Equation through our iterative numerical method, we show that, also in this case, habituation disappears (Sec. S4 2). These results suggest not only that the feedback must be applied by a slow variable, but also that such a slow variable must have a role different from the readout population, in line with recent observations in neural systems [15]. The model proposed in the main text is indeed minimal in this respect, while remaining compatible with biological examples.

## 1. Dynamical feedback cannot be implemented by a fast readout

If the storage is directly implemented by the readout population, the transition rates get modified as follows:

At this level, *θ* is a free parameter playing the same role as *κ*/*N _{S}* in the complete model with the storage. We start again from the master equation for the propagator *P*(*u, r, h, t*|*u*_{0}, *r*_{0}, *h*_{0}, *t*_{0}):

where *τ _{U}* ≪ *τ _{R}* ≪ *τ _{H}*, since we are assuming, as before, that *U* is the fastest variable. Here, *ε* = *τ _{U}*/*τ _{R}* and *δ* = *τ _{R}*/*τ _{H}*. Notice that now *Ŵ _{R}* depends also on *u*. We can solve the system again by resorting to a timescale separation and rescaling the time by the slowest timescale, *τ _{H}*. We have:

We now expand the propagator at first order in *ε*, *P* = *P*^{(0)} + *εP*^{(1)}. Then, the order *ε*^{−1} of the master equation gives, as above, . At order *ε*^{0}, Eq. (S28) leads to

To solve this, we expand the propagator as Π = Π^{(0)} + *δ*Π^{(1)} and, at order *δ*^{−1}, we obtain:

This is a 2 × 2 effective matrix acting on Π^{(0)}, where the only rate affected by *u* is , which multiplies the active state, i.e., *r* = 1. This equation can be solved analytically, and the solution of Eq. (S30) is:

with log(Θ) = *e*^{−β(V−c)}(*e*^{βθ} − 1). Clearly, does not depend on *u*, since we summed over the fast variable. Continuing the computation, at order *δ*^{0}, we obtain:

The full propagator then reads:

From this expression, we can find the joint probability distribution, following the same steps as before:

As expected, since *U* relaxes instantaneously, the feedback is instantaneous as well. As a consequence, the time-dependent behavior of the system is solely driven by the external field *H*, with a fixed amplitude that takes into account the effect of the feedback only on average. This means that there will be no dynamical reduction of activity and, as such, no habituation in this scenario. This was somewhat expected, since all variables are faster than the external field and, as a consequence, the feedback cannot be implemented over time. The first conclusion is that the variable implementing the feedback has to evolve together with *H*.

## 2. Effective dynamical feedback requires an additional population

We now assume that the feedback is, again, implemented by *U*, but that it acts as a slow variable. Formally, we take *τ _{R}* ≪ *τ _{U}* ≈ *τ _{H}*. Rescaling the time by the slowest timescale, *τ _{H}* (it works the same for *τ _{U}*), we have:

with *ε* = *τ _{R}*/*τ _{H}*. We now expand the propagator at first order in *ε*, *P* = *P*^{(0)} + *εP*^{(1)}. Then, the order *ε*^{−1} of the master equation is simply *Ŵ _{R}* *P*^{(0)} = 0, whose solution gives . At order *ε*^{0}:

The only dependence on *r* in *Ŵ _{U}*(*r*) is through the production rate of *U*. Indeed, the effective transition matrix governing the birth-and-death process of readout molecules is characterized by:

This rate depends only on *h*, but *h* evolves in time. Therefore, we should scan all the (infinitely many) values that *h* takes and build an infinite-dimensional transition matrix. In order to solve the system, imagine that we are looking at the interval [*t*_{0}, *t*_{0} + Δ*t*]. Then, we can employ the following approximation if Δ*t* ≪ *τ _{H}*:

Using this simplification, we need to solve the following equation:

The explicit solution in the interval *t* ∈ [*t*_{0}, *t*_{0} + Δ*t*] can be found to be:

with a propagator. The full propagator at time *t*_{0} + Δ*t* is then:

Integrating over the initial conditions, we finally obtain:

To numerically integrate this equation, we make two approximations. The first one is that we solve the dynamics in all the intervals in which the field does not evolve, where *P _{H}* is a delta function peaked at the initial condition. For all time points at which the field changes, this amounts to considering the field at the previous instant, a good approximation as long as Δ*t* ≪ *τ _{H}*, particularly when the time dependence of the field is a square wave, as in our case.

The second approximation concerns the computation of the propagator of *P _{U}*. As explained in the Methods of the main text, we restrict our computation to the transitions between *n* nearest neighbors in the *U* space. In the case of transitions only among next-nearest neighbors, we have the following dynamics:

with the transition matrix:

where the diagonal is fixed to satisfy the conservation of normalization, as usual. The solution is:

where *w _{ν}* and *λ _{ν}* are respectively the eigenvectors and eigenvalues of the transition matrix *W*^{nn}. The coefficients *a*^{(ν)} have to be evaluated according to the condition at time *t*_{0}:

where *δ*_{u, u_0} is the Kronecker delta. To evaluate the information content of this model, we also need:

In Figure S6 we show that, in this model, *U* does not display habituation. Rather, it increases upon repeated stimuli, acting as the storage in the main text. On the other hand, the probability of the receptor being active does habituate. This suggests that habituation can only occur in fast variables modulated by slow variables.

It is straightforward to understand intuitively why a direct feedback from *U*, with this population undergoing a slow dynamics, cannot lead to habituation. Indeed, at a fixed distribution of the external signal, the stationary solution of 〈*U*〉 already takes into account the effect of the negative feedback. Hence, if the system starts with a very low readout population (no signal), the dynamics induced by a switching signal can only bring 〈*U*〉 to its steady state, with intervals in which the population grows and intervals in which it decreases. Naively speaking, the dynamics of 〈*U*〉 becomes similar to that of the storage in the complete model, since it is actually playing the same role of storing information in this simplified context.

# S5. Robustness of Optimality

## 1. Effects of the external signal strength and thermal noise level

In the main text, for analytical ease, we take the environment to be an exponentially distributed signal,

where *λ* is its inverse characteristic scale. In particular, we describe the case in which no signal is present by setting *λ* to be large, so that the typical realizations of *H* would be too small to activate the receptors. On the other hand, when *λ* is small, the values of *h* appearing in the rates of the model are large enough to activate the receptor and thus allow the system to sense the signal.

In the dynamical case, we take *λ*(*t*) to be a square wave, so that 〈*H*〉 = 1/*λ* alternates between two values, 〈*H*〉_{min} and 〈*H*〉_{max}. We denote with *T*_{on} the duration of 〈*H*〉_{max}, and with *T*_{off} that of 〈*H*〉_{min}. In practice, this signal mimics an on-off dynamics, where the stochastic signal is present only when its average is large enough, 〈*H*〉_{max}. In the main text, we take 〈*H*〉_{min} = 0.1 and 〈*H*〉_{max} = 10, with *T*_{on} = *T*_{off} = 100Δ*t*.
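Such a protocol can be sketched directly from these values (Δ*t* set to 1 for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
T_on, T_off = 100, 100          # durations in units of dt
H_min, H_max = 0.1, 10.0        # <H> during pauses / stimuli

n_steps = 4 * (T_on + T_off)    # four on-off cycles
t = np.arange(n_steps)
# square-wave mean: <H>(t) alternates between H_max (signal on)
# and H_min (signal off)
mean_H = np.where((t % (T_on + T_off)) < T_on, H_max, H_min)
lam = 1.0 / mean_H              # p_H(h, t) = lam(t) exp(-lam(t) h)
# one stochastic realization of the exponentially distributed field
h = rng.exponential(scale=1.0 / lam)
```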

In Figure S7a, we study the behavior of the model in the presence of a static exponential signal with average 〈*H*〉. We focus on the case of low *σ*, so that the production of storage is favored. As 〈*H*〉 decreases, *I _{U,H}* decreases as well. Hence, as expected, the information acquired through sensing depends on the strength of the external signal, which coincides with the energy input driving receptor activation. However, the system does not display an emergent information dynamics, memory, and habituation for all parameters. In Figure S7b, we see that, when the temperature is low but *σ* is high, the system does not show habituation and Δ*I _{U,H}* = 0. On the other hand, when thermal noise dominates (Figure S7c), even when the external signal is small, the system produces a large readout population due to random thermal activation. As a consequence, these random activations hinder the signal-driven ones; thus, the system does not effectively sense the external signal even when present, and *I _{U,H}* is always small. It is important to recall here that, as we see in Figure 3b of the main text, at fixed *σ* and as a function of *β*, *I _{U,H}* is not monotonic. This is due to the fact that low temperatures typically favor sensing and habituation, but they also intrinsically suppress readout production. Thus, at high *β*, *σ* needs to be small to effectively store information, since thermal noise is negligible. Vice versa, a small *σ* is detrimental at high temperatures, since the system produces storage as a consequence of thermal noise. This complex interplay is captured by the Pareto optimization, which gives us an effective relation between *β* and *σ* to maximize storage while minimizing dissipation.

## 2. 3-dimensional Pareto-like surface

In line with the front derived above, it is possible to maximize both *I _{U,H}* and Δ*I _{f}* while minimizing *δQ _{R}*, to study the region of parameters where habituation spontaneously emerges. In this case, the idea is to maximize all the features associated with the capability to process information, while maintaining a minimal dissipation. Since, as expected, *I _{U,H}* and Δ*I _{f}* are not in trade-off (see Figure S8), the resulting optimal area will be named a Pareto-like surface. In Figure S8a, we represent it in the three-dimensional feature space. Figures S8b-d represent only the projection of this area (in gray) onto the parameter space, (*β, σ*). In what follows, we study the robustness of the optimality of our model by showing both the 2-dimensional Pareto front (see main text) and the 3-dimensional Pareto-like surface in the (*β, σ*) space. In fact, it is immediate to see from what we show below that the two only slightly differ, and considering one or the other does not qualitatively change the results and the conclusions of our work.

## 3. Static and dynamical optimality

In Figure S9, we plot the average readout population, 〈*U*〉, the average storage population, 〈*S*〉, the mutual information between them, *I _{U,S}*, and the entropy production of the internal processes, , as a function of *β* and *σ* in the presence of a static field. The optimal values of *β* and *σ* obtained by minimizing the Pareto-like functional

are such that both 〈*U*〉 and 〈*S*〉 attain intermediate values. We also show the 2-dimensional Pareto front derived in the main text for comparison (dashed black line). Thus, a large/small production of readout/storage is detrimental to sensing. Similar considerations can be drawn for the dissipation of internal processes, . Interestingly, the dependence between readout and storage, quantified by the mutual information *I _{U,S}*, is not maximized at optimality. This suggests that an excessively strong negative feedback impedes information while promoting dissipation in the receptor, *δQ _{R}*, thus being suboptimal.

In Figure S10, we study the dynamical behavior of the model under a repeated external signal, as in Figure 3f-g-h in the main text. In particular, given an observable *O*, we define its change under a repeated signal, Δ*O*, as the difference between the maximal response to the signal at large times and the maximal response to the first signal (Figure S10a). In Figure S10b, we see in particular that Δ〈*U*〉 is maximal in the region where the change in the feedback information, ΔΔ*I _{f}*, is negative, suggesting that a strong habituation fueled by a large storage concentration is ultimately detrimental to information processing. Furthermore, in this region the entropy produced by internal processes, , is maximal.

## 4. Interplay between information storage and signal duration

In the main text and so far, we have always considered the case *T*_{on} = *T*_{off}. We now study the effect of the signal duration and the pause length on sensing (Figure S11). If the system only receives short signals between long pauses, the slow storage build-up does not reach a high concentration. As a consequence, the negative feedback on the receptor is less effective and habituation is suppressed (Figure S11a). Therefore, the peak of Δ*I _{U,H}* in the (*β, σ*) plane takes place below the optimal surface, as *σ* needs to be smaller than in the static case to boost storage production during the brief periods in which the signal is present. On the other hand, in Figure S11b we consider the case of a long signal with short pauses. In this scenario, the slow dynamical evolution of the storage can reach large concentrations at larger values of *σ*, thus moving the optimal dynamical region slightly above the Pareto-like surface. Taking the 2-dimensional Pareto front as a reference does not qualitatively change our observations. The case of a short signal is comparable to the durations of the looming stimulations in the experimental setting (see next Section), which can be used to tune the parameters of the model to the peak of information gain.

# S6. Experimental Setup

Acquisitions of the zebrafish brain activity were carried out in Elavl3:H2BGCaMP6s larvae at 5 days post fertilization, raised at 28 °C on a 12 h light/12 h dark cycle, according to the approval by the Ethical Committee of the University of Padua (61/2020 dal Maschio). Larvae were embedded in 2 percent agarose gel, and their brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Briefly, the imaging path is based on an 8-kHz galvo-resonant commercial 2P design (Bergamo I Series, Thorlabs, Newton, NJ, United States) coupled to a Ti:Sapphire source (Chameleon Ultra II, Coherent) tuned to 920 nm for imaging GCaMP6 signals and modulated by a Pockels cell (Conoptics). The fluorescence collection path includes a 705 nm long-pass main dichroic and a 495 nm long-pass dichroic mirror transmitting the fluorescence light toward a GaAsP PMT detector (H7422PA-40, Hamamatsu) equipped with an EM525/50 emission filter. Data were acquired at 30 frames per second, using a water-dipping Nikon CFI75 LWD 16X W objective, covering an effective field of view of about 450 × 900 μm with a resolution of 512 × 1024 pixels. The volumetric module is based on an electrically tunable lens (Optotune) moving continuously according to a saw-tooth waveform synchronized with the frame acquisition trigger. An entire volume of about 180 − 200 μm in thickness, encompassing 30 planes separated by about 7 μm, is acquired at a rate of 1 volume per second, sufficient to track the relatively slow dynamics associated with the fluorescence-based activity reporter GCaMP6s.

As for the visual stimulation, looming stimuli were generated using Stytra and presented monocularly on a 50 × 50 mm screen using a DLP4500 projector (Texas Instruments). The dark looming dot was presented 10 times at 150 s intervals, centered on the fish eye, with an l/v parameter of 8.3 s, reaching at the end of the stimulation a visual angle of 79.4°, corresponding to an angular expansion rate of 9.5°/s. The acquired temporal series were first processed with an automatic pipeline, including motion artifact correction, temporal filtering with a 3 s rectangular window, and automatic segmentation using Suite2P. The resulting dataset was then manually curated to resolve segmentation errors and to integrate cells not detected automatically. We fit the activity profiles of about 52,000 cells with a linear regression model (scikit-learn Python library) using a set of basis functions representing the expected responses to each of the stimulation events, obtained by convolving square waveforms, nonzero only during the presentation of the corresponding visual stimulus, with an exponentially decaying kernel whose time constant matches the GCaMP signal lifetime. The resulting coefficients were divided by the mean squared error of the fit to obtain a set of scores. Cells whose scores fell within the top 5% of the distribution were considered for the dimensionality reduction analysis.
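The scoring procedure above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: all names (`build_regressors`, `score_cell`) and parameter values are hypothetical, and the exact kernel normalization and window alignment are assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def build_regressors(t, stim_onsets, stim_dur, tau_gcamp):
    """One regressor per stimulus: a square wave, nonzero only during
    the presentation, convolved with an exponentially decaying GCaMP
    kernel (illustrative parameter names and values)."""
    dt = t[1] - t[0]
    kernel = np.exp(-np.arange(0.0, 5.0 * tau_gcamp, dt) / tau_gcamp)
    X = []
    for onset in stim_onsets:
        square = ((t >= onset) & (t < onset + stim_dur)).astype(float)
        X.append(np.convolve(square, kernel)[: len(t)])
    return np.stack(X, axis=1)

def score_cell(activity, X):
    """Fit the activity with the regressors; the score is the mean
    regression coefficient divided by the mean squared error."""
    reg = LinearRegression().fit(X, activity)
    mse = np.mean((reg.predict(X) - activity) ** 2)
    return np.mean(reg.coef_) / mse
```

A cell responding to the stimuli yields large coefficients and a small residual, hence a high score; a silent cell scores near zero.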

The resulting fluorescence signals *F*^{(i)}, for *i* = 1, … , *N*_{cells}, were processed by removing a moving baseline to account for baseline drift and fast oscillatory noise [74]. Briefly, for each time point *t*, we selected a window [*t* − *τ*_{2}, *t*] and evaluated the minimum smoothed fluorescence,

$$F_0^{(i)}(t) = \min_{x \in [t - \tau_2,\, t]} \left( \frac{1}{\tau_1} \int_{x - \tau_1}^{x} F^{(i)}(s)\, \mathrm{d}s \right).$$

Then, the relative change in fluorescence signal,

$$R^{(i)}(t) = \frac{F^{(i)}(t) - F_0^{(i)}(t)}{F_0^{(i)}(t)},$$

is smoothed with an exponential moving average. Thus, the neural activity profile for the *i*-th cell that we use in the main text is given by

$$\left(\frac{\Delta F}{F}\right)^{(i)}(t) = \frac{\int_0^{\infty} R^{(i)}(t - \tau)\, e^{-\tau/\tau_0}\, \mathrm{d}\tau}{\int_0^{\infty} e^{-\tau/\tau_0}\, \mathrm{d}\tau}.$$

In accordance with the previous literature [74], we set *τ*_{0} = 0.2 s, *τ*_{1} = 0.75 s, and *τ*_{2} = 3 s. The qualitative nature of the low-dimensional activity in the PCA space is not altered by other sensible choices of these parameters.
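The baseline-removal pipeline can be sketched discretely as follows. This is an illustrative reimplementation under stated assumptions (trailing boxcar smoothing over *τ*_{1}, a causal exponential moving average, and the function name `dff`), not the authors' processing code.

```python
import numpy as np

def dff(F, dt, tau0=0.2, tau1=0.75, tau2=3.0):
    """Moving-baseline ΔF/F: smooth F over tau1, take the minimum
    over the preceding tau2 window as the baseline F0, then smooth
    the relative change with an exponential moving average (tau0)."""
    n1 = max(1, int(round(tau1 / dt)))
    n2 = max(1, int(round(tau2 / dt)))
    # boxcar smoothing over a tau1-wide window
    F_smooth = np.convolve(F, np.ones(n1) / n1, mode="same")
    # running minimum over the preceding tau2 window
    F0 = np.array([F_smooth[max(0, i - n2): i + 1].min()
                   for i in range(len(F))])
    R = (F - F0) / F0
    # causal exponential moving average with time constant tau0
    alpha = dt / tau0
    out = np.empty_like(R)
    acc = R[0]
    for i, r in enumerate(R):
        acc += alpha * (r - acc)
        out[i] = acc
    return out
```

On a flat trace the output stays near zero, while a transient fluorescence increase produces a positive ΔF/F deflection, as expected.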

# References

- [1] Information processing in living systems. *Annual Review of Condensed Matter Physics* **7**
- [2] Signaling networks: information flow, computation, and decision making. *Cold Spring Harbor Perspectives in Biology* **7**
- [3] Broken detailed balance and non-equilibrium dynamics in living systems: a review. *Reports on Progress in Physics* **81**
- [4] Information theory and adaptation. *Quantitative Biology: From Molecular to Cellular Systems* **4**
- [5] Biologically inspired information theory: adaptation through construction of external reality models by living systems. *Progress in Biophysics and Molecular Biology* **119**
- [6] Progress in and promise of bacterial quorum sensing research. *Nature* **551**
- [7] Strategies for cellular decision-making. *Molecular Systems Biology* **5**
- [8] Amplification and adaptation in regulatory and sensory systems. *Science* **217**
- [9] Modeling the chemotactic response of *Escherichia coli* to time-varying stimuli. *Proceedings of the National Academy of Sciences* **105**
- [10] The nonequilibrium mechanism for ultrasensitivity in a biological switch: sensing by Maxwell's demons. *Proceedings of the National Academy of Sciences* **105**
- [11] *Escherichia coli* chemotaxis is information limited. *Nature Physics* **17**
- [12] Information transduction capacity of noisy biochemical signaling networks. *Science* **334**
- [13] Tumor necrosis factor signaling. *Cell Death & Differentiation* **10**
- [14] Brain-wide visual habituation networks in wild type and *fmr1* zebrafish. *Nature Communications* **13**
- [15] Neural circuits underlying habituation of visually evoked escape behaviors in larval zebrafish. *eLife* **12**
- [16] The energy-speed-accuracy trade-off in sensory adaptation. *Nature Physics* **8**
- [17] Calcium signalling and regulation in olfactory neurons. *Current Opinion in Neurobiology* **9**
- [18] Visual adaptation: physiology, mechanisms, and functional benefits. *Journal of Neurophysiology* **97**
- [19] Adaptation to stimulus contrast and correlations during natural visual stimulation. *Neuron* **55**
- [20] Adaptation maintains population homeostasis in primary visual cortex. *Nature Neuroscience* **16**
- [21] Weak pairwise correlations imply strongly correlated network states in a neural population. *Nature* **440**
- [22] Searching for collective behavior in a large network of sensory neurons. *PLoS Computational Biology* **10**
- [23] Sparse and compositionally robust inference of microbial ecological networks. *PLoS Computational Biology* **11**
- [24] Collective states, multistability and transitional behavior in schooling fish. *PLoS Computational Biology* **9**
- [25] Mutual information disentangles interactions from changing environments. *Physical Review Letters* **127**
- [26] Mutual information in changing environments: non-linear interactions, out-of-equilibrium systems, and continuously-varying diffusivities. *Physical Review E* **106**
- [27] Advantages and limitations of current network inference methods. *Nature Reviews Microbiology* **8**
- [28] Information-driven transitions in projections of underdamped dynamics. *Physical Review E* **106**
- [29] Molecular and functional aspects of bacterial chemotaxis. *Journal of Statistical Physics* **144**
- [30] Design principles of a bacterial signalling network. *Nature* **438**
- [31] Effect of feedback on the fidelity of information transmission of time-varying signals. *Physical Review E* **82**
- [32] Accurate information transmission through dynamic biochemical signaling networks. *Science* **346**
- [33] Robustness in simple biochemical networks. *Nature* **387**
- [34] Thermodynamics of information. *Nature Physics* **11**
- [35] ABC transporters are billion-year-old Maxwell demons. *Communications Physics* **6**
- [36] A chemical reaction network implementation of a Maxwell demon. *The Journal of Chemical Physics*
- [37] The thermodynamics of computation — a review. *International Journal of Theoretical Physics* **21**
- [38] Minimal energy cost for thermodynamic information processing: measurement and information erasure. *Physical Review Letters* **102**
- [39] Nonequilibrium sensing and its analogy to kinetic proofreading. *New Journal of Physics* **17**
- [40] Chemical sensing by nonequilibrium cooperative receptors. *Physical Review Letters* **110**
- [41] Fundamental limits on the suppression of molecular fluctuations. *Nature* **467**
- [42] CaMKII regulation in information processing and storage. *Trends in Neurosciences* **35**
- [43] In search of the memory molecule. *Nature* **535**
- [44] The molecular basis of CaMKII function in synaptic and behavioural memory. *Nature Reviews Neuroscience* **3**
- [45] Thermodynamic costs of information processing in sensory adaptation. *PLoS Computational Biology* **10**
- [46] Efficiency of cellular information processing. *New Journal of Physics* **16**
- [47] Thermodynamics of computational copying in biochemical systems. *Physical Review X* **7**
- [48] Information thermodynamics for deterministic chemical reaction networks. *The Journal of Chemical Physics*
- [49] Oscillatory stimuli differentiate adapting circuit topologies. *Nature Methods* **14**
- [50] Depolarization block in olfactory sensory neurons expands the dimensionality of odor encoding. *Science Advances* **8**
- [51] Stress-induced dinoflagellate bioluminescence at the single cell level. *Physical Review Letters* **125**
- [52] Neuronal adaptation, novelty detection and regularity encoding in audition. *Frontiers in Systems Neuroscience* **8**. https://doi.org/10.3389/fnsys.2014.00111
- [53] Adaptation to sensory input tunes visual cortex to criticality. *Nature Physics* **11**
- [54] Inhibition drives habituation of a larval zebrafish visual response. *bioRxiv*
- [55] Neural adaptation. *Current Biology* **31**
- [56] Encoding of temporal probabilities in the human brain. *Journal of Neuroscience* **30**
- [57] Learning to make external sensory stimulus predictions using internal correlations in populations of neurons. *Proceedings of the National Academy of Sciences* **115**
- [58] Predictive information in a sensory population. *Proceedings of the National Academy of Sciences* **112**
- [59] Defining network topologies that can achieve biochemical adaptation. *Cell* **138**
- [60] Hsp70 chaperones are non-equilibrium machines that achieve ultra-affinity by energy consumption. *eLife* **3**
- [61] Kinetic asymmetry allows macromolecular catalysts to drive an information ratchet. *Nature Communications* **10**
- [62] Kinetic uncertainty relations for the control of stochastic reaction networks. *Physical Review Letters* **123**
- [63] Constraints on fluctuations in sparsely characterized biological systems. *Physical Review Letters* **116**
- [64] Information propagation in multilayer systems with higher-order interactions across timescales. *Physical Review X* **14**
- [65] Energy consumption and cooperation for optimal sensing. *Nature Communications* **11**
- [66] Phase transitions in Pareto optimal complex networks. *Physical Review E* **92**
- [67] Whole brain functional recordings at cellular resolution in zebrafish larvae with 3D scanning multiphoton microscopy. *Scientific Reports* **11**
- [68] Synaptic plasticity: taming the beast. *Nature Neuroscience* **3**
- [69] Synaptic plasticity and memory: an evaluation of the hypothesis. *Annual Review of Neuroscience* **23**
- [70] Information-based fitness and the emergence of criticality in living systems. *Proceedings of the National Academy of Sciences* **111**
- [71] Coarse-grained entropy production with multiple reservoirs: unraveling the role of time scales and detailed balance in biology-inspired systems. *Physical Review Research* **2**
- [72] Multiple-scale stochastic processes: decimation, averaging and beyond. *Physics Reports* **670**
- [73] Network theory of microscopic and macroscopic behavior of master equation systems. *Reviews of Modern Physics* **48**
- [74] In vivo two-photon imaging of sensory-evoked dendritic calcium signals in cortical neurons. *Nature Protocols* **6**

# Article and author information



## Copyright

© 2024, Nicoletti et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
