Introduction

Sensing mechanisms in biological systems span a wide range of temporal and spatial scales, from the cellular to the multicellular level, forming the basis for decision-making and the optimization of limited resources [1–5]. Emergent macroscopic phenomena such as adaptation and habituation reflect the ability of living systems to effectively process the information they collect from their noisy environment [6–8]. Prominent examples include the modulation of flagellar motion operated by bacteria according to changes in the local nutrient concentration [9–11], the regulation of immune responses through feedback mechanisms [12, 13], the progressive reduction of neural activity in response to repeated looming stimulation [14, 15], and the maintenance of high sensitivity in varying environments for olfactory or visual sensing in mammalian neurons [16–20].

In the last decade, advances in experimental techniques have fostered the quest for the core biochemical mechanisms governing information processing. Simultaneous recordings of hundreds of biological signals have made it possible to infer distinctive features directly from data [21–24]. However, many of these approaches fall short of describing the connection between observed behaviors and the underlying microscopic drivers [25–28]. To fill this gap, several works have focused on the architecture of specific signaling networks, from tumor necrosis factor [12, 13] to chemotaxis [9, 29], highlighting the essential structural ingredients for their efficient functioning. An observation shared by most of these studies is the key role of a negative feedback mechanism in inducing emergent adaptive responses [30–33]. Moreover, any information-processing system, biological or not, must obey information-thermodynamic laws that prescribe the necessity of a storage mechanism [34]. This is an unavoidable feature highlighted in numerous chemical signaling networks [9, 30] and biochemical realizations of Maxwell demons [35, 36]. As the storage of information during processing generally requires energy [37, 38], sensing mechanisms have to take place out of equilibrium [3, 39–41]. Recently, the discovery of memory molecules [42–44] hinted at the possibility that storage mechanisms might be instantiated directly at the molecular scale. Overall, negative feedback, storage, and out-of-equilibrium conditions seem to be necessary requirements for a system to process environmental information and act accordingly. To quantify the performance of a biological information-processing system, theoretical developments have made substantial progress in highlighting thermodynamic limitations and advantages [16, 45, 46], taking a step toward linking information and dissipation from a molecular perspective [35, 47, 48].

Here, we consider an archetypal yet minimal model for sensing that is inspired by biological networks [16, 49, 50] and encapsulates all these key ingredients, i.e., negative feedback, storage, and energy dissipation, and we study its response to repeated stimuli. Indeed, in dynamic environments, it is common for a biological system to keep encountering the same stimulus. Under these conditions, a progressive decay in the amplitude of the response is often observed, both at the sensory and molecular levels. In general terms, such adaptive behavior is usually named habituation and is a common phenomenon recorded in various systems, from biochemical networks [49, 51, 52] to populations of neurons [14, 15, 53, 54]. In particular, habituation characterizes many neuronal circuits along the sensory-motor processing pathways of most living organisms, both invertebrates and vertebrates [53, 54], where inhibitory feedback mechanisms are believed to modulate the stimulus weight [15, 55]. Notably, the first systematic characterization of habituation dates back to 1966 [56], when its different hallmarks were catalogued in vertebrate animals. Despite its widespread occurrence across remarkably different scales, the connection between habituation in the animal kingdom and in brainless molecular systems has only recently attracted considerable attention. A limited number of dynamical models have been proposed to explore the similarities and differences between the manifestations of these two, fundamentally distinct, phenomena [57, 58]. However, dynamical characterizations of habituation still lack a clear identification of its functional role in regulating information flow, optimal processing, and sensitivity calibration [59], and in controlling behavior and prediction during complex tasks [60–62].

Fig. 1: Sketch of the model architecture and biological examples at different scales.

(a) A receptor R transitions between an active (A) and a passive (P) state along two pathways, one used for sensing (red) and affected by the environment h, and the other (blue) modified by the energy of storage molecules, σs, tuned by the inhibition strength κ and the storage capacity N_S. Here, β = (k_B T)^(-1) encodes the inverse temperature. An active receptor increases the response of a readout population U (orange), which in turn stimulates the production of storage units S (green) that provide negative feedback to the receptor. (b) In the chemical network underlying the chemotactic response, we can identify a similar architecture. The input ligand binds to membrane receptors, decreasing kinase activity and producing phosphate groups whose concentration regulates the receptor methylation level. (c) Similarly, in olfactory sensing, odorant binding induces the activation of adenylyl cyclase (AC). AC stimulates a calcium flux, eventually producing Ca²⁺/calmodulin-dependent protein kinase II (CaMKII), which phosphorylates and deactivates AC. (d) In neural responses, multiple mechanisms take place at different scales. In zebrafish larvae, visual stimulation is projected along the visual stream from the retina to the cortex, a coarse-grained realization of the R-U dynamics. Neural habituation emerges upon repeated stimulation, as measured by calcium fluorescence signals (dF/F0) and by the corresponding 2-dimensional PCA of the activity profiles.

In this work, we explicitly compute the information shared between the readout molecules and the external stimulus over time. We find that the information gain peaks at intermediate levels of habituation, uncovering that optimal processing performance is not necessarily tied to maximal activity reduction. This region of optimal information gain can be retrieved by simultaneously minimizing dissipation and maximizing information in the presence of a prolonged stimulation, hinting at an a priori optimality condition for the operations of biological systems. Our results unveil the role of habituation in enhancing processing abilities and open the way to understanding the emergence of basic learning mechanisms in simple molecular scenarios.

Results

Archetypal model for sensing in biological systems

Several minimal models for adaptation are composed of three building blocks [9, 29, 49–51]: one responsible for buffering the input signal; one representing the output; and one usually reminiscent of an internal memory. Here, we start from an analogous archetypal architecture. The three building blocks (or units) are represented by a receptor (R), a readout population (U), and a storage population (S).

To introduce our model in general terms, we consider a time-varying environment H, representing an external signal characterized by a probability p_H(h, t) of being equal to h at time t. This input signal is read by the receptor unit R. The receptor can be either active (A), taking the value r = 1, or passive (P), r = 0, with these two states separated by an energetic barrier ΔE. The transitions between the passive and active states can happen through two different pathways, a "sensing" reaction path (superscript H) that is stimulated by the external signal h, and an "internal" path (superscript I) that mediates the effect of the negative feedback from the storage unit (see Fig. 1a). We further assume, for simplicity, that the rates follow an effective Arrhenius law:
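$$
w^{H}_{P\to A} \;=\; \frac{g}{\tau_R}\, e^{-\beta(\Delta E - h)}, \qquad
w^{H}_{A\to P} \;=\; \frac{g}{\tau_R}\, e^{-\beta \Delta E}, \qquad
w^{I}_{P\to A} \;=\; \frac{1}{\tau_R}\, e^{-\beta \Delta E}, \qquad
w^{I}_{A\to P} \;=\; \frac{1}{\tau_R}\, e^{-\beta(\Delta E \,-\, \kappa \sigma s / N_S)} . \tag{1}
$$

(a reconstruction consistent with the definitions below; the exact form of the published rates may differ)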

where the input is modeled as an additional thermodynamic driving with an energy βh, and τ_R sets the timescale of the receptor. In particular, g represents the ratio between the timescales of the two pathways, and the inverse temperature β = (k_B T)^(-1) encodes the role of thermal noise, as lower values of β are associated with faster reactions.

The negative feedback depends on the energy provided by the storage, σs, where s is the number of active storage molecules. The parameter κ represents the strength of the inhibition, and N_S is the storage capacity. For ease of interpretation, we assume that the activation rate of the receptor due to a reference signal H_ref is balanced by the deactivation rate provided by the feedback when a fraction α = ⟨S⟩/N_S of the storage population is active on average:
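$$
w^{H}_{P\to A}\big(h = H_{\mathrm{ref}}\big) \;=\; w^{I}_{A\to P}\big(s = \alpha N_S\big)
\quad \Longrightarrow \quad
\kappa \;=\; \frac{H_{\mathrm{ref}}}{\sigma\, \alpha} . \tag{2}
$$

(reconstructed, taking g = 1; the published condition may include additional factors)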

This condition sets the inhibition strength by choosing the inhibiting fraction α. At this stage, the reference signal represents the typical environmental stimulus to which the system is exposed. This choice rationalizes the physical meaning of the model parameters, but it does not alter the phenomenology of the system. Crucially, the presence of two different transition pathways, motivated by molecular considerations and pivotal in many energy-consuming biochemical systems [35, 63, 64], creates an internal non-equilibrium cycle in receptor dynamics. Without the storage population, the internal pathway would not be present and the receptor would satisfy an effective detailed balance.

Whenever active, the receptor drives the production of the readout population U, which represents the direct response of the system to environmental signals. As such, it is the observable characterizing habituation (see Fig. 1a). We model its dynamics with a controlled stochastic birth-and-death process [65–67]:
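$$
w^{+}_{u} \;=\; \frac{1}{\tau_U}\, e^{-\beta(V - c\, r)}, \qquad
w^{-}_{u} \;=\; \frac{u}{\tau_U} , \tag{3}
$$

(a parametrization consistent with the stationary averages ⟨U⟩_P = e^(−βV) and ⟨U⟩_A = e^(−β(V−c)) quoted in the Methods)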

where u denotes the number of molecules, τ_U sets the timescale of readout production, and V is the energy needed to produce a readout unit. When the receptor is active, r = 1, this energetic cost is reduced by an effective additional driving c. Active receptors transduce the environmental energy into an active pumping of the readout unit, allowing the readout population to encode information on the external signal.

Finally, readout units stimulate the production of the storage population S. Its number of molecules s follows again a controlled birth-and-death process:
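$$
w^{+}_{s} \;=\; \frac{u}{\tau_S}\, e^{-\beta \sigma}, \qquad
w^{-}_{s} \;=\; \frac{s}{\tau_S} , \tag{4}
$$

with a reflective boundary at s = N_S (a plausible reconstruction; the exact rates may differ).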

where σ is the energetic cost of a storage molecule and τ_S sets its timescale. For simplicity, we assume that readout molecules can catalytically activate storage molecules from a passive pool (see Fig. 1a). Storage units are responsible for encoding the response, playing the role of a finite-time memory.

Our architecture, being devoid of specific biological details, can be adapted to describe systems operating at very different scales (Fig. 1b-d). However, we emphasize that the proposed model is intentionally oversimplified compared to realistic biochemical or neural systems, yet it contains the minimal ingredients for habituation to emerge naturally. As such, the examples shown in Fig. 1b-d are meant solely to illustrate the core architecture. In particular, while receptors can be readily identified, the role of the readout is played by photo-receptors or by the calcium concentration in olfactory or visual sensing mechanisms [14, 15, 17–20, 59], while storage may represent different molecular mechanisms at a coarse-grained level, for example, memory molecules sensitive to calcium activity [42], synaptic depotentiation, and neural populations that regulate the neuronal response [14, 15].

As a final remark, we expect from previous studies [67] that the presence of multiple timescales in the system will be fundamental in shaping information between the different components. Thus, we employ the biologically plausible assumption that U undergoes the fastest evolution, while S and H are the slowest degrees of freedom [29, 68]. We have that τ_U ≪ τ_R ≪ τ_S ∼ τ_H, where τ_H is the timescale of the environment.

The hallmarks of habituation

Habituation occurs when, upon repeated presentation of the same stimulus, a progressive decrease of some response parameter to an asymptotic level is observed [56, 57]. In our model, the response of the system is represented by the average number of active readout units, ⟨U⟩(t). This behavior resembles recent observations of habituation under analogous external conditions in various experimental systems [14, 15, 49, 51, 52]. However, habituation in its strict sense is not sufficient to encompass the diverse array of emergent features recorded in biological systems. In fact, several other hallmarks are closely associated with habituating behavior [56–58]:

  1. Potentiation of habituation — After a train of stimulations and a subsequent short pause, the response decrement becomes more rapid and/or more pronounced.

  2. Spontaneous recovery — If, after response decrement, the stimulus is suppressed for a sufficiently long time, the response recovers at least partially at subsequent stimulations.

  3. Subliminal accumulation — The effect of stimulation may accumulate beyond the habituation level, thus delaying the onset of spontaneous recovery.

  4. Intensity sensitivity — Other conditions being fixed, the less intense the stimulus, the more rapid and/or pronounced the response decrease.

  5. Frequency sensitivity — Other conditions being fixed, more frequent stimulation results in a more rapid and/or more pronounced response decrease.

Fig. 2: Hallmarks of habituation.

(a) An external signal switches between two values, ⟨H⟩_min = 0.1 (background) and ⟨H⟩_max = H_ref = 10 (stimulus). The inter-stimulus interval is ΔT = 100 (a.u.) and the duration of each stimulus is T_s = 100 (a.u.). The average readout population (black) follows the stimulation, increasing when the stimulus is presented. The response decreases upon repeated stimulation, signaling the presence of habituation. Conversely, the average storage population (grey) increases over time. The black dashed line represents the time to habituate t^(hab) (Eq. (6)). (b) If the stimulus is paused and presented again after a short time, the system habituates more rapidly, i.e., the number of stimulations needed to habituate, n^(hab), is reduced. (c) After waiting a sufficiently long time, the response can be fully recovered. (d) If the stimulation continues beyond habituation, the time to recover the response t^(recovery) (Eq. (7)) increases by an amount δt (in red). (e) The relative decrement of the average readout with respect to the initial response, ⟨U⟩^(in), shows that habituation becomes less and less pronounced as we increase ⟨H⟩_max. (f) As expected, the initial response increases with ⟨H⟩_max. (g) The relative difference between ⟨U⟩(t^(hab)) and ⟨U⟩^(in), Δ⟨U⟩, decreases with the stimulus strength. (h) By changing ΔT while keeping the stimulus duration T_s fixed, we observe that more pronounced and more rapid response decrements are associated with more frequent stimulation. Parameters are reported in the Methods, and these hallmarks are qualitatively independent of their specific choice.

These hallmarks were originally proposed based on observations of vertebrate animals, and they are not the only properties characterizing the most general definition of habituation. However, the list above encompasses the features that can be obtained with a single type of stimulation, as in our case, and without any ambiguity in their interpretation (for a detailed discussion, we refer to [56, 57]).

To explore the ability of the proposed archetypal model to capture the aforementioned hallmarks, we consider the simple case of an exponential input distribution, p_H(h, t) ~ exp[−h/⟨H⟩(t)], with uncorrelated signals, i.e., ⟨h(t)h(t′)⟩ = ⟨H⟩(t)⟨H⟩(t′) for t ≠ t′. The time-dependent average ⟨H⟩ periodically switches between two values, ⟨H⟩_min and ⟨H⟩_max, corresponding to a (non-zero) background signal and a (strong) stimulation of the receptor, respectively. The system dynamics is governed by four different operators, Ŵ_X, with X = R, U, S, H, one for each unit and one for the environment. The resulting master equation is:
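$$
\partial_t P \;=\; \big[\, \hat W_R + \hat W_U + \hat W_S + \hat W_H \,\big]\, P , \tag{5}
$$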

where P denotes, in general, the joint propagator P(u, r, s, h, t|u0, r0, s0, h0, t0), with u0, r0, s0 and h0 initial conditions at time t0. By taking advantage of the timescale separation, we can write an exact self-consistent solution to Eq. (5) at all times t (see Methods and Supplementary Information).

In Fig. 2a, we show that the system exhibits habituation in its strict sense. Here, for simplicity, we consider a train of signals arriving at times t_1, …, t_N, each lasting a time T_s, with equal pauses of duration ΔT between them. We define the time to habituate, t^(hab), as the first time at which the relative change of our observable, ⟨U⟩(t), is less than 0.5%, in analogy to [57]. Clearly, t^(hab) is associated with a number of stimuli necessary to habituate, n^(hab), i.e.,
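$$
n^{(\mathrm{hab})} \;=\; \min\left\{ n \,:\, \frac{\big|\langle U\rangle(t_n) - \langle U\rangle(t_{n-1})\big|}{\langle U\rangle(t_{n-1})} < 0.005 \right\}, \qquad
t^{(\mathrm{hab})} \;=\; t_{n^{(\mathrm{hab})}} . \tag{6}
$$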

Our results do not change qualitatively when choosing a different threshold. Hallmark 1, potentiation of habituation, corresponds to a reduction of n^(hab) after one series of stimulation and recovery. This implies a more rapid decrement in the response and a shorter time to achieve habituation, as we show in Fig. 2b. Analogously, hallmark 2 is presented in Fig. 2c, where we show that, by suppressing the stimulus for a sufficiently long time, the response spontaneously recovers to the pre-habituation level. Furthermore, by stimulating the system beyond t^(hab), we also observe an increase in the time needed to achieve complete recovery (hallmark 3). We define this recovery period t^(recovery) as the first time at which the response differs by no more than 1% from the response to the first stimulus, i.e.,
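$$
t^{(\mathrm{recovery})} \;=\; \min\left\{ t_n \,:\, \frac{\big|\langle U\rangle(t_n) - \langle U\rangle(t_1)\big|}{\langle U\rangle(t_1)} \le 0.01 \right\} . \tag{7}
$$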

In Fig. 2d, we show that the recovery period increases by ~5% as a consequence of this subliminal accumulation.

Within the same setting, in Fig. 2e-g we apply stimuli of different strengths ⟨H⟩_max to study the sensitivity to input intensity (hallmark 4). When normalized by the initial response ⟨U⟩^(in) = ⟨U⟩(t_1), less intense stimuli result in stronger response decrements (see Fig. 2e). At the same time, as expected, the absolute value of the initial response increases with the stimulus strength (see Fig. 2f). Hallmark 4 is clearly captured by Fig. 2g, where we quantify the decrease of the normalized total habituation level, Δ⟨U⟩ = ⟨U⟩(t^(hab)) − ⟨U⟩^(in), when the system is exposed to increasing ⟨H⟩_max. The last feature (hallmark 5) is reported in Fig. 2h, where we keep the duration of the stimulus T_s fixed while changing the inter-stimulus interval ΔT. By showing the responses up to the habituation time, we clearly see that more frequent stimulation is associated with a more rapid and more pronounced response decrement.

Summarizing, despite its simplicity and lack of biological details, our model encompasses the minimal ingredients to capture the main hallmarks defining habituation.

Information from habituation

In our architecture, habituation emerges due to the increase in the storage population, which provides increasing negative feedback to the receptor and thus lowers the number of active readout units 〈U⟩ (t). Crucially, by solving the master equation in Eq.(5), we can also study the evolution of the full probability distribution pU,S,H(t). This approach allows us to quantify how the system encodes information on the environment H through its readout population and how it changes during habituation. To this end, we introduce the mutual information between U and H at time t (see Methods):
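$$
I_{U,H}(t) \;=\; H[p_U](t) \;-\; \int \! dh \; p_H(h, t)\; H\big[p_{U|H}\big](h, t) , \tag{8}
$$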

where H[p](t) is the Shannon entropy of the probability distribution p, and p_U|H denotes the conditional probability distribution of U given H. I_U,H measures information in terms of statistical dependencies, i.e., of how factorizable the joint probability distribution p_U,H is. It vanishes if and only if U and H are independent. Notably, the mutual information coincides with the entropy increase of the readout distribution:
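$$
I_{U,H}(t) \;=\; \Delta \mathbb{S}_U(t) , \tag{9}
$$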

where Δ𝕊_U is the change in entropy of the readout population due to repeated measurements of the signal [34].

As in the previous section, we consider a switching signal with ⟨H⟩_max = H_ref, the typical environmental stimulus strength. In Fig. 3a-b, we plot the mutual information at the first signal, I^(1)_U,H, and once the system has habituated, I^(hab)_U,H, as a function of β and σ. Crucially, we find that there exist parameters for which I^(hab)_U,H is larger than I^(1)_U,H. This result suggests that the information on H encoded by U in the habituated system is larger than the initial one. We can quantify this effect by introducing the mutual information gain
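$$
\Delta I_{U,H} \;=\; I^{(\mathrm{hab})}_{U,H} \;-\; I^{(1)}_{U,H} . \tag{10}
$$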

In Fig. 3c, we show that ΔI_U,H displays a peak in an intermediate region of the (β, σ) plane. In this region, the corresponding habituation strength
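$$
\Delta \langle U \rangle \;=\; \langle U\rangle\big(t^{(\mathrm{hab})}\big) \;-\; \langle U\rangle^{(\mathrm{in})} \tag{11}
$$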

attains intermediate values, suggesting that too strong a habituation can be detrimental (Fig. 3d). This behavior is tightly related to the presence of the storage S, which acts as an information reservoir for the system. To rationalize this feature, we introduce the feedback information
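$$
\Delta I_f \;=\; I_{(U,S),H} \;-\; I_{U,H} , \tag{12}
$$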

quantifying how much the simultaneous knowledge of U and S increases the information compared to U alone. Indeed, the change in the feedback information after habituation, ΔΔI_f = ΔI_f^(hab) − ΔI_f^(1), peaks in the same region as ΔI_U,H (Fig. 3e).

Fig. 3: Information and thermodynamics of the model during repeated external stimulation, as a function of the inverse temperature β and the energetic cost of storage σ.

(a-b) The mutual information between the readout population and the external signal at the first stimulus, I^(1)_U,H, is typically lower than the one when the system has habituated, I^(hab)_U,H. (c) The change in the mutual information, ΔI_U,H, displays a peak in a region of the (β, σ) space, where the system exhibits optimal information gain during habituation. (d) This region corresponds to intermediate habituation strength, as measured by Δ⟨U⟩. (e) The corresponding increase in the feedback information, ΔΔI_f, indicates that the storage is fostering the gain in ΔI_U,H. (f) Habituation promotes a decrease of the internal energy flux, ΔJ_int < 0, suggesting a synergistic energetic advantage of habituation. (g-h) From the dynamical point of view, in the region of maximal information gain (β = 3, σ = 0.6) the average number of readout units, ⟨U⟩, decreases over time, while the average storage population, ⟨S⟩, increases. (i-j) Similarly, both the information encoded on H by the readout, I_U,H, and the feedback information, ΔI_f, increase upon repeated stimulations. (k) The absolute value of the internal energy flux, |J_int|, decreases upon stimulations, while increasing during repeated pauses, when the system moves downhill in energy. Model parameters are as specified in the Methods, with ⟨H⟩_min = 0.1 and ⟨H⟩_max = H_ref = 10.

For small σ, we find that ΔΔI_f may become negative, indicating that too strong a storage production may ultimately impair the information-theoretic performance of the system. Moreover, producing storage molecules requires energy. We can compute the internal energy flux associated with the storage of information through S as
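$$
J_{\mathrm{int}} \;=\; \sigma\, \frac{d \langle S \rangle}{dt} , \tag{13}
$$

(a reconstruction: the readout contribution is omitted since U remains equilibrated)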

which is the total energy flux needed to produce the internal populations (U and S), since U always reaches equilibrium, being the fastest species at play. Its change during habituation is defined as ΔJ_int = J_int^(hab) − J_int^(1). In Fig. 3f, we show that ΔJ_int is typically smaller than zero, hinting at a synergistic thermodynamic advantage of habituation.

In Fig. 3g-k, we show the evolution of the system for values of (β, σ) that lie in the region of maximal information gain. The readout activity decreases in time (Fig. 3g), due to the habituation driven by the increase of ⟨S⟩ (Fig. 3h). In this region, both I_U,H and ΔI_f increase over time (Fig. 3i-j); we note that the increase in I_U,H is concomitant with a reduction of the population that is encoding the signal. Although this may seem surprising, we stress that the mean of U is not directly related to the factorizability of the joint distribution p_U,H. Finally, in Fig. 3k we show that the absolute value of the internal energy flux, |J_int|, sharply decreases in the presence of the stimulus, while increasing during its pauses (the value of J_int is negative in the presence of the background signal, since the system is moving downhill in energy). This behavior is due to the interplay between the storage and readout populations during habituation, and it signals the fact that the system requires progressively less energy to respond as time passes, while also moving less downhill in energy when the stimulus is paused. This observation suggests that the regime of maximal information gain supports habituation with a concurrent energetic advantage.

Fig. 4: Optimality at the onset of habituation and dependence on the external signal strength.

(a) Contour plots in the (β, σ) plane of the stationary mutual information, I^st_U,H, and of the total dissipation of the system per unit energy, Σ̇_tot, in the presence of a constant signal ⟨H⟩ = H_ref = 10. For a given value of β, the system can tune σ to lie on the Pareto front (black line) and thus simultaneously minimize energy consumption and maximize information. Below the front, the system exploits the available energy suboptimally, while the region above the front is physically inaccessible. (b) In the presence of a dynamical input switching between ⟨H⟩_min = 0.1 and ⟨H⟩_max = H_ref, the parameters defining the optimal front capture the region of maximal information gain corresponding to the onset of habituation, where Δ⟨U⟩ starts to be significantly smaller than zero. The gray area enclosed by the dashed vertical lines indicates the location of the Pareto front for β ∈ [3, 3.5]. (c) The Pareto front depends on the strength of the external signal ⟨H⟩_max. In particular, for ⟨H⟩_max < H_ref, a larger storage cost σ is needed at fixed β. Conversely, for ⟨H⟩_max > H_ref, an optimal system can harvest more information by producing more storage, thus exhibiting a smaller σ. (d) If we allow the system to adapt its inhibition strength κ to the stimulus (Eq. (15)), the Pareto fronts for different external signals collapse onto a single optimal curve. Model parameters are specified in the Methods.

The onset of habituation and its functional role

As habituation, information, and their energetic cost appear to be tightly related, we now investigate whether the region of maximal information gain can be retrieved by means of an a priori optimization principle. To do so, we first focus on the case of a constant environment. We assume that the system can tune its internal parameters to optimally respond to the statistics of a prolonged external signal. Thus, we consider a fixed input statistics given by p_H^st(h) ~ exp(−h/H_st), with H_st the average signal strength.

When the system reaches its steady state, we compute the information that the readout has on the signal, I^st_U,H (Fig. 4a), and the total energy consumption. To this end, we must take into account two terms. First, the energy flux in Eq. (13) represents the rate of change in energy due to the driven storage production; we denote the associated energy consumption per unit energy by Σ̇_S. Second, the inhibition pathway also drives the receptor out of equilibrium, leading to a dissipation per unit temperature, Σ̇_R, given by
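$$
\dot\Sigma_R \;=\; \sum_{\nu \in \{H, I\}} \left( w^{\nu}_{P\to A}\, p_P \;-\; w^{\nu}_{A\to P}\, p_A \right) \ln \frac{w^{\nu}_{P\to A}\, p_P}{w^{\nu}_{A\to P}\, p_A} \;\ge\; 0 .
$$

(written here in the standard Schnakenberg form over the two receptor pathways; the exact expression used in the paper may differ by equilibrium contributions)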

We plot the total energy consumption per unit energy in Fig. 4a. In order to understand how the system may achieve large values of mutual information while minimizing its intrinsic dissipation, we can maximize the Pareto functional [69, 70]:
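$$
\mathcal{L}_\gamma \;=\; \gamma\, I^{\mathrm{st}}_{U,H} \;-\; (1 - \gamma)\, \dot\Sigma_{\mathrm{tot}} , \qquad \dot\Sigma_{\mathrm{tot}} \;=\; \dot\Sigma_S + \dot\Sigma_R , \tag{14}
$$

(a reconstruction of the weighted information-dissipation trade-off described below)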

where γ ∈ [0,1] sets the strategy implemented by the system. If γ ≪ 1, the system prioritizes minimizing dissipation, whereas if γ ≈ 1 it acts to preferentially maximize information. The set of (β, σ) that maximize Eq. (14) defines a Pareto-optimal front in the (β, σ) plane (Fig. 4a). At fixed energy consumption, this front represents the maximum information between the readout and the external input that can be achieved. The region below the front is therefore suboptimal. Instead, the points above the front are inaccessible, as higher values of I^st_U,H cannot be attained without increasing Σ̇_tot. We note that, since β usually cannot be directly controlled by the system, the Pareto front indicates the optimal σ to which the system tunes at fixed β (see Methods and Supplementary Information for details).

We now consider once more a system receiving a dynamically switching signal with ⟨H⟩_max = H_st. We first focus on the case H_ref = H_st, with H_ref the reference signal appearing in Eq. (S28). Remarkably, we find that the Pareto-optimal front in the (β, σ) plane qualitatively corresponds to the region of maximal information gain, as we show in Fig. 4b. This implies that a system that has tuned its internal parameters to respond to a constant signal also learns how to respond optimally, in terms of information gain, to a time-varying input of the same strength. Since the region identified by the front leads to intermediate values of Δ⟨U⟩, it corresponds to the "onset of habituation", where the system decreases its response enough to reduce the energy dissipation while storing information to increase I_U,H. Heuristically, the onset of habituation emerges spontaneously when the system attempts to activate its receptor as little as possible, while producing the minimum amount of storage molecules that retains enough information about the external environment.

In Fig. 4c, we then study what happens to the optimal front if ⟨H⟩_max is larger or smaller than the reference signal. We find that, at low ⟨H⟩_max, the Pareto front moves in such a way that a larger storage cost σ is needed at fixed β. This is expected, since at lower signal strengths it is harder for the system to distinguish the input from the background thermal noise. Conversely, when ⟨H⟩_max > H_ref, an optimal system needs to reduce σ to produce more storage and harvest information. Importantly, we find that if ⟨H⟩_max remains close to H_ref, the optimal front remains close to the onset of habituation and thus lies within the region of maximal information gain.

However, we can achieve a collapse of the optimal front if we allow the system to tune the inhibition strength κ to the value of the external signal, i.e.,
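$$
\kappa \;=\; \frac{\langle H\rangle_{\max}}{\sigma\, \alpha} . \tag{15}
$$

(presumably, by analogy with the balance condition in Eq. (2))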

In this way, a stronger input will correspond to a larger κ, and thus to a stronger inhibition. In Fig. 4d, we show that the Pareto fronts obtained with this choice collapse onto a single curve. Crucially, this front still corresponds to the region of maximal information gain, although the specific values of ΔI_U,H naturally depend on ⟨H⟩_max (see Supplementary Information). Thus, in this scenario, a system that is capable of adapting its negative feedback to the environment is also able to tune itself to the onset of habituation, where its responses are optimal from an information-theoretic perspective, for different values of the external stimulus and without tinkering with the energy cost σ.

The role of information storage

The presence of a storage mechanism is fundamental in our model. Furthermore, its role in mediating the negative feedback is suggested by several experimental and theoretical observations [9, 2933]. Whenever the storage is eliminated from our model, habituation cannot take place, highlighting its key role in driving the observed dynamics (see Supplementary Information).

In Fig. 5a, we show that the degree of habituation, Δ⟨U⟩, and the change in the storage population, Δ⟨S⟩, are deeply related to one another. The more ⟨S⟩ relaxes between two consecutive signals, the less the readout population reduces its activity. This ascribes to the storage population the role of an effective memory and highlights its dynamical importance for habituation. Moreover, the dependence of the storage dynamics on the interval between consecutive signals, ΔT, influences the information gain as well. Indeed, as ΔT increases, we observe a decrease of the mutual information at the next stimulus (Fig. 5b). In the Supplementary Information, we further analyze the impact of different signal and pause durations.

We remark here that the proposed model is fully Markovian in its microscopic components, and the memory that governs readout habituation emerges spontaneously from the interplay among the internal timescales. In particular, recent works have highlighted that the storage needs to evolve on a slower timescale, comparable to that of the external input, in order to generate information in the receptor and in the readout [67]. To strengthen our conclusions, we remark that an instantaneous negative feedback implemented directly by U (bypassing the storage mechanism) would lead to no time-dependent modulation of the readout and thus no habituation (see Supplementary Information). Similarly, a readout population evolving on a timescale comparable to that of the signal cannot effectively mediate the negative feedback on the receptor, since its population increase would not lead to habituation (see Supplementary Information). Thus, the negative feedback has to be implemented by a separate degree of freedom evolving on a timescale that is slow and comparable to that of the external signal.

Fig. 5: The role of memory in shaping habituation.

(a) The system response depends on the waiting time ΔT between two external signals. As ΔT increases, the storage decays, and thus memory is lost (green). Consequently, the habituation of the readout population decreases (yellow). (b) As a consequence, the information I_U,H that the system has on the signal H when the new stimulus arrives decays as well. Model parameters for this figure are β = 2.5 and σ = 0.5 in energy units, with the remaining parameters as specified in the Methods.

Minimal features of neural habituation

In neural systems, habituation is typically measured as a progressive reduction of the stimulus-driven neuronal firing rate [14, 15, 53, 54, 59]. To test whether our minimal model can be used to capture the typical neural habituation dynamics, we measured the response of zebrafish larvae to repeated looming stimulations via volumetric multiphoton imaging [71]. From a whole-brain recording of ≈ 55000 neurons, we extracted a subpopulation of ≈ 2400 neurons in the optic tectum with a temporal activity profile that is most correlated with the stimulation protocol (see Methods).

Our model can be extended to qualitatively reproduce some features of the progressive decrease in neuronal response amplitudes. We identify each readout unit with a subpopulation of binary neurons. Then, a fraction of these neurons is randomly turned on each time the corresponding readout unit is activated (see Methods). We tune the model parameters so that the total number of active neurons at the first stimulus is comparable to the experimental one. Moreover, we set the pause and signal durations in line with the typical timescales of the looming stimulation. We choose the model parameters β and σ in such a way that the system operates close to the peak of information gain, with an activity decrease over time that is comparable to that of the experimental data (see Supplementary Information). In this way, we can focus on the effects of the storage and feedback mechanisms without modeling further biological details.

Fig. 6: Habituation in zebrafish larvae.

(a) Normalized neural activity profile in a zebrafish larva in response to the repeated presentation of a visual (looming) stimulation, and comparison with the fraction of active neurons, N_act/N, in our model with stochastic neural activation (see Methods). Stimuli are indicated with colored dots, from blue to red as time increases. (b) PCA of the experimental data reveals that habituation is captured mostly by the second principal component, while the features of the evoked neural response are captured by the first one. Different colors indicate responses to different stimuli. (c) PCA of the simulated neural activations. Although we cannot capture the dynamics of the evoked neural response with a switching input, the core features of habituation are correctly captured along the second principal component. Model parameters are β = 4.5 and σ = 0.15 in energy units, and as in the Methods, so that the system is tuned to the onset of habituation.

The patterns of the model-generated activity are remarkably similar to the experimental ones (see Fig. 6a). We performed a 2-dimensional embedding of the neural activity profiles of all recorded neurons via PCA (explained variance ≈ 70%) and plotted the temporal evolution in this low-dimensional space (Fig. 6b). This procedure reveals that the first principal component (PC) accounts for the evoked neural response, while the second PC mostly reflects the habituation dynamics. We performed the same analysis on data generated from the model, as explained above. As we see in Fig. 6c, the second PC encodes habituation, as in the experimental data, although the neural response in the first PC is replaced by the on/off switching dynamics of the input. This shows that our model is able to capture the main features of the observed neural habituation, without the need for biological details.

Discussion

In this work, we studied a minimal architecture that serves as a microscopic and archetypal description of sensing processes across biological scales. Informed by theoretical and experimental observations, we focused on three fundamental mechanisms: a receptor, a readout population, and a storage mechanism that drives negative feedback. Despite its simplicity, we have shown that our model robustly reproduces the hallmarks associated with habituation in the presence of a single type of repeated stimulation, a widespread phenomenon in both biochemical and neural systems. By quantifying the mutual information between the external signal and readout population, we identified a regime of optimal information gain during habituation. Remarkably, the system can spontaneously tune to this region of parameters if it enforces an information-dissipation trade-off. In particular, optimal systems lie at the onset of habituation, characterized by intermediate levels of activity reduction, as both too-strong and too-weak negative feedback are detrimental to information gain. Finally, we found that, by allowing for a storage inhibition strength that can adapt to the environmental signal, this optimality is input-independent and requires no further adjustment of other internal model parameters. Our results suggest that the functional advantages of the onset of habituation are rooted in the interplay between energy dissipation and information gain, and its general features are tightly linked to the internal mechanisms to store information.

Although minimal, our model can capture basic features of neural habituation, where it is generally accepted that inhibitory feedback mechanisms modulate the stimulus weight [55]. Remarkably, recent works reported the existence of a separate inhibitory neuronal population whose activity increases during habituation [15]. Our model suggests that this population might play the role of a storage mechanism, allowing the system to habituate to repeated signals. However, in neural systems, a prominent role in encoding both short- and long-term information is also played by synaptic plasticity [72, 73] as well as by memory molecules [4244], at a biochemical level. A comprehensive analysis of how information is encoded and retrieved will most likely require all these mechanisms at once. Including an explicit connectivity structure with synaptic updates in our model may help in this direction, at the price of analytical tractability. Furthermore, future works may be able to compare our theoretical predictions with experiments in which the modulation of frequency [15] and intensity of stimulation trigger the observed hallmarks. In this way, we could elucidate the roles and features of internal processes characterizing the system under investigation, along with its information-theoretic performance. Overall, the present results hint at the fact that our minimal architecture may provide crucial insights into the functional advantages of habituation in a wide range of biological systems.

Extensions of these ideas are manifold. The definition of a habituated system relies, in this work as well as in other studies [57], on the definition of a response threshold. However, some of the hallmarks might disappear when habituation is defined as a phenomenon appearing in a time-periodic steady state. To overcome this issue, it may be necessary to extend the model to more realistic molecular schemes encompassing additional storage mechanisms. More generally, understanding the information-theoretic performance of real-world biochemical networks exhibiting habituation remains a compelling perspective to explore. On these premises, the possibility of inferring the underlying biochemical structure from observed behaviors is a fascinating direction [51]. Furthermore, since we focused on repetitions of statistically identical signals, it will be fundamental to characterize the system's response to diverse environments [74]. To this end, incorporating multiple receptors or storage populations may be needed to harvest information in complex conditions. In such scenarios, correlations between external signals may help reduce the encoding effort since, intuitively, S acts as an information reservoir for the system. Moreover, such stored information could be used to make predictions about future stimuli and behavior [60–62]. Indeed, living systems do not passively read external signals but often act upon the environment. We believe that both storage mechanisms and their associated negative feedback will remain core modeling ingredients.

Our work paves the way to understanding how information is encoded and guides learning, prediction, and decision-making, a paramount question in many fields. On the one hand, our model encapsulates the key ingredients to support habituation while still being minimal enough to allow for analytical treatment. On the other hand, it may help the experimental quest for signatures of these physical ingredients in a variety of systems. Ultimately, our results show how habituation, a ubiquitous phenomenon taking place at strikingly different biological scales, may stem from an information-based advantage, shedding light on the optimization principle underlying its emergence and on its relevance for any biological system.

Methods

Model parameters

In this section, we briefly recall the free parameters of the model and the values we use in numerical simulations, unless otherwise specified. In particular, the energetic barrier V − cr fixes the average values of the readout population in the passive and active states, namely ⟨U⟩_P = e^(−βV) and ⟨U⟩_A = e^(−β(V−c)) (see Eq. (3)). Thus, we can fix ⟨U⟩_P and ⟨U⟩_A in lieu of V and c. Similarly, as in Eq. (S28), we can set the inhibiting storage fraction α to fix κ. At any rate, we remark that the emerging features of the model are qualitatively independent of the specific choice of these parameters. Furthermore, we typically consider the averages of the exponentially distributed signal to be ⟨H⟩_max = 10 and ⟨H⟩_min = 0.1 (see Supplementary Information for details). Overall, we are left with β and σ as free parameters. β quantifies the amount of thermal noise in the system: at small β, the thermal activation of the receptor hinders the effect of the signal and makes the system almost unable to process information. Conversely, if β is high, the system must overcome a large thermal inertia, increasing the dissipative cost. In this regime of weak thermal noise, we expect that, given a sufficient amount of energy, the system can effectively process information. In Table I, we summarize the specific parameter values used throughout the main text. Other values, explored to test the robustness of the model, are discussed in the Supplementary Information.

Timescale separation

We solve our system in a timescale separation framework [67, 75, 76], where the storage evolves on a timescale that is much slower than all the other internal ones, i.e.,
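$$
\tau_U \;\ll\; \tau_R \;\ll\; \tau_S \;\sim\; \tau_H .
$$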

The fact that τ_S is the slowest internal timescale at play is crucial for making this component act as an information reservoir. This assumption is also compatible with biological examples. The main difficulty arises from the presence of the feedback: the signal influences the receptor and thus the readout population, which in turn impacts the storage population and finally changes the deactivation rate of the receptor; schematically, H → R → U → S → R, but the causal order does not reflect the temporal one.

Table I: Summary of the model parameters and the values used for numerical simulations, unless otherwise specified. The parameters β and σ qualitatively determine the behavior of the model and are varied throughout the main text.

We start from the master equation for the propagator P(u, r, s, h, t|u0, r0, s0, h0, t0), Eq. (5).

We rescale time by τ_S and introduce two small parameters to control the timescale-separation analysis, ε = τ_U/τ_R and δ = τ_R/τ_H. Since τ_S/τ_H = O(1), we set it to 1 without loss of generality. We then write P = P^(0) + εP^(1) and expand the master equation order by order in ε: at order ε^(−1), we find P^(0)(u, r, s, h, t) = p^st_U(u|r) Π(r, s, h, t), where p^st_U is the stationary readout distribution at fixed receptor state and Π is the marginal distribution of the remaining variables. We obtain that Π obeys the following equation:
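$$
\partial_t \Pi \;=\; \Big[ \tfrac{1}{\delta}\, \hat W_R \;+\; \hat W_H \;+\; \hat W_S\big(\langle u \rangle_r\big) \Big]\, \Pi ,
$$

(schematically, using the linearity of Ŵ_S in u, with ⟨u⟩_r the stationary readout average at fixed receptor state; the exact bookkeeping may differ)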

Yet again, writing Π = Π^(0) + δΠ^(1) allows us to obtain, at order O(δ^(−1)), Π^(0) = p^st_R(r|s, h) F(s, h, t), where p^st_R is the stationary receptor distribution at fixed storage and signal. Expanding first in ε and then in δ sets a hierarchy among the timescales. Crucially, due to the feedback present in the system, we cannot solve the next order explicitly to find F. Indeed, after a marginalization over r, we find, at order O(1), ∂_t F = [Ŵ_H + Ŵ_S(ū(s, h))] F, where ū(s, h) is the average readout at fixed storage and signal. Hence, the evolution operator for F depends manifestly on s, and the equation cannot be solved self-consistently. To tackle the problem, we first discretize time, considering a small interval, i.e., t = t0 + Δt with τ_U ≪ Δt ≪ τ_S, and approximate ū(s, h) ≈ u0. We thus find F(s, h, t|s0, h0, t0) = P(s, t|s0, t0) P_H(h, t|h0, t0) in the domain t ∈ [t0, t0 + Δt], since H evolves independently of the system (see also the Supplementary Information for the analytical steps).

Iterating the procedure over multiple time steps, we end up with a recursive equation for the joint probability p_U,R,S,H(u, r, s, h, t0 + Δt). We are interested in the following marginalization:
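$$
p_{U,S}(u, s', t_0{+}\Delta t) \;=\; \int \! dh \; p_H(h, t_0{+}\Delta t) \sum_{u_0, s_0} p^{\mathrm{st}}_U(u \,|\, s', h)\; P_{u_0}(s_0 \to s')\; p_{U,S}(u_0, s_0, t_0) ,
$$

with p^st_U(u|s', h) = Σ_r p^st_U(u|r) p^st_R(r|s', h) (a reconstruction of the marginalized recursion; the exact expression is Eq. (S13) of the Supplementary Information),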

where P_{u0}(s0 → s') is the propagator of the storage over Δt at fixed readout u0. This is the Chapman-Kolmogorov equation in the timescale-separation approximation. Notice that this solution requires the knowledge of p_U,S at the previous time step, and it has to be solved iteratively.
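To make the scheme concrete, the following self-contained Python sketch iterates this update for a toy version of the model. All functional forms and parameter values below (p_active, mean_readout, storage_generator, and the numbers) are illustrative stand-ins rather than the paper's exact expressions; the structure of the loop is the point: at every step the fast variables R and U are replaced by their stationary statistics at fixed (s, h), and only the slow storage distribution is propagated.

```python
import numpy as np
from scipy.linalg import expm

# Toy iteration of the timescale-separated solution; every rate below is a
# hypothetical stand-in, not the paper's exact parametrization.
NS, tau_S, dt = 30, 20.0, 0.5        # storage capacity/timescale; tau_U, tau_R << dt << tau_S
beta, sigma = 1.0, 0.5               # inverse temperature and storage cost
V, c, kappa = 2.0, 3.0, 30.0         # readout cost, receptor driving, inhibition strength

def p_active(h, s):
    """Stationary receptor activity at fixed (h, s): sensing vs. feedback pathway."""
    w_up = np.exp(beta * h) + 1.0                        # activation (sensing + internal)
    w_dn = 1.0 + np.exp(beta * kappa * sigma * s / NS)   # deactivation (internal feedback)
    return w_up / (w_up + w_dn)

def mean_readout(h, s):
    """Stationary mean readout: Poisson means exp(-beta (V - c r)) for r = 1, 0."""
    pa = p_active(h, s)
    return pa * np.exp(-beta * (V - c)) + (1.0 - pa) * np.exp(-beta * V)

def storage_generator(u_bar):
    """Birth-death generator W_S(u_bar), reflective at s = NS (column convention)."""
    W = np.zeros((NS + 1, NS + 1))
    for s in range(NS + 1):
        b = u_bar * np.exp(-beta * sigma) / tau_S if s < NS else 0.0
        d = s / (NS * tau_S)                             # slow decay (toy choice)
        if s < NS: W[s + 1, s] = b
        if s > 0:  W[s - 1, s] = d
        W[s, s] = -(b + d)
    return W

s_vals = np.arange(NS + 1)
p_s = np.zeros(NS + 1); p_s[0] = 1.0                     # empty storage at t = 0
for step in range(2000):
    t = step * dt
    h = 10.0 if (t % 200.0) < 100.0 else 0.1             # switching stimulus
    u_bar = np.sum(p_s * mean_readout(h, s_vals))        # readout slaved to (s, h)
    p_s = expm(storage_generator(u_bar) * dt) @ p_s      # propagate the slow variable
    if step % 200 == 0:
        print(f"t={t:6.0f}  h={h:4.1f}  <U>={u_bar:5.2f}  <S>={p_s @ s_vals:5.1f}")
```

With these arbitrary choices, the printed ⟨U⟩ decreases over successive stimulations while ⟨S⟩ accumulates, qualitatively mimicking Fig. 2a.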

Explicit solution for the storage propagator

To find a numerical solution for our system, we first need to compute the propagator P(s0, t0 → s, t). Formally, we have to solve the master equation

where we used the shorthand notation P(s0 → s) = P(s0, t0 → s, t). Since our formula has to be iterated over small time steps, i.e., t − t0 = Δt ≪ 1, we can write the propagator as follows:
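$$
P(s_0 \to s) \;=\; \sum_{v} a^{(v)}\, w_v(s)\, e^{\lambda_v \Delta t} ,
$$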

where w_v and λ_v are, respectively, the eigenvectors and eigenvalues of the transition matrix Ŵ_S(u0),
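$$
\hat W_S(u_0)\, w_v \;=\; \lambda_v\, w_v ,
$$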

and the coefficients a(v) are such that
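$$
\sum_{v} a^{(v)}\, w_v(s) \;=\; \delta_{s, s_0} .
$$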

Since the eigenvalues and eigenvectors of Ŵ_S(u0) might be computationally expensive to find, we employ a further simplification. As Δt → 0, we can restrict the matrix to jumps to the n nearest neighbors of the initial state (s0, t0), assuming that all other states are left unchanged over small time intervals. We take n = 2 and check the accuracy of this approximation against the full simulation for a limited number of time steps.
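As an illustration, the sketch below compares the short-time propagator of a full birth-and-death generator with its restriction to a window of n = 2 neighbors around the initial state; the rates are generic placeholders, not the model's.

```python
import numpy as np
from scipy.linalg import expm

NS, dt, n = 30, 0.05, 2
birth = lambda s: 3.0 * (1.0 - s / NS)    # hypothetical birth rate at fixed readout
death = lambda s: s / NS                  # hypothetical death rate

def generator(lo, hi):
    """Birth-death generator on states lo..hi, reflecting at the window edges."""
    m = hi - lo + 1
    W = np.zeros((m, m))
    for i, s in enumerate(range(lo, hi + 1)):
        b = birth(s) if s < hi else 0.0
        d = death(s) if s > lo else 0.0
        if s < hi: W[i + 1, i] = b
        if s > lo: W[i - 1, i] = d
        W[i, i] = -(b + d)
    return W

s0 = 10
full = expm(generator(0, NS) * dt)[:, s0]          # exact short-time propagator from s0
lo, hi = s0 - n, s0 + n
local = expm(generator(lo, hi) * dt)[:, s0 - lo]   # truncated to 2n + 1 states
print(f"max deviation: {np.abs(full[lo:hi + 1] - local).max():.2e}")  # small as dt -> 0
```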

Mean-field relationship

We note that ⟨U⟩ and ⟨S⟩ satisfy the following mean-field relationship:
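$$
\langle U \rangle \;=\; \langle U \rangle_P \;+\; \big( \langle U \rangle_A - \langle U \rangle_P \big)\, f_0\!\left( \frac{\langle S \rangle}{N_S} \right) , \tag{S1}
$$

(a schematic reconstruction; the precise form is derived in the Supplementary Information)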

where f0(x) is an analytical function of its argument (see Supplementary Information). Eq. (S1) clearly states that only the fraction of active storage units is relevant to determining the habituation dynamics.

Mutual information

Once we have p_U(u, t) (obtained by marginalizing p_U,S over s) for a given p_H(h, t), we can compute the mutual information

where H is the Shannon entropy. For the sake of simplicity, we consider an external signal following an exponential distribution, p_H(h, t) = λ(t)e^(−λ(t)h). Notice that, in order to determine this quantity, we need the conditional probability p_U|H(u, t). In the Supplementary Information, we show how all the necessary joint and conditional probability distributions can be computed from the dynamical evolution derived above.
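As an illustration of how these distributions enter the computation, the sketch below evaluates I_(U,H) on a discretized signal grid; the conditional distribution used here is a toy Poisson stand-in, not the model's actual p_U|H.

```python
import numpy as np
from scipy.stats import poisson

def shannon(p):
    """Shannon entropy of a discrete distribution (in nats)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

h = np.linspace(0.01, 60.0, 500)                 # discretized signal values
p_h = np.exp(-h / 10.0); p_h /= p_h.sum()        # exponential signal, <H> = 10

u = np.arange(60)
# toy conditional p_{U|H}: Poisson readout whose mean grows with the signal
p_u_given_h = poisson.pmf(u[None, :], mu=(1.0 + 0.5 * h)[:, None])
p_u_given_h /= p_u_given_h.sum(axis=1, keepdims=True)

p_u = p_h @ p_u_given_h                          # marginal readout distribution
I_UH = shannon(p_u) - p_h @ np.array([shannon(row) for row in p_u_given_h])
print(f"I_(U,H) = {I_UH:.3f} nats")
```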

We also highlight here that the timescale separation implies IS,H = 0, since
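$$
p_{S|H}(s \,|\, h, t) \;=\; p_S(s, t) \quad \Longrightarrow \quad I_{S,H}(t) \;=\; 0 .
$$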

Although it may seem surprising, this is a direct consequence of the fact that S is influenced by H only through the stationary state of U. Crucially, the presence of the feedback is still fundamental in promoting habituation. Indeed, we can always write the mutual information between the signal H and the readout U and storage S together as I_(U,S),H = ΔI_f + I_U,H, where ΔI_f = I_(U,S),H − I_U,H = I_(U,H),S − I_U,S. Since ΔI_f > 0 (by standard information-theoretic inequalities), the storage increases the information that the two populations together have on the external signal. Overall, although S and H are independent in this limit, the feedback is paramount in shaping how the system responds to the external signal and stores information about it.

Pareto optimization

We perform a Pareto optimization at stationarity in the presence of a prolonged stimulation. We seek the optimal values of (β, σ) by maximizing the functional in Eq. (14) of the main text. Hence, we maximize the information between the readout and the signal while simultaneously minimizing the dissipation of the receptor, induced by both the signal and the feedback process, together with the dissipation associated with storage production, as discussed in the main text. The dissipative contributions are computed per unit energy to be comparable with the mutual information. In the Supplementary Information, we detail the derivation of the Pareto front and investigate the robustness of this optimization strategy.
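Operationally, the front can be extracted by scanning a (β, σ) grid and maximizing a functional of the form of Eq. (14) for each γ, as sketched below; the information and dissipation surfaces used here are arbitrary surrogates, and only the extraction procedure itself is meaningful.

```python
import numpy as np

betas = np.linspace(0.5, 5.0, 60)
sigmas = np.linspace(0.05, 2.0, 60)
B, S = np.meshgrid(betas, sigmas, indexing="ij")
info = B / (1.0 + B) * np.exp(-((S - 1.0 / B) ** 2))   # surrogate I^st(beta, sigma)
diss = B * np.exp(-S)                                  # surrogate dissipation

front = []
for gamma in np.linspace(0.01, 0.99, 99):              # sweep the trade-off weight
    L = gamma * info - (1.0 - gamma) * diss            # functional of the form of Eq. (14)
    i, j = np.unravel_index(np.argmax(L), L.shape)     # optimum at this gamma
    front.append((betas[i], sigmas[j], info[i, j], diss[i, j]))

for b, s, I, D in front[::25]:                         # a few points along the front
    print(f"beta={b:4.2f}  sigma={s:4.2f}  I={I:5.3f}  diss={D:5.3f}")
```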

Recording of whole brain neuronal activity in zebrafish larvae

Acquisitions of zebrafish brain activity were carried out in one Elavl3:H2B-GCaMP6s larva at 5 days post fertilization, raised at 28 °C on a 12 h light/12 h dark cycle, according to the approval by the Ethical Committee of the University of Padua (61/2020 dal Maschio). The subject was embedded in a 2% agarose gel, and brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Data were acquired at 30 frames per second, covering an effective field of view of about 450 × 900 μm with a resolution of 512 × 1024 pixels. The volumetric module acquires a volume of about 180–200 μm in thickness, encompassing 30 planes separated by about 7 μm, at a rate of 1 volume per second, sufficient to track the slow dynamics associated with the fluorescence-based activity reporter GCaMP6s. Visual stimulation was presented in the form of a looming stimulus at 150 s intervals, centered on the fish eye (see Supplementary Information). Neuron identification and anatomical registration were performed as described in [71].

Data analysis

The acquired temporal series were first processed using an automatic pipeline, including motion-artifact correction, temporal filtering with a 3 s rectangular window, and automatic segmentation. The obtained dataset was manually curated to resolve segmentation errors and to integrate cells not detected automatically. We fit the activity profiles of about 55000 cells with a linear regression model using a set of basis functions representing the expected responses to each stimulation event. These basis functions were obtained by convolving the exponentially decaying kernel of the GCaMP signal lifetime with the square waveforms characterizing the presentation of the corresponding visual stimulus. The resulting score coefficients of the fit were used to extract the cells whose score fell within the top 5% of the distribution, resulting in a population of ≈ 2400 neurons whose temporal activity profiles correlate most with the stimulation protocol. The resulting fluorescence signals F(i) were processed by removing a moving baseline to account for baseline drift and fast oscillatory noise [77]. See the Supplementary Information for details.

Model for neural activity

Here, we describe how our framework is modified to mimic neural activity. Each readout unit, u, is interpreted as a population of N neurons, i.e., a region dedicated to the sensing of a specific input. When a readout population is activated at time t, each of its N neurons fires with probability p. We set N = 20 and p = 0.5. N was set to match the number of observed neurons in data and simulations, while p only controls the dispersion of the points in Fig. 6c, thus not altering the main message. The dynamics of each readout unit follows our dynamical model. Due to habituation, some of the readout units activated by the first stimulus will not be activated by subsequent stimuli. Although the evoked neural response cannot be captured by this extremely simple model, its archetypal ingredients (dissipation, storage, and feedback) are informative enough to reproduce the low-dimensional habituation dynamics found in the experimental data.
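A minimal generative sketch of this step is given below; the per-stimulus activation probabilities q are a hypothetical habituating profile standing in for the full R-U-S dynamics, and the 120 × 20 = 2400 simulated neurons match the size of the recorded population.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 20, 0.5                   # neurons per readout unit; firing probability
n_units, n_stimuli = 120, 9      # 120 * 20 = 2400 neurons in total

# hypothetical habituating activation probabilities for the readout units:
# the chance that a unit responds decays over successive stimuli
q = np.linspace(1.0, 0.35, n_stimuli)

raster = np.zeros((n_stimuli, n_units * N), dtype=int)
for t in range(n_stimuli):
    active_units = rng.random(n_units) < q[t]          # which readout units respond
    fires = rng.random((n_units, N)) < p               # neuron-level firing
    raster[t] = (active_units[:, None] & fires).ravel()

print("active neurons per stimulus:", raster.sum(axis=1))  # decreasing on average
```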

Supplementary Information

S1. Detailed Solution of the Master Equation

Consider the transition rates introduced in the main text, Eq. (1).

We set a reflective boundary for the storage at s = N_S, corresponding to the maximum number of storage molecules in the system. Moreover, for the sake of simplicity, we take g = 1. Retracing the steps of the Methods, the master equation governing the evolution of the propagator of all variables, P(u, r, s, h, t|u0, r0, s0, h0, t0), is:

We solve this equation by employing a timescale separation, i.e., τ_U ≪ τ_R ≪ τ_S ∼ τ_H, where τ_X is the typical timescale of unit X = U, R, S, and τ_H is the typical timescale of the signal dynamics. Motivated by several biological examples, we assumed that the readout population undergoes the fastest dynamics, while the storage and signal evolutions are the slowest ones. Defining ε = τ_U/τ_R and δ = τ_R/τ_H, and setting τ_S/τ_H = 1 without loss of generality, we have:
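$$
\partial_t P \;=\; \left[ \frac{1}{\epsilon\, \delta}\, \hat W_U \;+\; \frac{1}{\delta}\, \hat W_R \;+\; \hat W_S \;+\; \hat W_H \right] P .
$$

(reconstructed from the stated rescaling; the prefactors follow from the rates scaling as 1/τ_X once time is measured in units of τ_S)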

We propose a solution of the form P = P^(0) + εP^(1). By inserting this expression into the equation above and solving order by order in ε, at order ε^(−1) we have that:
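$$
P^{(0)}(u, r, s, h, t) \;=\; p^{\mathrm{st}}_U(u \,|\, r)\; \Pi(r, s, h, t) ,
$$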

where pst solves the master equation for the readout evolution at a fixed r:
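$$
0 \;=\; \lambda(r) \big[ p^{\mathrm{st}}_U(u{-}1 \,|\, r) - p^{\mathrm{st}}_U(u \,|\, r) \big] \;+\; (u{+}1)\, p^{\mathrm{st}}_U(u{+}1 \,|\, r) \;-\; u\, p^{\mathrm{st}}_U(u \,|\, r) ,
$$

(presumably, with the birth-and-death rates of Eq. (3))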

with λ(r) = e^(−β(V − cr)). Hence,
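$$
p^{\mathrm{st}}_U(u \,|\, r) \;=\; e^{-\lambda(r)}\, \frac{\lambda(r)^u}{u!} ,
$$

i.e., a Poisson distribution of mean λ(r), as expected for a linear birth-and-death process.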

At order ε^0, we find the equation for Π reported in the Methods.

To solve this equation, we propose a solution of the form Π = Π^(0) + δΠ^(1). Hence, again, at order δ^(−1), we have that Π^(0) = p^st_R(r|s, h) F(s, h, t), where p^st_R satisfies the steady-state equation for the fastest degree of freedom, with all the others fixed. In this case, it is just the solution of the rate equation for the receptor:
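$$
p^{\mathrm{st}}_R(1 \,|\, s, h) \;=\; \frac{w_{P\to A}}{w_{P\to A} + w_{A\to P}} , \qquad
p^{\mathrm{st}}_R(0 \,|\, s, h) \;=\; 1 - p^{\mathrm{st}}_R(1 \,|\, s, h) ,
$$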

where w_(P→A) = w^H_(P→A) + w^I_(P→A), and similarly for the reverse reaction. At order δ^0, we have an equation for F:
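$$
\partial_t F(s, h, t) \;=\; \hat{\mathcal W}\, F(s, h, t) .
$$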

As already explained in the Methods, due to the feedback, this equation cannot be solved explicitly. Indeed, the operator governing the evolution of F is:
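$$
\hat{\mathcal W} \;=\; \hat W_H \;+\; \hat W_S\big(\bar u(s, h)\big) ,
$$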

with ū(s, h) = Σ_(u,r) u p^st_U(u|r) p^st_R(r|s, h), and using the linearity of Ŵ_S(u) in u. In order to solve this equation, we shall assume that ū(s, h) ≈ u0, bearing in mind that this approximation holds if the interval is small enough, i.e., t = t0 + Δt with τ_U ≪ Δt ≪ τ_S. Therefore, for a small interval, we have:
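$$
F(s, h, t \,|\, s_0, h_0, t_0) \;=\; P(s, t \,|\, s_0, t_0)\; P_H(h, t \,|\, h_0, t_0) .
$$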

Overall, we end up with the following joint probability of the model at time t0 + Δt:
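$$
p_{U,R,S,H}(u, r, s, h, t_0{+}\Delta t) \;=\; p^{\mathrm{st}}_U(u|r)\, p^{\mathrm{st}}_R(r|s, h) \sum_{u_0, s_0} P_{u_0}(s_0 \to s) \int \! dh_0 \, P_H(h, t_0{+}\Delta t \,|\, h_0, t_0)\; p_{U,S,H}(u_0, s_0, h_0, t_0) , \tag{S12}
$$

(a schematic reconstruction; the exact bookkeeping of Eq. (S12) may differ)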

where ∫ dh0 P_H(h, t0 + Δt|h0, t0) p_U,S,H(u0, s0, h0, t0) = p_U,S(u0, s0, t0) p_H(h, t0 + Δt), since H at time t0 + Δt is independent of S and U. When propagating the evolution through intervals of duration Δt, we also assume that H evolves independently, being an external variable, while still affecting the evolution of the other degrees of freedom. This structure is reflected in the equation above. For simplicity, we prescribe p_H(h, t) to be an exponential distribution, p_H(h, t) = λ(t)e^(−λ(t)h), and solve Eq. (S12) iteratively from t0 to a given T in steps of duration Δt, as indicated above. This complex iterative solution arises from the timescale separation because of the cyclic feedback structure: {S, H} → R → U → S. This solution corresponds explicitly to
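$$
p_{U,S}(u, s', t_0{+}\Delta t) \;=\; \int \! dh \; p_H(h, t_0{+}\Delta t) \sum_{u_0, s_0} p^{\mathrm{st}}_U(u \,|\, s', h)\; P_{u_0}(s_0 \to s')\; p_{U,S}(u_0, s_0, t_0) , \tag{S13}
$$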

where P_(u0)(s0 → s') is the propagator of the storage at fixed readout u0. This is the Chapman-Kolmogorov equation in the timescale-separation approximation. Notice that this solution requires the knowledge of p_U,S at the previous time step, and it has to be solved iteratively. Both p_U and p_S can be obtained by an immediate marginalization.

As detailed in the Methods, the propagator P(s0, t0 → s, t), when restricted to small time intervals, can be obtained by solving the birth-and-death process for the storage molecules at fixed readout, limiting the state space to the n nearest neighbors only (we checked that our results are robust upon increasing n for the selected simulation time step).

S2. Information-Theoretic Quantities

By direct marginalization of Eq. (S13), we obtain the evolution of pU (u, t) and pS (s, t) for a given pH (h, t). Hence, we can compute the mutual information as follows:
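$$
I_{U,H}(t) \;=\; H[p_U](t) \;-\; \int \! dh \; p_H(h, t)\; H\big[p_{U|H}\big](h, t) \;=\; \Delta \mathbb{S}_U(t) ,
$$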

where H[pX] is the Shannon entropy of X, and Δ𝕊U is the reduction in the entropy of U due to repeated measurements (see main text). Notice that, in order to determine such a quantity, we need the conditional probability pU|H(u|h, t). This distribution represents the probability that, at a given time, the system attains a value u in the presence of a given signal h. In order to compute it, we can write

by definition. The only dependence on h enters in pst through the eβh factor in the rates.

Analogously, all the other mutual information terms can be obtained. As we showed in the Methods, although IS,H = 0 due to the timescale separation, the presence of the feedback is still fundamental to effectively process information about the signal. This effect can be quantified through the feedback information ΔIf = I(U,S),H − IU,H > 0, as it captures how much the knowledge of S and U together helps to encode information about the signal with respect to U alone. In terms of system entropy, we equivalently have:

which highlights how much the effect of S (the feedback) reduces the entropy of the system due to repeated measurements. In practice, in order to evaluate I(U,S),H, we exploit the following equality:

for which we need pU,S|H. It can be obtained by noting that

from which we immediately see that

which can be easily computed at any given time t.
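Numerically, once pU, pH, and the conditionals above are discretized on a grid of signal values, these quantities reduce to sums. The following is a minimal sketch; the array layouts (joint distributions indexed as `p[x, y]`) are assumptions:

```python
import numpy as np

def shannon(p, axis=None):
    """Shannon entropy in nats; zero-probability entries contribute nothing."""
    p = np.asarray(p)
    with np.errstate(divide="ignore", invalid="ignore"):
        term = np.where(p > 0, p * np.log(p), 0.0)
    return -term.sum(axis=axis)

def mutual_information(p_joint):
    """I_{X,Y} = H[p_X] + H[p_Y] - H[p_{X,Y}] from a joint array p_joint[x, y]."""
    return shannon(p_joint.sum(axis=1)) + shannon(p_joint.sum(axis=0)) - shannon(p_joint)

def feedback_information(p_ush):
    """ΔI_f = I_{(U,S),H} - I_{U,H} from the joint p_ush[u, s, h]."""
    p_us_h = p_ush.reshape(-1, p_ush.shape[-1])  # treat (U, S) as one variable
    p_uh = p_ush.sum(axis=1)                     # marginalize over S
    return mutual_information(p_us_h) - mutual_information(p_uh)
```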

S3. Mean-Field Relation between Average Readout and Storage

Fixing all model parameters, the average values of the storage, 〈S⟩, and of the readout, 〈U⟩, are numerically determined by iteratively solving the system, as shown above. However, an analytical relation between these two quantities can be found starting from the definition of 〈U⟩:

Then, inserting the expression for the stationary probability that we know analytically:

where f0 has a complicated expression, involving the hypergeometric function 2F1, in terms of the model parameters and only of the fraction of active storage, ρS = s/NS (the explicit derivation of this formula is not shown here). Then, we have:

Since we do not have an analytical expression for 〈f0(ρS)〉, we employ the mean-field approximation, reducing all correlation functions to products of averages:

where 〈ρS⟩ denotes the average fraction of active storage molecules. This clearly shows that, given a set of model parameters, 〈U⟩ and the average fraction of storage molecules, 〈ρS⟩, are related. In particular, introducing the change of parameters presented in the Methods, we have the following collapse:

where 〈U⟩A and 〈U⟩P are, respectively, the average of U at fixed r = 1 (active receptor) and r = 0 (passive receptor). It is also possible to perform an expansion of f0 that numerically turns out to be very accurate:

where the coefficients depend only on the model parameters. Since all these relations depend just on the average fraction of storage molecules, it is natural to ask what happens when the size of the storage reservoir is changed, NS → nNS. Fixing all the remaining parameters, both 〈U⟩ and 〈ρS⟩ will change, still satisfying the mutual relation presented above. Let us consider, for the rescaled reservoir, the stationary solution that has the same fraction of active storage molecules. Considering 〈U⟩ ≈ 0 in both settings, we can ask ourselves what is the factor γ relating the two average readouts. Since u enters only linearly in the dynamics of the storage, and the mutual relation depends only on the fraction of active S, we guess that γ = 1/n, as numerically observed. As stated in the main text, we can finally conclude that the storage fraction is the most relevant quantity in our model to determine the effect of the feedback and to characterize the dynamical evolution. This observation makes our conclusions more robust, as they do not depend on the specific choice of the storage reservoir, since there always exists a scaling relation connecting 〈U⟩ and 〈ρS⟩. As such, changing the values of the model parameters we fixed will only affect the number of active molecules, without modifying the main results presented in this work.
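As an illustration of the mean-field step, one can check numerically that 〈f0(ρS)〉 ≈ f0(〈ρS⟩) when the fluctuations of ρS are small. Here f0 is a hypothetical stand-in built from scipy's Gauss hypergeometric function, since the exact model-dependent arguments are not reproduced here:

```python
import numpy as np
from scipy.special import hyp2f1

# Hypothetical stand-in for f0(rho_S): the true expression involves 2F1 with
# model-dependent arguments; arbitrary values are used here for illustration.
def f0(rho_s, a=1.0, b=2.0, c=3.0):
    return hyp2f1(a, b, c, -rho_s)

rho_samples = np.random.beta(2.0, 5.0, size=100_000)  # mock distribution of rho_S
exact = f0(rho_samples).mean()                        # <f0(rho_S)>
mean_field = f0(rho_samples.mean())                   # f0(<rho_S>)
print(exact, mean_field)  # close when fluctuations of rho_S are small
```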

S4. Model Features and Robustness of Optimality

In this section, we show how different choices of model parameters and the external signal features impact the results presented in the main text. In Table S2 we summarize for convenience the parameters of the model. We recall that, for analytical ease, we take the environment to be an exponentially distributed signal,

where λ is its inverse characteristic scale. In particular, we describe the case in which no signal is present by setting λ to be large, so that the typical realizations of H are too small to activate the receptors. On the other hand, when λ is small, the values of h appearing in the rates of the model are large enough to activate the receptor and thus allow the system to sense the signal. In the dynamical case, we take λ(t) to be a square wave, so that 〈H⟩ = 1/λ alternates between two values: 〈H⟩max, the input signal, and 〈H⟩min, the background. We denote with Ton the duration of 〈H⟩max, and with ΔT the duration of 〈H⟩min, i.e., the pause between two subsequent signals. In practice, this mimics an on-off dynamics, where the stochastic signal is present when its average is 〈H⟩max.
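Concretely, the stimulation protocol can be sketched as follows. Only 〈H⟩min = 0.1 and the Ton, ΔT notation come from the text; the remaining values and names are illustrative:

```python
import numpy as np

def lam(t, H_max=1.0, H_min=0.1, T_on=100.0, T_pause=100.0):
    """Square-wave inverse scale λ(t): <H> = 1/λ alternates between H_max
    (signal on, duration T_on) and H_min (background, duration T_pause).
    Here T_on = T_pause, i.e., Ton = ΔT as in the main text; H_max and the
    durations are illustrative assumptions."""
    phase = t % (T_on + T_pause)
    return 1.0 / (H_max if phase < T_on else H_min)

rng = np.random.default_rng(0)
t_grid = np.arange(0.0, 1000.0, 1.0)
# draw h ~ p_H(h, t) = λ(t) exp(-λ(t) h) at each time point
h = np.array([rng.exponential(1.0 / lam(t)) for t in t_grid])
```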

Table S2. Summary of the model parameters and the values used for the numerical simulations, unless otherwise specified. The parameters β and σ qualitatively determine the behavior of the model and are varied.

Figure S7. Effects of the external signal strength and thermal noise level on sensing.

(a) At fixed σ = 0.1 and constant 〈H⟩, the system captures less information as 〈H⟩ decreases, and it needs to operate at high β to sense the signal. In particular, as β increases, IU,H becomes larger. (b) In the dynamical case, outside the optimal curve (black dashed line), at high β and high σ, storage is not produced and no negative feedback is present. The system does not display habituation, and IU,H is smaller than on the optimal curve. (c) In the opposite regime, at low β and σ, the system is dominated by thermal noise. As a consequence, the average readout 〈U⟩ is high even when the external signal is not present (〈H⟩ = 〈H⟩min = 0.1), and the system captures only a small amount of information IU,H, which is masked by thermal activation. Simulation parameters are as in Table S2.

1. Effects of the external signal strength and thermal noise level

In Figure S7a, we study the behavior of the model in the presence of a static exponential signal with average 〈H⟩. We focus on the case of low σ, so that the production of storage is favored. As 〈H⟩ decreases, IU,H decreases as well. Hence, as expected, the information acquired through sensing depends on the strength of the external signal, which coincides with the energy input driving receptor activation. However, the system does not display emergent information dynamics, memory, and habituation for all parameters. In Figure S7b, we see that, when the temperature is low but σ is high, the system does not show habituation and ΔIU,H = 0. On the other hand, when thermal noise dominates (Figure S7c), even when the external signal is small, the system produces a large readout population due to random thermal activation. As a consequence, these random activations mask the signal-driven ones, so the system does not effectively sense the external signal even when it is present, and IU,H remains small. It is important to recall here that, as we show in the main text, IU,H is not monotonic in β at fixed σ. This is due to the fact that low temperatures typically favor sensing and habituation, but they also intrinsically suppress readout production. Thus, at high β, σ needs to be small for information to be stored effectively, since thermal noise is negligible. Vice versa, a small σ is detrimental at high temperatures, since the system then produces storage as a consequence of thermal noise alone. This complex interplay is captured by the Pareto optimization, which gives us an effective relation between β and σ that maximizes storage while minimizing dissipation.
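The Pareto front referred to throughout can be extracted with a standard non-domination scan over the (β, σ) grid. The following is a generic sketch; the objective arrays are placeholders for the simulated storage and dissipation:

```python
import numpy as np

def pareto_front(storage, dissipation):
    """Indices of parameter points maximizing storage while minimizing
    dissipation. storage, dissipation: 1D arrays over the flattened grid."""
    idx = np.argsort(-storage)          # visit points from largest storage down
    front, best_dissipation = [], np.inf
    for i in idx:
        if dissipation[i] < best_dissipation:
            front.append(i)             # not dominated by any previous point
            best_dissipation = dissipation[i]
    return np.array(front)
```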

2. Stationary behavior of the model with a constant signal

In this section, we detail the behavior of the model when exposed to a static signal. As in the main text, we take

with 〈H⟩ = 1/λst = Hst.

We first consider the case where the system does not adapt its inhibition strength κ, i.e., we set

where Href is the reference signal, and α is the fraction of the storage population needed to inhibit the receptor on average (see Table S2). In Figure S8 we plot, as a function of β and σ, the behavior of the stationary average readout population 〈U⟩st, the average storage population 〈S⟩st, the mutual information between the readout and the signal, and the total energy consumption, where

As shown in the main text, however, we can achieve a collapse of the Pareto fronts at different external signals if we allow the system to tune the inhibition strength as

so that a stronger input corresponds to a larger κ, and thus to a stronger inhibition. In Figure S9 we show the behavior of the same stationary quantities in this case, for a large range of Hst.

Figure S8. Behavior of the stationary average readout population 〈U⟩st, the average storage population 〈S⟩st, the mutual information between the readout and the signal, and the total energy consumption, as functions of β and σ, in the presence of a static signal with average Hst. The value of κ is fixed by a reference signal as in Eq. (S28). The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

Figure S9. Behavior of the stationary average readout population 〈U⟩st, the average storage population 〈S⟩st, the mutual information between the readout and the signal, and the total energy consumption, as functions of β and σ, in the presence of a static signal with average Hst. The value of κ is tuned to follow the average value of the external signal, Eq. (S30). The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

3. Static and dynamical optimality

We now study the dynamical behavior of the model under a repeated external signal, for different values of 〈H⟩max. In particular, given an observable O, we define its change under a repeated signal, ΔO, as the difference between the maximal response to the signal after several repetitions, once the system has habituated, and the maximal response to the first signal. In Figure S10b we plot, as a function of β and σ, the mutual information gain ΔIU,H, the feedback information gain ΔΔIf, the habituation strength Δ〈U⟩, and the change in the internal energy flux ΔJint, when, as before, κ is fixed by a reference signal Href. As in the main text, we see in particular that Δ〈U⟩ is maximal in the region where the change in the mutual information, ΔIU,H, and in the feedback information, ΔΔIf, are both small, suggesting that a strong habituation fueled by a large number of storage molecules at a low energy cost is ultimately detrimental for information processing. Furthermore, in this region the change in the internal energy flux, ΔJint, is large. For completeness, in Figure S11 we plot all relevant dynamical quantities at different signal strengths 〈H⟩max in the case of a κ fixed by a reference signal (Eq. (S28)), whereas in Figure S12 we focus on an adaptive κ (Eq. (S30)).

Figure S10. Dynamical optimality under a repeated external signal.

(a) Schematic definition of how we study the dynamical evolution of the relevant observables, by comparing the maximal response to the first signal with the one to a signal received after the system has habituated. (b) Behavior of the increase in readout information, ΔIU,H, in feedback information, ΔΔIf, in average readout population, Δ〈U⟩, and in internal energy flux, ΔJint. The value of κ is fixed by a reference signal as in Eq. (S28). The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

Figure S11. Behavior of the change in average readout population Δ〈U⟩, the readout information gain ΔIU,H, the change in internal energy flux ΔJint, the feedback information gain ΔΔIf, and the final readout information after habituation, as functions of β and σ, in the presence of a switching signal with average 〈H⟩max. The value of κ is fixed by a reference signal as in Eq. (S28). The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

Figure S12. Behavior of the change in average readout population Δ〈U⟩, the readout information gain ΔIU,H, the change in internal energy flux ΔJint, the feedback information gain ΔΔIf, and the final readout information after habituation, as functions of β and σ, in the presence of a switching signal with average 〈H⟩max. The value of κ is tuned to follow the average value of the external signal as in Eq. (S30). The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

4. Interplay between information storage and signal duration

In the main text and so far in this Supplementary Material, we have always considered the case Ton = ΔT. We now study the effect of the signal duration and of the pause length on sensing (Figure S13). If the system only receives short signals separated by long pauses, the slow storage build-up does not reach a high fraction of active molecules. As a consequence, the negative feedback on the receptor is less effective and habituation is suppressed (Figure S13a). Therefore, the peak of ΔIU,H in the (β, σ) plane takes place below the optimal curve, as σ needs to be smaller than in the static case to boost storage production during the brief periods in which the signal is present. On the other hand, in Figure S13b we consider the case of a long signal with short pauses. In this scenario, the slow dynamical evolution of the storage can reach a large number of molecules at larger values of σ, thus moving the optimal dynamical region slightly above the Pareto-like curve. The case of a short signal is comparable to the duration of the looming stimulations in the experimental setting, which can be used to tune the parameters of the model to the peak of information gain.

Figure S13. Effect of the signal duration on habituation.

(a) If the system only receives the signal for a short time (Ton = 50Δt < ΔT = 200Δt), it does not have enough time to reach a high level of storage molecules. As a consequence, both Δ〈U⟩ and ΔIU,H are smaller, and thus habituation is less effective. (b) If the system receives long signals with brief pauses (Ton = 200Δt > ΔT = 50Δt), instead, the habituation mechanism promotes information storage and thus a reduction in the readout activity. The dashed black line indicates the corresponding Pareto front. Simulation parameters are as in Table S2.

S5. The Necessity of Storage

Here, we discuss in detail the necessity of a slow storage mechanism implementing the negative feedback in order to have habituation. We first investigate the possibility that the negative feedback, necessary for any kind of habituating behavior, is implemented directly through the readout population, which undergoes a fast dynamics. We analytically show that this limit leads to the absence of habituation, hinting at the necessity of a slow dynamical feedback in the system (Sec. S5 1). Then, we study the system in the scenario in which U applies the feedback, bypassing the storage S, but acts as a slow variable. Solving the master equation through our iterative numerical method, we show that, also in this case, habituation disappears (Sec. S5 2). These results suggest that not only must the feedback be applied by a slow variable, but such a slow variable must also play a role distinct from the readout population, in line with recent observations in neural systems [15]. The model proposed in the main text is indeed minimal in this respect, as well as compatible with biological examples.

1. Dynamical feedback cannot be implemented by a fast readout

If the storage is directly implemented by the readout population, the transition rates get modified as follows:

At this level, θ is a free parameter playing the same role as κ/NS in the complete model with the storage. We start again from the master equation for the propagator P(u, r, h, t|u0, r0, h0, t0):

where τU ≪ τR ≪ τH, since we are assuming, as before, that U is the fastest variable. Here, ∊ = τU/τR and δ = τR/τH. Notice that now ŴR depends also on u. We can solve the system again by resorting to a timescale separation and rescaling the time by the slowest timescale, τH. We have:

We now expand the propagator at first order in ∊, P = P(0) + ∊P(1). Then, the order ∊−1 of the master equation gives, as above, P(0) = pst(u|r, h)Π(r, h, t|r0, h0, t0). At order ∊0, Eq. (S33) leads to

To solve this, we expand the propagator as Π = Π(0) + δΠ(1) and, at order δ−1, we obtain:

This is an effective 2 × 2 matrix acting on Π(0), where the only rate affected by u is the one multiplying the active states, i.e., r = 1. This equation can be solved analytically, and the solution of Eq. (S35) is:

with log(Θ) = e−β(V − c)(eβθ − 1). Clearly, Θ does not depend on u, since we have summed over the fast variable. Continuing the computation, at order δ0, we obtain:

Thus, the full propagator reads:

From this expression, we can find the joint probability distribution, following the same steps as before:

As expected, since U relaxes instantaneously, the feedback is instantaneous as well. As a consequence, the time-dependent behavior of the system is solely driven by the external signal H, with a fixed amplitude that takes into account the effect of the feedback only on average. This means that there is no dynamical reduction of activity and, as such, no habituation in this scenario. This was to be expected, since all variables are faster than the external signal and, as a consequence, the feedback cannot be implemented over time. The first conclusion is that the variable implementing the feedback has to evolve together with H.

2. Effective dynamical feedback requires an additional population

We now assume that the feedback is again implemented by U, but that U acts as a slow variable. Formally, we take τR ≪ τU ~ τH. Rescaling the time by the slowest timescale, τH (the computation works the same with τU), we have:

with ∊ = τR/τH. We now expand the propagator at first order in ∊, P = P(0) + ∊P(1). Then, the order ∊−1 of the master equation is simply ŴRP(0) = 0, whose solution gives the stationary receptor distribution at fixed u and h. At order ∊0:

The only dependence on r in ŴU(r) is through the production rate of U. Indeed, the effective transition matrix governing the birth-and-death process of the readout molecules is characterized by:

This rate depends only on h, but h evolves in time. Therefore, we would have to scan all (infinitely many) values that h takes and build an infinite-dimensional transition matrix. In order to solve the system, imagine that we are looking at the interval [t0, t0 + Δt]. Then, we can employ the following approximation if Δt ≪ τH:

Using this simplification, we need to solve the following equation:

The explicit solution in the interval t ∈ [t0, t0 + Δt] can be found to be:

with PU the propagator of the readout at fixed signal. The full propagator at time t0 + Δt is then:

Integrating over the initial conditions, we finally obtain:

To numerically integrate this equation, we make two approximations. The first is that we solve the dynamics within all intervals in which the signal does not evolve, where PH is a delta function peaked at the initial condition. For the time points at which the signal changes, this amounts to considering the signal at the previous instant, a good approximation as long as Δt ≪ τH, particularly when the time dependence of the signal is a square wave, as in our case.

The second approximation concerns the computation of the propagator PU. As explained in the Methods of the main text, we restrict our computation to transitions between n nearest neighbors in the U space. In the case of transitions only among next-nearest neighbors, we have the following dynamics:

with the transition matrix:

where the diagonal is fixed to satisfy the conservation of normalization, as usual. The solution is:

where wv and κv are, respectively, the eigenvectors and eigenvalues of the transition matrix Wnn. The coefficients a(v) have to be evaluated according to the initial condition at time t0:

where δu,u0 is the Kronecker delta. To evaluate the information content of this model, we also need:

In Figure S14 we show that, in this model, U does not display habituation. Rather, it increases upon repeated stimuli, acting as the storage does in the main text. On the other hand, the probability of the receptor being active does habituate. This suggests that habituation can only occur in fast variables modulated by slow ones.

It is straightforward to understand intuitively why a direct feedback from U, with this population undergoing a slow dynamics, cannot lead to habituation. Indeed, at a fixed distribution of the external signal, the stationary solution of 〈U⟩ already takes into account the effect of the negative feedback. Hence, if the system starts with a very low readout population (no signal), the dynamics induced by a switching signal can only bring 〈U⟩ to its steady state, with intervals in which the population grows and intervals in which it decreases. Naively speaking, the dynamics of 〈U⟩ becomes similar to that of the storage in the complete model, since it is actually playing the same role of storing information in this simplified context.
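For completeness, the spectral propagation used in this section can be sketched as follows; here W is a generic input standing in for the banded transition matrix at frozen signal built above:

```python
import numpy as np

def propagate_spectral(W, p0, dt):
    """Evolve dp/dt = W p by eigendecomposition, as in the text:
    p(t0 + dt) = Σ_v a_v exp(κ_v dt) w_v, with the coefficients a_v fixed by
    the condition p(t0) = p0 (a Kronecker delta on u0, or any distribution)."""
    kappa, Wv = np.linalg.eig(W)       # eigenvalues κ_v, eigenvectors as columns
    a = np.linalg.solve(Wv, p0)        # solve p0 = Σ_v a_v w_v for the a_v
    p = (Wv * np.exp(kappa * dt)) @ a  # recombine with exponential weights
    return np.real(p)                  # discard spurious imaginary round-off

# e.g., a delta initial condition on u0:
# p0 = np.zeros(NU + 1); p0[u0] = 1.0
```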

Figure S14. Dynamics of a system where U evolves on the same timescale as H and directly implements a negative feedback on the receptor. In this model, 〈U⟩ (in red) increases upon repeated stimulation rather than decreasing, responding to changes in 〈H⟩ (in gray) as the storage does in the full model. On the other hand, the probability of the receptor being active, pR(r = 1) (in black), shows signs of habituation.

S6. Experimental Setup

Acquisitions of the zebrafish brain activity were carried out in Elavl3:H2B-GCaMP6s larvae at 5 days post fertilization, raised at 28 °C on a 12 h light/12 h dark cycle, according to the approval by the Ethical Committee of the University of Padua (61/2020 dal Maschio). Larvae were embedded in 2% agarose gel and their brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Briefly, the imaging path is based on an 8 kHz galvo-resonant commercial 2P design (Bergamo I Series, Thorlabs, Newton, NJ, United States) coupled to a Ti:Sapphire source (Chameleon Ultra II, Coherent) tuned to 920 nm for imaging GCaMP6 signals and modulated by a Pockels cell (Conoptics). The fluorescence collection path includes a 705 nm long-pass main dichroic and a 495 nm long-pass dichroic mirror transmitting the fluorescence light toward a GaAsP PMT detector (H7422PA-40, Hamamatsu) equipped with an EM525/50 emission filter. Data were acquired at 30 frames per second, using a water-dipping Nikon CFI75 LWD 16X W objective covering an effective field of view of about 450 × 900 μm with a resolution of 512 × 1024 pixels. The volumetric module is based on an electrically tunable lens (Optotune) moving continuously according to a saw-tooth waveform synchronized with the frame acquisition trigger. An entire volume of about 180−200 μm in thickness, encompassing 30 planes separated by about 7 μm, is acquired at a rate of 1 volume per second, sufficient to track the relatively slow dynamics associated with the fluorescence-based activity reporter GCaMP6s.

As for the visual stimulation, looming stimuli were generated using Stytra and presented monocularly on a 50 × 50 mm screen using a DLP4500 projector by Texas Instruments. The dark looming dot was presented 10 times with a 150 s interval, centered on the fish eye and with an l/v parameter of 8.3 s, reaching at the end of the stimulation a visual angle of 79.4°, corresponding to an angular expansion rate of 9.5°/s. The acquired temporal series were first processed using an automatic pipeline, including motion-artifact correction, temporal filtering with a rectangular window 3 seconds long, and automatic segmentation using Suite2P. Then, the obtained dataset was manually curated to resolve segmentation errors or to integrate cells not detected automatically. We fit the activity profiles of about 52,000 cells with a linear regression model (scikit-learn Python library), using a set of base functions representing the expected responses to each of the stimulation events, obtained by convolving an exponentially decaying kernel (with the GCaMP signal lifetime) with square waveforms whose amplitude is non-zero only during the presentation of the corresponding visual stimulus. The resulting coefficients were divided by the mean squared error of the fit to obtain a set of scores. The cells whose score fell within the top 5% of the distribution were considered for the dimensionality reduction analysis.
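A sketch of this regression-based scoring follows. Apart from the 10 stimuli and the 150 s interval, all numerical values (sampling rate, kernel lifetime, stimulus duration) and names are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Build base functions: one square wave per stimulus, convolved with an
# exponentially decaying GCaMP kernel. Parameter values are illustrative.
fs, tau_gcamp = 1.0, 3.0                     # volume rate (Hz), kernel lifetime (s)
t = np.arange(0, 1500, 1.0 / fs)
kernel = np.exp(-np.arange(0, 10 * tau_gcamp, 1.0 / fs) / tau_gcamp)
onsets, duration = np.arange(10) * 150.0, 5.0  # 10 stimuli, 150 s apart (duration assumed)

X = np.zeros((t.size, onsets.size))
for j, t_on in enumerate(onsets):
    square = ((t >= t_on) & (t < t_on + duration)).astype(float)
    X[:, j] = np.convolve(square, kernel)[: t.size]

def cell_score(trace):
    """Fit one activity trace; score = regression coefficients / MSE of the fit."""
    reg = LinearRegression().fit(X, trace)
    mse = np.mean((reg.predict(X) - trace) ** 2)
    return reg.coef_ / mse

# Cells whose score falls in the top 5% of the distribution are kept
# for the dimensionality-reduction analysis.
```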

The resulting fluorescence signals F(i), for i = 1, …, Ncells, were processed by removing a moving baseline, to account for baseline drift and fast oscillatory noise [77]. Briefly, for each time point t, we selected a window [t − τ2, t] and evaluated the minimum smoothed fluorescence,

Then, the relative change in fluorescence signal,

is smoothed with an exponential moving average. Thus, the neural activity profile for the i-th cell that we use in the main text is given by

In accordance with the previous literature [77], we set τ0 = 0.2 s, τ1 = 0.75 s, and τ2 = 3 s. The qualitative nature of the low-dimensional activity in the PCA space is not altered by other sensible choices of these parameters.
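A compact numerical sketch of this preprocessing is given below. The boxcar smoothing and the exponential-moving-average implementations are assumptions consistent with the description above; dt is the sampling interval:

```python
import numpy as np

def dff(F, dt, tau0=0.2, tau1=0.75, tau2=3.0):
    """Relative fluorescence change with a moving baseline.
    F0(t): minimum over [t - tau2, t] of F smoothed with a boxcar of width tau1;
    the resulting (F - F0)/F0 is then smoothed with an exponential moving
    average of timescale tau0."""
    w1 = max(1, int(round(tau1 / dt)))
    F_smooth = np.convolve(F, np.ones(w1) / w1, mode="same")  # boxcar, width tau1
    w2 = max(1, int(round(tau2 / dt)))
    F0 = np.array([F_smooth[max(0, i - w2): i + 1].min() for i in range(F.size)])
    r = (F - F0) / F0                       # relative change in fluorescence
    alpha = 1.0 - np.exp(-dt / tau0)        # exponential moving average, timescale tau0
    out = np.empty_like(r)
    out[0] = r[0]
    for i in range(1, r.size):
        out[i] = alpha * r[i] + (1 - alpha) * out[i - 1]
    return out
```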

Acknowledgements

G.N., S.S., and D.M.B. acknowledge Amos Maritan for fruitful discussions. D.M.B. thanks Paolo De Los Rios for insightful comments. G.N. and D.M.B. acknowledge the Max Planck Institute for the Physics of Complex Systems for hosting G.N. during several stages of this work. G.N. acknowledges funding provided by the Swiss National Science Foundation through its Grant CRSII5_186422. D.M.B. is funded by the STARS@UNIPD grant with the project “ActiveInfo”.