Sensing, adaptation, and habituation mechanisms in biological systems span a wide range of temporal and spatial scales, from the cellular to the multicellular level, forming the basis for decision-making and the optimization of limited resources [1–8]. Prominent examples include the modulation of flagellar motion by bacteria according to changes in the local nutrient concentration [9–11], the regulation of immune responses through feedback mechanisms [12, 13], the progressive reduction of neural activity in response to repeated looming stimulation [14, 15], and the maintenance of high sensitivity in varying environments for olfactory or visual sensing in mammalian neurons [16–20].

In the last decade, advances in experimental techniques fostered the quest for the core biochemical mechanisms governing information processing. Simultaneous recordings of hundreds of biological signals made it possible to infer distinctive features directly from data [21–24]. However, many of these approaches fall short of describing the connection between the underlying chemical processes and the observed behaviors [25–28]. To fill this gap, several works focused on the architecture of specific signaling networks, from tumor necrosis factor [12, 13] to chemotaxis [9, 29], highlighting the essential structural ingredients for their efficient functioning. An observation shared by most of these studies is the key role of a negative feedback mechanism in inducing emergent adaptive responses [30–33]. Moreover, any information-processing system, biological or not, must obey information-thermodynamic laws that prescribe the necessity of a storage mechanism [34]. This is an unavoidable feature of numerous chemical signaling networks [9, 30] and biochemical realizations of Maxwell demons [35, 36]. The storage of information consumes energy during processing [37, 38], and thus general sensing mechanisms have to take place out of equilibrium [3, 39–41]. Recently, the discovery of memory molecules [42–44] hinted at the implementation of storage mechanisms directly at the molecular scale. Overall, negative feedback, storage, and out-of-equilibrium conditions seem to be necessary requirements for a system to process environmental information and act accordingly. To quantify the performance of a biological information-processing system, theoretical developments made substantial progress in highlighting thermodynamic limitations and advantages [16, 45, 46], taking a step toward linking information and dissipation from a molecular perspective [35, 47, 48].

Here, we consider an archetypal model for sensing that encapsulates all these key ingredients, i.e., negative feedback, storage, and energy dissipation, and study its response to repeated stimuli. Indeed, in the presence of dynamic environments, it is common for a biological system to keep encountering the same stimulus. Under these conditions, a progressive decay in the amplitude of the response is often observed, both at the sensory and the molecular level. In general terms, such adaptive behavior is usually termed habituation, and it is a common phenomenon, from biochemical concentrations [49–51] to populations of neurons [14, 15, 52, 53]. In particular, habituation characterizes many neuronal circuits along the sensory-motor processing pathways in most living organisms, both invertebrates and vertebrates [52, 53]. While it has been proposed that inhibitory feedback mechanisms modulate the stimulus weight [15, 54], there are different hypotheses about the actual functional role of habituation in regulating the information flow, optimal processing, and sensitivity calibration [55], and in controlling behavior and prediction during complex tasks [56–58]. Despite its ubiquity, the onset of habituation from general microscopic models remains unexplored, along with its functional advantages in terms of information gain and energy dissipation.

In this work, we tackle these questions. Our architecture is inspired by those found in real biological systems operating at different scales [12, 16] and resembles the topologies of minimal networks exhibiting adaptive features in different contexts [49, 50, 59]. By deriving the exact solution of this prototypical model, we identify the key mechanism driving habituation: the negative feedback provided by slow information storage. We find that the information gain over time peaks at intermediate levels of habituation, revealing that optimal processing performance is not necessarily tied to maximal activity reduction. This optimal region can be retrieved by simultaneously minimizing dissipation and maximizing information, hinting at an a priori optimal region of operation for biological systems. Our results pave the way to understanding the emergence of habituation, along with its information-theoretic advantage.

Results

Archetypal model for sensing in biological systems

We describe a model with three fundamental units: a receptor (R), a readout population (U), and a storage population (S) (Figure 1a). The presence of these three distinct components is a feature shared by several topologies exhibiting adaptive responses of various kinds [29, 49, 50, 59]. The role of the receptor is to sense external inputs, which we represent as a time-varying environment (H) described by the probability distribution pH(h, t). The receptor can be either active (r = 1) or passive (r = 0), with the two states separated by an energetic barrier, ΔE. A strong external signal favors activation of the receptor, while inhibition takes place through a negative feedback process mediated by the concentration of the storage, [S]. The negative feedback acts to reduce the level of activity of the system, and its effect on the receptor resembles known motifs found in biochemical systems (see Figure 1b-e) [12, 16]. We model the activation of the receptor by the environmental signal through a “sensing pathway” (superscript H). Instead, the inhibition mechanism affects an “internal pathway” of reactions (superscript I). By assuming that the rates follow an effective Arrhenius’ law, we end up with:

where τR sets the timescale of the receptor. For simplicity, the driving induced by inhibition appearing in the internal pathway depends linearly on the concentration of S at a given time through a proportionality constant κ. Here, the inverse temperature β encodes the thermal noise, as lower values of β are associated with faster reactions (see Methods for a detailed discussion of the model parameters). Crucially, the presence of two different transition pathways, motivated by molecular considerations and pivotal in many energy-consuming biochemical systems [35, 60, 61], creates an internal non-equilibrium cycle in the receptor dynamics.

Whenever active, the receptor drives the production of the readout population U, which represents the direct response of the system to environmental signals. As such, we consider it to be the observable that characterizes habituation. It may describe, for example, photo-receptors or calcium concentration for olfactory or visual sensing mechanisms [14, 15, 17–20]. We model its dynamics with a stochastic birth-and-death process:
Γu→u+1(r) = e−β(V−cr)/τU ,   Γu→u−1(u) = u/τU ,   (2)
where u denotes the number of molecules, τU sets the timescale of readout production, and V is the energy needed to produce a readout unit. When the receptor is active, r = 1, this effective energetic cost is reduced by an additional driving c. Thus, active receptors transduce the environmental energy into an active pumping on the readout node, allowing readout units to encode information on the external signal.

Finally, readout units stimulate the production of the storage population S. Its number of molecules s follows a controlled birth-and-death process [62–64]:
Γs→s+1(u) = u e−βσ/τS ,   Γs→s−1(s) = s/τS ,   (3)
where σ is the energetic cost of a storage unit and τS sets the timescale. For simplicity, we assume a first-order catalytic form for Γs→s+1(u) and allow for a maximum number of storage units, NS, so that [S] = s/NS. The storage may represent different molecular mechanisms at a coarse-grained level, for example, memory molecules sensitive to calcium activity [42], synaptic depotentiation, or neural populations that regulate the neuronal response [14, 15]. Storage units, as we will see, are responsible for encoding the readout response and play the role of a finite-time memory.

Our model, being devoid of specific biological details, can be adapted to describe systems at very different scales (Figure 1b-d). We do not expect any detailed biochemical implementation to qualitatively change our results. However, we expect from previous studies [64] that the presence of multiple timescales in the system is fundamental in shaping the information exchanged between the different components. Thus, we employ the biologically plausible assumption that U undergoes the fastest evolution, while S and H are the slowest degrees of freedom [29, 65]. We have τU ≪ τR ≪ τS ∼ τH, where τH is the timescale of the environment.
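
To make the architecture concrete, the sketch below simulates the three units with a simple fixed-step stochastic scheme. The readout and storage rates follow the birth-and-death forms given above, while the two receptor pathways and all numerical values (dE, V, c, kappa, the timescales) are illustrative assumptions, since the text fixes them only up to their qualitative structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; the receptor pathway forms and energy scales are assumptions
beta, sigma = 3.0, 0.6
dE, V, c, kappa = 1.0, 2.0, 1.7, 1.2
tau_U, tau_R, tau_S = 0.01, 0.1, 10.0              # tau_U << tau_R << tau_S
NS = 25                                            # maximum number of storage units
H_min, H_max, T_on, T_off = 0.1, 10.0, 50.0, 50.0  # switching exponential signal

def mean_H(t):
    """Square-wave average of the exponentially distributed signal."""
    return H_max if (t % (T_on + T_off)) < T_on else H_min

def rates(u, r, s, h):
    """Transition rates: sensing pathway activates R, storage inhibits it (assumed
    Arrhenius forms); U and S follow the birth-and-death rates described above."""
    return np.array([
        (1 - r) * np.exp(-beta * (dE - h)) / tau_R,             # receptor activation (sensing)
        r * np.exp(-beta * (dE - kappa * s / NS)) / tau_R,      # deactivation (storage feedback)
        np.exp(-beta * (V - c * r)) / tau_U,                    # readout birth
        u / tau_U,                                              # readout death
        (u * np.exp(-beta * sigma) / tau_S) if s < NS else 0.0, # storage birth (catalytic)
        s / tau_S,                                              # storage death
    ])

def simulate(T=500.0, dt=1e-3):
    u, r, s, t = 0, 0, 0, 0.0
    traj = []
    while t < T:
        h = rng.exponential(mean_H(t))          # fast environmental fluctuations
        p = -np.expm1(-rates(u, r, s, h) * dt)  # per-channel firing probability in dt
        fire = rng.random(6) < p
        r = min(1, max(0, r + int(fire[0]) - int(fire[1])))
        u += int(fire[2]) - int(fire[3])
        s += int(fire[4]) - int(fire[5])
        t += dt
        traj.append((t, u, r, s))
    return np.array(traj)

traj = simulate()
```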

Sketch of the model architecture and biological examples at different scales. (a) A receptor R transitions between an active (A) and passive (P) state along two pathways, one used for sensing (red) and affected by the environment h, and the other (blue) modified by the storage concentration, [S]. An active receptor increases the response of a readout population U (orange), which in turn stimulates the production of storage units S (green) that provide negative feedback to the receptor. (b) In tumor necrosis factor (TNF) signaling, we can identify a similar architecture. The nuclear factor NF-κB is produced after receptor binding to TNF. NF-κB modulates the encoding of the zinc-finger protein A20, which closes the feedback loop by inhibiting the receptor complex. (c) Similarly, in olfactory sensing, odorant binding induces the activation of adenylyl cyclase (AC). AC stimulates a calcium flux, eventually activating calmodulin kinase II (CaMKII), which phosphorylates and deactivates AC. (d) In neural response, multiple mechanisms take place at different scales. In zebrafish larvae, visual stimulation is projected along the visual stream from the retina to the cortex, a coarse-grained realization of the R-U dynamics. Neural habituation emerges upon repeated stimulation, as measured by calcium fluorescence signals (dF/F0) and by the corresponding 2-dimensional PCA of the activity profiles.

The onset of habituation and its functional role

Habituation occurs when the response of the system, represented by the number of active readout units, decreases upon repeated stimulation. In our architecture, we expect it to emerge due to the increase in the storage population, which in turn provides an increasing negative feedback to the receptor. To study the onset and the features of habituation, we consider a switching exponential signal, pH(h, t) ∝ exp[−h/〈H〉(t)]. The time-dependent average 〈H〉 periodically switches between two values, 〈H〉min and 〈H〉max, corresponding to a vanishing signal and strong stimulation of the receptor, respectively. Overall, the system dynamics is governed by four different operators, ŴX, with X = R, U, S, H, one for each population and one for the environment. The resulting master equation is:
∂tP = [ŴU/τU + ŴR/τR + ŴS/τS + ŴH/τH] P ,   (4)
where P denotes, in general, the joint propagator P (u, r, s, h, t|u0, r0, s0, h0, t0), with u0, r0, s0 and h0 initial conditions at time t0. By taking advantage of the timescale separation, we can write an exact self-consistent solution to Eq. (4) at all times t (see Methods and Supplementary Information).

We assume that 〈H〉 switches to 〈H〉max at equally spaced times t1, …, tN, each stimulus having the same duration ΔT. After a large number of inputs, the system reaches a time-periodic steady state (see Figure 2d-e). Thus, habituation is quantified by the change in the average response of the system:
Δ〈U〉 = 〈U〉(t∞) − 〈U〉(t1) ,   (5)
where t1 is the time of the first signal, and t∞ is the time of a signal at the steady state. Whenever Δ〈U〉 < 0, the system is habituating to the external inputs. In Figure 2a, we study habituation as a function of the inverse temperature β and the energetic cost of storage, σ (see Methods). As expected, habituation is stronger at small σ, where a large storage production provides a strong negative feedback to the receptor, sharply decreasing 〈U〉.
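
Given a trajectory from the simulation sketch above, Δ〈U〉 can be estimated by comparing the mean readout during the first and a late stimulation window (window durations matching the assumed Ton and Toff); negative values indicate habituation:

```python
def habituation_strength(traj, T_on=50.0, T_off=50.0, n_signals=5):
    """Delta<U>: mean readout during a late 'on' window minus that of the first one."""
    t, u = traj[:, 0], traj[:, 1]
    period = T_on + T_off

    def mean_on(k):  # mean readout during the k-th stimulation window
        mask = (t >= k * period) & (t < k * period + T_on)
        return u[mask].mean()

    return mean_on(n_signals - 1) - mean_on(0)

print("Delta<U> =", habituation_strength(traj))  # < 0 signals habituation
```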

During its dynamical evolution, the system encodes information on the environment H. We are particularly interested in how much information is captured by the readout population, which is measured by the mutual information between U and H at time t (see Methods):
IU,H(t) = H[pU](t) − ∫ dh pH(h, t) H[pU|H](t) ,
where H[p](t) is the Shannon entropy of the probability distribution p, and pU|H denotes the conditional probability distribution of U given H. IU,H quantifies the system performance in terms of the information that the readout population captures on the external input at each time. Furthermore, it coincides with the entropy increase of the readout distribution:
IU,H(t) = Δ𝕊U(t) .
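
In practice, IU,H can be evaluated from any discretized joint distribution of u and (binned) h. A minimal numerical sketch of the definition above, where the binning of h is our own choice:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def mutual_information(p_uh):
    """I(U;H) = H[p_U] - sum_h p_H(h) H[p_{U|H=h}] for a joint array p_uh[u, h]."""
    p_h = p_uh.sum(axis=0)
    cond = sum(p_h[j] * entropy(p_uh[:, j] / p_h[j])
               for j in range(p_uh.shape[1]) if p_h[j] > 0)
    return entropy(p_uh.sum(axis=1)) - cond
```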
Evolution of the model under a switching external field H(t). (a) The change in the average readout population Δ〈U〉 between the first signal and after a large number of signals, as a function of the inverse temperature β and the energetic cost of storage σ. Δ〈U〉 quantifies the habituation strength. (b) The change in the mutual information between the readout population and the external field, ΔIU,H. A region with maximal information gain corresponds to intermediate habituation strength. (c) The change in the feedback information ΔIf indicates that, close to the region of maximal information gain, the storage favors information. (d-e) In the region of maximal information gain, the average number of readout units, 〈U〉, decreases with the number of repetitions, while the average storage concentration, 〈S〉, increases. At large times, the system reaches a periodic steady state. (f-g) In the same region, the information encoded on H through the readout, IU,H, increases in time during habituation, boosting in turn the feedback information, ΔIf. (h) The internal dissipation rate due to the production of U and S, Q̇int, decreases in time. Model parameters for panels (d-h) are β = 3 and σ = 0.6 (in units of energy, for simplicity), and as specified in the Methods.

In Figure 2b, we show how the corresponding information gain ΔIU,H, defined in analogy to Eq. (5), changes with β and σ. We find a region where the information gain is maximal. Surprisingly, this region corresponds to intermediate values of Δ 〈U〉, suggesting that strong habituation driven by a low energetic cost of storage is ultimately detrimental to the system.

We can understand this feature by introducing the feedback information
ΔIf = I(U,S),H − IU,H ,
which quantifies how much the simultaneous knowledge of U and S increases IU,H with respect to knowing solely U. We find that, during repeated external stimulation, the change in the feedback information, ΔΔIf, defined again in analogy to Eq. (5), may be negative (Figure 2c). This indicates that the negative feedback on the receptor may impede the information-theoretic performance of the system, independently of the habituation strength. Crucially, ΔΔIf sharply increases in the region of maximal information gain, hinting that, at intermediate values of habituation, the information gain in the readout is driven by the storage mechanism. For the sake of simplicity, and to emphasize the information-theoretic advantage, we refer to this region of maximal information gain and intermediate habituation as the “onset” of habituation.
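
Given a discretized joint distribution over (u, s, h), the feedback information follows from the same routine by treating (U, S) as a single variable (a sketch reusing mutual_information from the snippet above):

```python
def feedback_information(p_ush):
    """Delta I_f = I((U,S);H) - I(U;H) for a joint array p_ush[u, s, h]."""
    nu, ns, nh = p_ush.shape
    return (mutual_information(p_ush.reshape(nu * ns, nh))  # (u, s) as one variable
            - mutual_information(p_ush.sum(axis=1)))        # s marginalized out
```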

In Figure 2(d-g), we show the evolution of the system for values of (β, σ) that lie in the region of maximal information gain. The readout activity decreases in time, modulating the response of the system to the repeated input (Figure 2d). This behavior resembles recent observations of habituation under analogous external conditions in various experimental systems [14, 15, 49–51]. We emphasize that the readout population is the fastest species at play; hence, each point of the trajectory 〈U〉(t) corresponds to a steady-state solution. As expected, the reduction of 〈U〉 is a direct consequence of the increase of the average storage population, 〈S〉 (Figure 2e). In this region, the increases of both IU,H and ΔIf over time during habituation are optimal (Figure 2f-g). This behavior may seem surprising, since the increase in IU,H is concomitant with a reduction of the population that is encoding the signal. However, note that the mean of U is not directly related to the factorizability of the joint distribution pU,H, i.e., to how much information on the signal is encoded in the readout. Furthermore, the inhibitory effect provided by S is enhanced by repeated stimuli, generating a stronger dependency between H and U over time.

Optimality at the onset of habituation. (a-b) Contour plots in the (β, σ) plane of the stationary mutual information, IU,H, and the receptor dissipation per unit temperature, δQR, in the presence of a constant external input. (c) For a given value of β, the system can optimize σ to lie on the Pareto front (black line), simultaneously minimizing δQR and maximizing IU,H. Each point in this space corresponds to a different strategy γ. If γ = 0, the system minimizes dissipation only, and if γ = 1 it only maximizes information. Below the front, the system exploits the available energy suboptimally, reaching lower values of information. In contrast, the region above the front is physically inaccessible. (d-e) In the presence of a dynamical input, the parameters defining the optimal front capture the region of maximal information gain and thus correspond to the onset of habituation, where Δ〈U〉 starts to be significantly smaller than zero. (f) Projection of ΔIU,H and Δ〈U〉 along σ for a range of values of β ∈ [3, 3.5]. The gray area enclosed by the dashed vertical lines indicates the location of the Pareto front for these values. ΔIU,H clearly peaks at optimality, while Δ〈U〉 takes intermediate values.

The increase of IU,H comes along with another intriguing result. Since, during habituation, the concentrations of the internal populations U and S change in time, we can quantify how much energy is required to support these processes. The rate of dissipation into the environment due to these internal mechanisms is (see Methods):

Q̇int(t) = Σx→x′ p(x, t) Γx→x′ ln [Γx→x′/Γx′→x] ,

where the sum runs over all readout and storage transitions, x = (u, s) → x′ ∈ {(u ± 1, s), (u, s ± 1)}, at fixed values of the remaining variables. We refer to Q̇int as the internal dissipation of the system and, in Figure 2h, we show that it decreases over time, hinting at a synergistic thermodynamic advantage at the onset of habituation.

Maximal information gain from an optimization principle

We now investigate whether the region of maximal information gain can be retrieved by means of an a priori optimization principle. To do so, we focus on the case of a constant environment. In this scenario, the system can tune its internal parameters to optimally respond to the statistics of an external input during a prolonged stimulation, i.e., the system “learns” the parameters while measuring an input over a large (infinite) observation time. Thus, the input statistics are given by the stationary distribution pH(h) = λe−λh.

When the system reaches its steady state, the information that the readout has on the signal, IU,H, is estimated from the joint probability pU,H (Figure 3a). At the same time, however, the system is consuming energy to maintain the receptor active. The receptor dissipation per unit temperature, δQR, is given by

as we show in Figure 3b. Large values of the mutual information compatible with minimal dissipation in the receptor can be obtained by maximizing the Pareto functional [66]:
Fγ(β, σ) = γ ĨU,H − (1 − γ) δQ̃R ,   (10)
where γ ∈ [0,1] sets the strategy implemented by the system, and the tildes denote normalized quantities (see Methods). If γ ≪ 1, the system prioritizes minimizing dissipation, whereas if γ ≈ 1 it acts to preferentially maximize information. The set of (β, σ) that maximize Eq. (10) defines a Pareto optimal front in the (δQR, IU,H) space (Figure 3c). At fixed receptor dissipation, this front represents the maximum information between the readout and the external input that can be achieved. The region below the front is therefore suboptimal. Instead, the points above the front are inaccessible, as higher values of IU,H cannot be attained without increasing δQR. We note that, since β usually cannot be directly controlled by the system, the Pareto front indicates the optimal σ to which the system tunes at fixed β (see Methods and Supplementary Information for details).

Along this optimal front, we find that the system displays habituation (see Figure 3d). Furthermore, when plotted in the (β, σ) plane in the presence of a switching dynamical input, the front qualitatively corresponds to the region of maximal information gain and the onset of habituation, as we see in Figure 3e. Remarkably, this implies that once the system tunes its internal parameters to respond to a constant signal to maximize information and minimize dissipation, it also learns to respond optimally to the time-varying input in terms of information gain. In Figure 3f, we show that at fixed β, the Pareto front (gray area) represents the region around the peak of ΔIU,H, where Δ 〈U〉 attains intermediate values. In other words, the onset of habituation emerges spontaneously when the system attempts to activate its receptor as little as possible, while retaining information about the external environment.
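
Operationally, the front can be traced by scanning precomputed grids of the stationary IU,H and δQR over (β, σ). The sketch below assumes such grids are available and normalizes them to their maxima; the text only states that the two objectives are normalized, so this particular normalization is our choice:

```python
import numpy as np

def pareto_front(I_grid, Q_grid, betas, sigmas, n_gamma=51):
    """For each strategy gamma, return the (beta, sigma) maximizing
    gamma * I_norm - (1 - gamma) * Q_norm over the precomputed grids."""
    I_n, Q_n = I_grid / I_grid.max(), Q_grid / Q_grid.max()
    front = []
    for g in np.linspace(0.0, 1.0, n_gamma):
        i, j = np.unravel_index(np.argmax(g * I_n - (1 - g) * Q_n), I_grid.shape)
        front.append((g, betas[i], sigmas[j], I_grid[i, j], Q_grid[i, j]))
    return front
```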

The role of information storage

The presence of a storage mechanism is fundamental in our model. Furthermore, its role in mediating the negative feedback is suggested by several experimental and theoretical observations [9, 2933]. Crucially, whenever the storage is eliminated from our model, habituation cannot take place, highlighting its key role in driving the observed dynamics.

In Figure 4a, we show that habituation and the change in the storage, Δ〈S〉, are deeply related to one another. The more 〈S〉 relaxes between two consecutive signals, the less the readout population reduces its activity. This ascribes to the storage population the role of an effective memory and highlights its dynamical importance for habituation. Moreover, the dependence of the storage dynamics on the interval between consecutive signals, ΔT, influences the information gain as well. Indeed, upon increasing ΔT, we observe a decrease of the mutual information on the next stimulus (Figure 4b), and the system needs to produce a larger number of readout units upon the new input. In the Supplementary Information, we further analyze the impact of different signal and pause durations.

The role of memory in shaping habituation. (a) The system response depends on the waiting time ΔTpause between two external signals. As ΔTpause increases, the storage decays and thus memory is lost (green); consequently, the habituation of the readout population decreases (yellow). (b) As a consequence, the information IU,H that the system has on the field H when the new stimulus arrives decays as well. Model parameters for this figure are β = 2.5 and σ = 0.5 (in units of energy), and as specified in the Methods.

We remark here that the proposed model is fully Markovian in its microscopic components, and the memory that governs readout habituation emerges spontaneously from the interplay among the internal timescales. In particular, recent works have highlighted that the storage needs to evolve on a slower timescale, comparable to that of the external input, in order to generate information in the receptor and in the readout [64]. In our model, an instantaneous negative feedback implemented directly by U (bypassing the storage mechanism) leads to no time-dependent modulation (see Supplementary Information). Conversely, a readout population evolving on a timescale comparable to that of the signal cannot effectively mediate the negative feedback on the receptor, since its population would increase (as the storage does in the complete model) rather than habituate (see Supplementary Information). Thus, the negative feedback has to be implemented by a separate degree of freedom evolving on a slow timescale.

Minimal features of neural habituation

In neural systems, habituation is typically measured as a progressive reduction of the stimulus-driven neuronal firing rate [14, 15, 52, 53, 55]. To test whether our minimal model can be used to capture the typical neural habituation dynamics, we measured the response of zebrafish larvae to repeated looming stimulations via volumetric multiphoton imaging [67]. From a whole-brain recording of ≈ 55000 neurons, we extracted a subpopulation of ≈ 2400 neurons in the optic tectum with a temporal activity profile that is most correlated with the stimulation protocol (see Methods).

Our model can be extended to qualitatively reproduce some features of the progressive decrease in neuronal response amplitudes. We identify each readout unit with a subpopulation of binary neurons. A fraction of neurons is randomly turned on each time a readout unit is activated (see Methods). We tune the model parameters to have a number of total active neurons at the first stimulus comparable to the experimental setting. Moreover, we set the pause and stimulus durations in line with the typical timescales of the looming stimulation. We choose the model parameters β and σ in such a way that the system operates close to the peak of information gain and the activity decrease over time is comparable to that in the experimental data (see Supplementary Information). In this way, we can focus on the effects of the storage and feedback mechanisms without modeling further biological details. The patterns of the model-generated activity are remarkably similar to the experimental ones (see Figure 5a). A 2-dimensional embedding of the data via PCA (explained variance ≈ 70%) reveals that the evoked neural response is described by the first principal direction, while habituation is reflected in the second (Figure 5b). Remarkably, as we see in Figure 5c, this is the case in our minimal model as well, although the neural response is replaced by the switching on/off dynamics of the input.

Discussion

In this work, we considered a minimal architecture that serves as a microscopic and archetypal description of sensing processes across biological scales. Informed by theoretical and experimental observations, our model includes three fundamental mechanisms: a receptor, a readout population, and a storage mechanism that drives negative feedback. We have shown that our model robustly displays habituation under repeated external inputs, a widespread phenomenon in both biochemical and neural systems. We find a regime of parameters with maximal information gain, where habituation drives an increase in the mutual information between the external input and the system’s response. Remarkably, the system can spontaneously tune to this region of parameters if it enforces an information-dissipation trade-off. In particular, optimal systems lie at the onset of habituation, characterized by intermediate levels of activity reduction, as both too-strong and too-weak negative feedback are detrimental to information gain. Our results suggest that the functional advantages of the onset of habituation are rooted in the interplay between energy dissipation and information gain.

Habituation in zebrafish larvae. (a) Normalized neural activity profile in a zebrafish larva in response to the repeated presentation of visual stimulation, and comparison with the fraction of active neurons 〈nact〉 = 〈Nact〉/N in our model with stochastic neural activation (see Methods). Stimuli are indicated with colored dots, from blue to red as time increases. (b) PCA of experimental data reveals that habituation is captured mostly by the second principal direction, while features of the evoked neural response are captured by the first one. Different colors indicate responses to different stimuli. (c) PCA of simulated neural activations. Although we cannot capture the dynamics of the evoked neural response with a switching input, the core features of habituation are correctly captured along the second principal direction. Model parameters are β = 4.5 and σ = 0.15 (in units of energy), and as in the Methods, so that the system is tuned to the onset of habituation.

Although minimal, our model can capture basic features of neural habituation, where it is generally accepted that inhibitory feedback mechanisms modulate the stimulus weight [54]. Remarkably, recent works reported the existence of a separate inhibitory neuronal population whose activity increases during habituation [15]. Our model suggests that this population might play the role of a storage mechanism, allowing the system to habituate to repeated signals. However, in neural systems, a prominent role in encoding both short- and long-term information is also played by synaptic plasticity [68, 69] as well as by memory molecules [4244], at a biochemical level. A comprehensive analysis of how information is encoded and retrieved will most likely require all these mechanisms at once. Including an explicit connectivity structure with synaptic updates in our model may help in this direction, at the price of analytical tractability. Further, recent experiments also showed that by increasing the pause between two consecutive stimuli, the readout starts responding again, as theoretically predicted by our model [15]. Importantly, our framework allows us to formulate quantitative predictions of the system’s response to subsequent stimulation. In particular, the increase in pause durations will decrease the habituation strength, until a typical time at which habituation should disappear. Comparison with experiments by modulating the frequency and intensity of stimulation will help identify the model parameters characterizing the system under investigation and, as such, its information-theoretic performance. Overall, these results hint at the fact that our minimal architecture may lay the foundation of habituation dynamics across vastly different biological scales.

Extensions of these ideas are manifold. Other a priori optimization principles for the system should be considered, focusing in particular on more detailed and realistic molecular schemes. Upon these premises, the possibility of inferring the underlying biochemical structure from observed behaviors is a fascinating direction [49]. Furthermore, since we focused on repetitions of statistically identical signals, it will be fundamental to characterize the system’s response to diverse environments [70]. To this end, incorporating multiple receptors or storage populations may be needed to harvest information in complex conditions. In such scenarios, correlations between external signals may help reduce the encoding effort as, intuitively, S is acting as an information reservoir for the system. Moreover, such stored information might be used to make predictions on future stimuli and behavior, even if the detailed biological implementation of this complex task is still to be explored [5658]. Indeed, living systems do not passively read external signals but often act upon the environment. Both storage mechanisms and their associated negative feedback will remain core modeling ingredients, paving the way to understanding how this encoded information guides learning, predictions, and decision-making, a paramount question in many fields.

Our work serves as a fundamental framework for these ideas. On the one hand, it encapsulates key ingredients to support habituation while still being minimal enough to allow for analytical treatment. On the other hand, it may help the experimental quest for signatures of these physical ingredients in a variety of systems. Ultimately, our results show how habituation, a ubiquitous phenomenon taking place at strikingly different biological scales, may stem from an information-based advantage, shedding light on the optimization principle underlying its emergence and its relevance for any biological system.

Acknowledgements

G.N., S.S., and D.M.B. acknowledge Amos Maritan for fruitful discussions. D.M.B. thanks Paolo De Los Rios for insightful comments. G.N. and D.M.B. acknowledge the Max Planck Institute for the Physics of Complex Systems for hosting G.N. during the initial stage of this work.

Methods

Model parameters.

The system is driven out of equilibrium by both the external field and the storage inhibition through the receptor dynamics, whose dissipation per unit temperature is δQR. The energetic barrier (V − cr) fixes the average values of the readout population in the passive and active states, namely 〈U〉P = e−βV and 〈U〉A = e−β(V−c) (see Eq. (2)), and κ controls the effectiveness of the storage in inhibiting the receptor’s activation. We assume that, on average, the activation rate due to the field is balanced by the feedback of a fraction α = 〈S〉/NS of the storage population,

so that we only need to fix α. Moreover, ΔE = 1, 〈U〉A = 150, 〈U〉P = 0.5, NS = 25, and α = 2/3. We remark that the emerging features of the model are independent of the specific choice of these parameters. They affect the number of active units at each time step, but all the results presented here on the information gain during habituation remain valid. Furthermore, we typically consider the average of the exponentially distributed signal to be 〈H〉max = 10 and 〈H〉min = 0.1 (see Supplementary Information for details).

Overall, we are left with β and σ as free parameters. β quantifies the amount of thermal noise in the system, and at small β the thermal activation of the receptor hinders the effect of the field and makes the system almost unable to process information. Conversely, if β is high, the system must overcome large thermal inertia, increasing the dissipative cost. In this regime of weak thermal noise, we expect that, given a sufficient amount of energy, the system can effectively process information.

Timescale separation.

We solve our system in a timescale separation framework [64, 71, 72], where the storage evolves on a timescale that is much slower than all the other internal ones, i.e.,
τU ≪ τR ≪ τS ∼ τH .
The fact that τS is the slowest timescale at play is crucial to making the storage act as an information reservoir. This assumption is also compatible with biological examples. The main difficulty arises from the presence of the feedback, i.e., the field influences the receptor and thus the readout population, which in turn impacts the storage population and finally changes the deactivation rate of the receptor (schematically, H → R → U → S → R), where the causal order does not reflect the temporal one.

We start with the master equation for the propagator P(u, r, s, h, t|u0, r0, s0, h0, t0),
∂tP = [ŴU/τU + ŴR/τR + ŴS/τS + ŴH/τH] P .
We rescale the time by τS and introduce two small parameters to control the timescale-separation analysis, ε = τU/τR and δ = τR/τH. Since τS/τH = O(1), we set it to 1 without loss of generality. We then write P = P(0) + εP(1) and expand the master equation to find P(0) = pst(u|r)Π, with pst(u|r) the stationary readout distribution at fixed receptor state. We obtain that Π obeys the following equation:

Yet again, Π = Π(0) + δΠ(1) allows us to write, at order O(δ−1), Π(0) = πst(r|s,h)F(s,h,t|s0,h0,t0), where πst is the stationary receptor distribution at fixed storage and field. Expanding first in ε and then in δ sets a hierarchy among timescales. Crucially, due to the feedback present in the system, we cannot solve the next order explicitly to find F. Indeed, after a marginalization over r, we find, at order O(1), that the evolution operator for F depends manifestly on s, and the equation cannot be self-consistently solved. To tackle the problem, we first discretize time, considering a small interval, i.e., t = t0 + Δt with Δt ≪ τU, so that the readout entering the storage rates can be frozen at its initial value u0. We thus find F(s, h, t|s0, h0, t0) = P(s, t|s0, t0)PH(h, t|h0, t0) in the domain t ∈ [t0, t0 + Δt], since H evolves independently of the system (see also the Supplementary Information for the analytical steps).

Iterating the procedure for multiple time steps, we end up with a recursive equation for the joint probability pU, R, S, H (u, r, s, h, t0 + Δt). We are interested in the following marginalization

where P(s0, t → s, t + Δt) is the propagator of the storage at fixed readout. This is the Chapman-Kolmogorov equation in the timescale-separation approximation. Notice that this solution requires the knowledge of pU,S at the previous time-step, and it has to be solved iteratively.
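
The iteration can be written compactly once the fast conditional distributions are specified. In the sketch below, the Poisson conditionals and the values 〈U〉A = 150, 〈U〉P = 0.5 come from the Methods, while the receptor pathway rates, the storage birth rate, and all remaining numerical values (dE, κ, the signal grid) are our own illustrative assumptions:

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

beta, sigma, dE, kappa = 3.0, 0.6, 1.0, 1.2   # illustrative values
UA, UP, NS, NU = 150.0, 0.5, 25, 300          # <U>_A, <U>_P as in the Methods
h_grid = np.linspace(0.05, 60.0, 120)         # discretized signal support

def p_u_st(r):
    """Poisson stationary readout distribution at fixed receptor state."""
    return poisson.pmf(np.arange(NU), UA if r else UP)

def pi_active(s):
    """Stationary receptor activation vs. h at fixed s (assumed pathway rates)."""
    w_on = np.exp(-beta * (dE - h_grid))
    w_off = np.exp(-beta * (dE - kappa * s / NS))
    return w_on / (w_on + w_off)

def storage_generator(u0):
    """Generator W[s', s] of the storage birth-death process at frozen readout u0."""
    W = np.zeros((NS + 1, NS + 1))
    for s in range(NS):
        W[s + 1, s] = u0 * np.exp(-beta * sigma)   # first-order catalytic birth (assumed)
    for s in range(1, NS + 1):
        W[s - 1, s] = s                            # death
    return W - np.diag(W.sum(axis=0))

def ck_step(p_us, p_h, dt):
    """One iteration of the timescale-separated Chapman-Kolmogorov equation."""
    # slow part: propagate the storage at frozen readout u0, then marginalize u0
    p_s = np.zeros(NS + 1)
    for u0 in range(p_us.shape[0]):
        if p_us[u0].any():
            p_s += expm(storage_generator(u0) * dt) @ p_us[u0]
    # fast part: U relaxes to its conditional steady state given s and the signal
    p_new = np.zeros_like(p_us)
    for s in range(NS + 1):
        pa = float((pi_active(s) * p_h).sum())     # p_h: weights on h_grid, summing to 1
        p_new[:, s] = (pa * p_u_st(1) + (1 - pa) * p_u_st(0)) * p_s[s]
    return p_new
```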

Explicit solution for the storage propagator.

To find a numerical solution to our system, we first need to compute the propagator P(s0, t0 → s,t). Formally, we have to solve the master equation
∂tP(s0 → s) = ŴS(u0) P(s0 → s) ,
where we used the shorthand notation P(s0 → s) = P(s0, t0 → s, t). Since our formula has to be iterated over small time-steps, i.e., t − t0 = Δt ≪ 1, we can write the propagator as follows
P(s0 → s) = Σν a(ν) wν(s) eλν(t−t0) ,
where wν and λν are respectively the eigenvectors and eigenvalues of the transition matrix ŴS(u0),

and the coefficients a(ν) are fixed by the initial condition P(s0 → s)|t=t0 = δs,s0.

Since the eigenvalues and eigenvectors of ŴS(u0) might be computationally expensive to find, we employ a further simplification. As Δt → 0, we can restrict the matrix to jumps to the n-th nearest neighbors of the initial state (s0, t0), assuming that all other states are left unchanged over small time intervals. We take n = 2 and check the accuracy of this approximation against the full solution for a limited number of time-steps.
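
A cheap implementation of this truncation, mirroring the eigen-decomposition just described: diagonalize the generator restricted to a window of n neighbors around s0 (this sketch reuses storage_generator and NS from the previous snippet, and neglects the small probability leaking outside the window over Δt):

```python
def local_propagator(s0, u0, dt, n=2):
    """Short-time storage propagator P(s0 -> s) truncated to n nearest neighbors."""
    lo, hi = max(0, s0 - n), min(NS, s0 + n)
    idx = np.arange(lo, hi + 1)
    W = storage_generator(u0)[np.ix_(idx, idx)]          # truncated generator
    lam, vec = np.linalg.eig(W)                          # eigenvalues / eigenvectors
    a = np.linalg.solve(vec, (idx == s0).astype(float))  # match P(t0) = delta_{s, s0}
    return idx, ((vec * np.exp(lam * dt)) @ a).real      # probabilities on the window
```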

Mean-field relationship.

We note that 〈U〉 and 〈S〉 satisfy the following mean-field relationship:

where f0(x) is an analytical function of its argument (see Supplementary Information). Eq. (S1) clearly states that only the fraction of active storage units is relevant to determining the habituation dynamics.

Mutual information.

Once we have pU(u, t) (obtained by marginalizing pU,S over s) for a given pH(h, t), we can compute the mutual information
IU,H(t) = H[pU](t) − ∫ dh pH(h, t) H[pU|H](t) ,
where H is the Shannon entropy. For the sake of simplicity, we consider that the external field follows an exponential distribution, pH(h, t) = λ(t)e−λ(t)h. Notice that, in order to determine this quantity, we need the conditional probability pU|H(u, t). In the Supplementary Information, we show how all the necessary joint and conditional probability distributions can be computed from the dynamical evolution derived above.

We also highlight here that the timescale separation implies IS, H = 0, since
pS,H(s, h, t) = pS(s, t) pH(h, t) .
Although it may seem surprising, this is a direct consequence of the fact that S is only influenced by H through the stationary state of U. Crucially, the presence of the feedback is still fundamental to promote habituation. Indeed, we can always write the mutual information between the field H and the readout U and storage S together as I(U,S),H = ΔIf + IU,H, where ΔIf = I(U,S),H − IU,H = I(U,H),S − IU,S. Since ΔIf > 0 (by standard information-theoretic inequalities), the storage increases the information that the two populations together have on the external field. Overall, although S and H are independent in this limit, the feedback is paramount in shaping how the system responds to the external field and stores information about it.

Dissipation of internal processes.

The production of readout, u, and storage, s, molecules requires energy. From the modeling of their dynamics as controlled stochastic birth-and-death processes, we quantify the dissipation into the environment using the environmental contribution of the Schnakenberg entropy production, which is also the only one that survives at stationarity [73]. We have:

where we indicated all possible dependencies in the joint probability distribution. By employing the timescale separation [71], and noting that Γu→u±1 do not depend on s, we finally have:

As this quantity decreases during habituation, the system tends to dissipate less and less into the environment to produce the internal populations that are required to encode and store the external signal.
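
The environmental (Schnakenberg) term takes a simple generic form for a one-dimensional birth-and-death channel. The sketch below implements that generic formula; applying it to the U and S channels requires the rate forms, which the snippets above only assume:

```python
import numpy as np

def entropy_flow(p, birth, death):
    """Schnakenberg environmental term sum_n J_n ln(ratio) for a birth-death channel.
    p[n]: occupation; birth[n]: rate n -> n+1; death[n]: rate n -> n-1."""
    q = 0.0
    for n in range(len(p) - 1):
        if birth[n] > 0 and death[n + 1] > 0:
            flux = p[n] * birth[n] - p[n + 1] * death[n + 1]  # net current n -> n+1
            q += flux * np.log(birth[n] / death[n + 1])
    return q

# hypothetical usage for the storage channel at frozen readout u0, with the
# assumed rates of the earlier sketches:
# q_S = entropy_flow(p_s, birth=u0 * np.exp(-beta * sigma) * np.ones(NS + 1),
#                    death=np.arange(NS + 1))
```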

Pareto optimization.

We perform a Pareto optimization at stationarity in the presence of a prolonged stimulation. We seek the optimal values of (β, σ) by maximizing the functional
Fγ(β, σ) = γ ĨU,H − (1 − γ) δQ̃R ,
where γ ∈ [0,1] and the tildes denote normalization. Hence, we maximize the information between the readout and the field while simultaneously minimizing the dissipation of the receptor induced by both the signal and the feedback process, as discussed in the main text. The values are normalized since, in principle, they can span different orders of magnitude. In the Supplementary Information, we detail the derivation of the Pareto front and show that qualitatively similar results can be obtained for a 3-d Pareto-like surface obtained by also maximizing the feedback information, ΔIf.

Recording of whole brain neuronal activity in zebrafish larvae.

Acquisitions of the zebrafish brain activity were carried out in a single Elavl3:H2BGCaMP6s larva at 5 days post fertilization, raised at 28°C on a 12 h light/12 h dark cycle, according to the approval by the Ethical Committee of the University of Padua (61/2020 dal Maschio). The subject was embedded in 2 percent agarose gel, and brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Data were acquired at 30 frames per second, covering an effective field of view of about 450 × 900 μm with a resolution of 512 × 1024 pixels. The volumetric module acquires a volume of about 180–200 μm in thickness, encompassing 30 planes separated by about 7 μm, at a rate of 1 volume per second, sufficient to track the slow dynamics associated with the fluorescence-based activity reporter GCaMP6s. Visual stimulation was presented in the form of a looming stimulus with 150 s intervals, centered on the fish eye (see Supplementary Information). Neuron identification and anatomical registration were performed as described in [67].

Data analysis.

The acquired temporal series were first processed using an automatic pipeline, including motion-artifact correction, temporal filtering with a 3 s rectangular window, and automatic segmentation. The obtained dataset was manually curated to resolve segmentation errors or to integrate cells not detected automatically. We fit the activity profiles of about 55000 cells with a linear regression model using a set of basis functions representing the expected responses to each stimulation event. These basis functions were obtained by convolving the exponentially decaying kernel of the GCaMP signal lifetime with square waveforms characterizing the presentation of the corresponding visual stimulus. The resulting score coefficients of the fit were used to extract the cells whose score fell within the top 5% of the distribution, resulting in a population of ≈ 2400 neurons whose temporal activity profiles correlate most with the stimulation protocol. The resulting fluorescence signals F(i) were processed by removing a moving baseline to account for baseline drift and fast oscillatory noise [74]. See Supplementary Information.
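
As a rough illustration, a moving baseline can be removed with a running low percentile; the window length and percentile below are hypothetical choices, and the actual pipeline follows [74] and the Supplementary Information:

```python
import numpy as np

def dff(F, win=301, q=10):
    """dF/F0 with a moving baseline: F0 is a running low percentile of the raw trace."""
    half = win // 2
    F0 = np.array([np.percentile(F[max(0, i - half): i + half + 1], q)
                   for i in range(len(F))])
    return (F - F0) / F0
```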

Model for neural activity.

Here, we describe how our framework is modified to mimic neural activity. Each readout unit, u, is interpreted as a population of N neurons, i.e., a region dedicated to the sensing of a specific input. Storage can be implemented, for example, as an inhibitory neural population, in line with recent observations [15]. When a readout population is activated at time t, each of its N neurons fires with probability p. We set N = 20 and p = 0.5. N was chosen so that data and simulations contain the same number of observed neurons, while p only controls the dispersal of the points in Figure 5c and thus does not alter the main message. The dynamics of each readout unit follows our dynamical model. Due to habituation, some of the readout units activated by the first stimulus will not be activated by subsequent stimuli. Although the evoked neural response cannot be captured by this extremely simple model, its archetypal ingredients (dissipation, storage, and feedback) are informative enough to reproduce the low-dimensional habituation dynamics found in experimental data.
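
A minimal sketch of this construction, with N = 20 and p = 0.5 as in the text; the raster layout and the PCA via SVD are our own implementation choices, and active_units is a hypothetical input listing which readout units the model activates at each stimulus:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p_fire = 20, 0.5   # neurons per readout unit and firing probability

def raster(active_units, n_units):
    """Binary activity per stimulus: each active readout unit fires each of its
    N neurons independently with probability p_fire."""
    X = np.zeros((len(active_units), n_units * N))
    for t, act in enumerate(active_units):
        for u in act:
            X[t, u * N:(u + 1) * N] = rng.random(N) < p_fire
    return X

def pca_embed(X, k=2):
    """Project the stimulus-by-neuron matrix onto its first k principal directions."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

# hypothetical usage: active_units[t] lists the readout units active at stimulus t,
# as produced by the habituating model; embedding = pca_embed(raster(active_units, 50))
```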

S1. Detailed Solution of the Master Equation

Consider the transition rates introduced in the main text:

We set a reflective boundary for the storage at s = NS, corresponding to the maximum number of storage molecules in the system. Retracing the steps of the Methods, the master equation governing the evolution of the propagator of all variables, P(u, r, s, h, t|u0, r0, s0, h0, t0), is:

We solve this equation employing a timescale separation, i.e., τU ≪ τR ≪ τS ∼ τH, where τX is the typical timescale of species X, for X = U, R, S, and τH is the typical timescale of the signal dynamics. Motivated by several biological examples, we assumed that the readout population undergoes the fastest dynamics, while storage and signal evolution are the slowest ones. Defining ε = τU/τR and δ = τR/τH, and setting τS/τH = 1 without loss of generality, we have:

We propose a solution in the following form, P = P(0) + εP(1). By inserting this expression in the equation above, and solving order by order in ε, at order ε−1, we have that:
P(0)(u, r, s, h, t|·) = pst(u|r) Π(r, s, h, t|·) ,
where pst solves the master equation for the readout evolution at a fixed r:

with α(r) = e−β(V −cr). Hence,
pst(u|r) = e−α(r) α(r)u/u! .
At order ε0, we find the equation for Π, also reported in the Methods:

To solve this equation, we propose a solution of the form Π = Π(0) + δΠ(1). Hence, again, at order δ−1, we have that Π(0) = πst(r|s,h)F(s,h,t|s0,h0,t0), where πst satisfies the steady-state equation for the fastest remaining degree of freedom, with all the others fixed. In this case, it is just the solution of the rate equation for the receptor:

where the effective rates combine the two transition pathways, and the same holds for the reverse reaction. At the next order, we have an equation for F:

As already explained in the Methods, due to the feedback, this equation cannot be solved explicitly. Indeed, the operator governing the evolution of F is:

where ū(s, h) denotes the readout average entering ŴS, and we used the linearity of ŴS(u). In order to solve this equation, we shall assume that ū(s, h) = u0, bearing in mind that this approximation holds if t is small enough, i.e., t = t0 + Δt with Δt ≪ τU. Therefore, for a small interval, we have:

Overall, we end up with the following joint probability of the model at time t0 + Δt:

where ∫ dh0 PH(h, t0 + Δt|h0, t0) pU,S,H(u0, s0, h0, t0) = pU,S(u0, s0, t0) pH(h, t0 + Δt), since H at time t0 + Δt is independent of S and U. When propagating the evolution through intervals of duration Δt, we also assume that H evolves independently, since it is an external variable, while it affects the evolution of the other degrees of freedom. This structure is reflected in the equation above. For simplicity, we prescribe pH(h, t) to be an exponential distribution, pH(h, t) = λ(t)e−λ(t)h, and solve Eq. (S12) iteratively from t0 to a given T in steps of duration Δt, as indicated above. This complex iterative solution arises from the timescale separation because of the cyclic feedback structure: {S, H} → R → U → S. This solution corresponds explicitly to

where P(s0, t → s, t + Δt) is the propagator of the storage at fixed readout. This is the Chapman-Kolmogorov equation in the timescale-separation approximation. Notice that this solution requires the knowledge of pU,S at the previous time-step and has to be solved iteratively. Both pU and pS can be obtained by an immediate marginalization.

As detailed in the Methods, the propagator P(s0, t0 → s, t), when restricted to small time intervals, can be obtained by solving the birth-and-death process for storage molecules at fixed readout, limiting the state space to the n nearest neighbors (we checked that our results are robust upon increasing n for the selected simulation time step).

S2. Information-Theoretic Quantities

By direct marginalization of Eq. (S13), we obtain the evolution of pU (u, t) and pS (s, t) for a given pH (h, t). Hence, we can compute the mutual information as follows:

where H[pX] is the Shannon entropy of X, and Δ𝕊U is the reduction in the entropy of U due to repeated measurements (see main text). Notice that, in order to determine this quantity, we need the conditional probability pU|H(u, t). This distribution represents the probability that, at a given time, the system is found at a value u in the presence of a given field h. In order to compute it, we can write

by definition. The only dependence on h enters in πst through the eβh dependence of the rates.

Analogously, all the other mutual information terms can be obtained. Notably, as we showed in the Methods, IS,H = 0 as a consequence of the timescale separation. Crucially, the presence of the feedback is still fundamental to effectively process information about the signal. This effect can be quantified through ΔIf = I(U,S),H − IU,H > 0, which we name feedback information, as it captures how much the knowledge of S and U together helps in encoding information about the signal with respect to U alone. In terms of the system entropy, we equivalently have:

which highlights how much the effect of S (the feedback) reduces the entropy of the system due to repeated measurements.

Practically speaking, in order to evaluate I(U,S),H, we exploit the following equality:

for which we need pU,S|H, which can be found by noting that

from which we immediately see that

which we can easily compute at any given time t.

S3. Mean-Field Relation Between Average Readout and Storage

Fixing all model parameters, the average values of the storage, 〈S〉, and readout, 〈U〉, are numerically determined by iteratively solving the system, as shown above. However, an analytical relation between these two quantities can be found starting from the definition of 〈U〉:

Then, inserting the expression for the stationary probability that we know analytically:

where the prefactor has a complicated expression involving the hypergeometric function 2F1, in terms of the model parameters and only the concentration of S, ρS = s/NS (the explicit derivation of this formula is not shown here). Then, we have:

Since we do not have an analytical expression for the joint averages, we employ the mean-field approximation, reducing all correlation functions to products of averages:

This clearly shows that, given a set of model parameters, 〈U〉 and the average concentration of storage, 〈S〉/NS, are related. In particular, introducing the change of parameters presented in the Methods, we have the following collapse:

where 〈U〉A and 〈U〉P are respectively the averages of U at fixed r = 1 (active receptor) and r = 0 (passive receptor). It is also possible to perform an expansion of f0 that numerically proves to be very precise:

Since all these relations depend only on the average concentration of the storage, it is natural to ask what happens when the storage size NS is changed. Fixing all the remaining parameters, both 〈U〉 and 〈S〉 will change, still satisfying the mutual relation presented above. Consider the stationary solution that has the same concentration of S. As a consequence of the scaling relation, the readout average is rescaled accordingly. Considering 〈U〉P ≈ 0 in both settings, we can ask what is the factor γ relating the two readout averages. Since u enters only linearly in the dynamics of the storage, and the mutual relation depends only on the concentration of S, we expect γ = 1/n when NS → nNS, as numerically observed. As stated in the main text, we can finally conclude that the storage concentration is the most relevant quantity in our model to determine the effect of the feedback and characterize the dynamical evolution. This observation makes our conclusions more robust, as they do not depend on the specific choice of the storage reservoir, since there always exists a scaling relation connecting 〈U〉 and 〈S〉. As such, changing the values of the model parameters we fixed will only affect the number of active molecules without modifying the main results presented in this work.

S4. The Necessity of Storage

Here, we discuss in detail the necessity of a slow storage implementing the negative feedback for habituation to occur. We first investigate the possibility that the negative feedback, necessary for any kind of habituative behavior, is implemented directly through the readout population undergoing a fast dynamics. We analytically show that this limit leads to the absence of habituation, hinting at the necessity of a slow dynamical feedback in the system (Sec. S4.1). Then, we study the scenario in which U applies the feedback, bypassing the storage S, but acts as a slow variable. Solving the master equation through our iterative numerical method, we show that, also in this case, habituation disappears (Sec. S4.2). These results suggest not only that the feedback must be applied by a slow variable, but also that such a slow variable must play a role different from the readout population, in line with recent observations in neural systems [15]. The model proposed in the main text is indeed minimal in this respect, as well as compatible with biological examples.

1. Dynamical feedback cannot be implemented by a fast readout

If the storage is directly implemented by the readout population, the transition rates get modified as follows:

At this level, θ is a free parameter playing the same role as κ/NS in the complete model with the storage. We start again from the master equation for the propagator P(u, r, h, t|u0, r0, h0, t0):

where τU ≪ τR ≪ τH, since we are assuming, as before, that U is the fastest variable. Here, ε = τU/τR and δ = τR/τH. Notice that now ŴR also depends on u. We can solve the system again by resorting to a timescale separation and scaling the time by the slowest timescale, τH. We have:

We now expand the propagator at first order in ε, P = P(0) + εP(1). Then, the order ε−1 of the master equation gives, as above, P(0) = pst(u|r)Π(r, h, t|r0, h0, t0). At order ε0, Eq. (S28) leads to

To solve this, we expand the propagator as Π = Π(0) + δΠ(1) and, at order δ−1, we obtain:

This is a 2 × 2 effective matrix acting on Π(0), where the only rate affected by u is the deactivation one, which multiplies the active state, r = 1. This equation can be solved analytically, and the solution of Eq. (S30) is:

with log(Θ) = eβ(V−c)(eβθ − 1). Clearly, the solution does not depend on u, since we summed over the fast variable. Continuing the computation, at order δ0, we obtain:

so that the full propagator is:

From this expression, we can find the joint probability distribution, following the same steps as before:

As expected, since U relaxes instantaneously, the feedback is instantaneous as well. As a consequence, the time-dependent behavior of the system is solely driven by the external field H, with a fixed amplitude that takes into account the effect of the feedback only on average. This means that there will be no dynamic reduction of activity and, as such, no habituation in this scenario. This was somewhat expected, since all variables are faster than the external field and, as a consequence, the feedback cannot be implemented over time. The first conclusion is that the variable implementing the feedback has to evolve together with H.

2. Effective dynamical feedback requires an additional population

We now assume that the feedback is, again, implemented by U, but that it acts as a slow variable. Formally, we take τR ≪ τU ∼ τH. Rescaling the time by the slowest timescale, τH (the same works for τU), we have:

with ε = τR/τH. We now expand the propagator at first order in ε, P = P(0) + εP(1). Then, the order ε−1 of the master equation is simply ŴRP(0) = 0, whose solution gives the stationary receptor distribution at fixed u and h. At order ε0:

The only dependence on r in ŴU(r) is through the production rate of U. Indeed, the effective transition matrix governing the birth-and-death process of readout molecules is characterized by:

This rate depends only on h, but h evolves in time. Therefore, we should scan all possible (infinitely many) values that h takes and build an infinite-dimensional transition matrix. In order to solve the system, imagine that we are looking at the interval [t0, t0 + Δt]. Then, we can employ the following approximation if Δt ≪ τH:

Using this simplification, we need to solve the following equation:

The explicit solution in the interval t ∈ [t0, t0 + Δt] is:

with PU the associated propagator. The full propagator at time t0 + Δt is then:

Integrating over the initial conditions, we finally obtain:

To numerically integrate this equation, we make two approximations. First, we solve the dynamics in all intervals in which the field does not evolve, where PH is a delta function peaked at the initial condition. For the time points at which the field changes, this amounts to considering the field at the previous instant, a good approximation as long as Δt ≪ τH, particularly when the time dependence of the field is a square wave, as in our case.

The second approximation is to compute the propagator of PU. As explained in the Methods of the main text, we restrict our computation to the transitions between n nearest neighbors in the U space. In the case of transitions only among next-nearest neighbors, we have the following dynamics:

with the transition matrix:

where the diagonal is fixed to satisfy the conservation of normalization, as usual. The solution is:

where wν and λν are respectively eigenvectors and eigenvalues of the transition matrix Wnn. The coefficients a(ν) have to be evaluated according to the condition at time t0:

where δu,u0 is the Kronecker delta. To evaluate the information content of this model, we also need:

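To make the spectral solution above concrete, here is a minimal, self-contained Python sketch: it builds a frozen tridiagonal transition matrix (with illustrative rates, not the model's), diagonalizes it, and fixes the coefficients a(ν) from the Kronecker-delta initial condition:

```python
import numpy as np

N = 50
W = np.zeros((N + 1, N + 1))
for u in range(N):
    W[u + 1, u] = 2.0                  # hypothetical birth rate u -> u+1
    W[u, u + 1] = 0.5 * (u + 1)        # hypothetical death rate u+1 -> u
W -= np.diag(W.sum(axis=0))            # diagonal fixed by normalization

lam, vec = np.linalg.eig(W)            # lambda_nu and w_nu of W_nn
u0 = 0
p0 = np.zeros(N + 1)
p0[u0] = 1.0                           # P(t0) = Kronecker delta at u = u0
a = np.linalg.solve(vec, p0)           # coefficients a^(nu)

def P(t):
    """Distribution at time t0 + t; imaginary parts vanish up to round-off."""
    return np.real(vec @ (a * np.exp(lam * t)))

print("normalization at t = 2:", P(2.0).sum())
```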
In Figure S6 we show that, in this model, U does not display habituation. Rather, it increases upon repeated stimuli, acting as the storage does in the main text. On the other hand, the probability of the receptor being active does habituate. This suggests that habituation can only occur in fast variables modulated by slow ones.

Figure S6. Dynamics of a system where U evolves on the same timescale as H and directly implements a negative feedback on the receptor. In this model, 〈U〉 (red) increases upon repeated stimulation rather than decreasing, responding to changes in 〈H〉 (gray) as the storage of the full model does. On the other hand, the probability of the receptor being active, pR(r = 1) (black), shows signs of habituation.

It is straightforward to understand intuitively why a direct feedback from U, with this population undergoing slow dynamics, cannot lead to habituation. Indeed, at a fixed distribution of the external signal, the stationary solution for 〈U〉 already takes into account the effect of the negative feedback. Hence, if the system starts with a very low readout population (no signal), the dynamics induced by a switching signal can only bring 〈U〉 toward its steady state, with intervals in which the population grows and intervals in which it decreases. Naively speaking, the dynamics of 〈U〉 becomes similar to that of the storage in the complete model, since it plays the same role of storing information in this simplified context.

S5. Robustness of Optimality

1. Effects of the external signal strength and thermal noise level

In the main text, for analytical ease, we take the environment to be an exponentially distributed signal,
$$p(h) = \lambda\, e^{-\lambda h}, \qquad h \ge 0,$$
where λ is its inverse characteristic scale. In particular, we describe the case in which no signal is present by setting λ to be large, so that the typical realizations of H would be too small to activate the receptors. On the other hand, when λ is small, the values of h appearing in the rates of the model are large enough to activate the receptor and thus allow the system to sense the signal.

In the dynamical case, we take λ(t) to be a square wave, so that 〈H〉 = 1/λ alternates between two values, 〈H〉min and 〈H〉max. We denote by Ton the duration of the 〈H〉max phase, and by Toff that of the 〈H〉min phase. In practice, this signal mimics an on-off dynamics, where the stochastic signal is effectively present only when its average is large enough, i.e., equal to 〈H〉max. In the main text, we take 〈H〉min = 0.1 and 〈H〉max = 10, with Ton = Toff = 100Δt.
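A short Python sketch of this on-off protocol, with the values quoted above, may help fix ideas; the step count and seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
H_min, H_max = 0.1, 10.0
T_on = T_off = 100                       # durations in units of dt

def mean_H(step):
    """<H> = 1/lambda at a given step of the square-wave protocol."""
    return H_max if step % (T_on + T_off) < T_on else H_min

# At each step, the stochastic signal h is drawn from the current
# exponential distribution with mean mean_H(step).
steps = 4 * (T_on + T_off)
h = np.array([rng.exponential(mean_H(s)) for s in range(steps)])
print("empirical means (on/off):",
      h[:T_on].mean(), h[T_on:T_on + T_off].mean())
```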

In Figure S7a, we study the behavior of the model in the presence of a static exponential signal with average 〈H〉. We focus on the case of low σ, so that the production of storage is favored. As 〈H〉 decreases, IU,H decreases as well. Hence, as expected, the information acquired through sensing depends on the strength of the external signal, which coincides with the energy input driving receptor activation. However, the system does not display emergent information dynamics, memory, and habituation for all parameters. In Figure S7b, we see that, when the temperature is low but σ is high, the system does not show habituation and ΔIU,H = 0. On the other hand, when thermal noise dominates (Figure S7c), the system produces a large readout population due to random thermal activation even when the external signal is small. These random activations mask the signal-driven ones, so the system does not effectively sense the external signal even when it is present, and IU,H remains small. It is important to recall here that, as shown in Figure 3b of the main text, at fixed σ, IU,H is not a monotonic function of β. This is because low temperatures typically favor sensing and habituation, but they also intrinsically suppress readout production. Thus, at high β, σ needs to be small for the system to effectively store information, since thermal noise is negligible. Vice versa, a small σ is detrimental at high temperatures, since the system then produces storage as a consequence of thermal noise. This complex interplay is captured by the Pareto optimization, which gives us an effective relation between β and σ that maximizes storage while minimizing dissipation.

Figure S7. Effects of the external signal strength and thermal noise level on sensing. (a) At fixed and low σ = 0.1, with a constant exponentially distributed signal of mean 〈H〉. As 〈H〉 decreases, the system captures less information and needs to operate at lower temperatures to sense the signal. In particular, as the temperature decreases, IU,H becomes larger. (b) In the dynamical case, outside the optimal surface, at high β and high σ, storage is not produced and thus no negative feedback is present. The system does not display habituation, and IU,H is smaller than inside the optimal surface (gray area). (c) In the opposite regime, at low β and σ, the system is dominated by thermal noise. As a consequence, the average readout 〈U〉 is high even when the external signal is absent (〈H〉 = 〈H〉min = 0.1), and the system captures only a small amount of information IU,H, which is masked by thermal activation. Other simulation parameters for this figure are 〈U〉A = 150, 〈U〉P = 0.5, β = 2/3, and = ΔE = g. For the dynamical case, Ton = Toff = 100Δt.

2. 3-dimensional Pareto-like surface

In line with the front derived above, it is possible to maximize both IU,H and ΔIf while minimizing δQR, in order to study the region of parameters where habituation spontaneously emerges. The idea here is to maximize all the features associated with the capability to process information, while maintaining minimal dissipation. Since, as expected, IU,H and ΔIf are not in trade-off (see Figure S8), we name the resulting optimal area a Pareto-like surface. In Figure S8a, we represent it in the three-dimensional feature space. Figures S8b-d represent the projection of this area (in gray) onto the parameter space, (β, σ). In what follows, we study the robustness of the optimality of our model by showing both the 2-dimensional Pareto front (see main text) and the 3-dimensional Pareto-like surface in the (β, σ) space. Indeed, as is immediately evident from what we show below, the two only slightly differ, and considering one or the other does not qualitatively change the results and conclusions of our work.
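Operationally, this kind of selection amounts to keeping the non-dominated points of the sampled feature space. The following Python sketch shows one standard way to do this; the feature samples are random placeholders standing in for the model's actual outputs on a (β, σ) grid:

```python
import numpy as np

rng = np.random.default_rng(1)
I_UH, dI_f, dQ_R = rng.random(500), rng.random(500), rng.random(500)

# Recast as a pure minimization problem over (-I_UH, -dI_f, dQ_R):
# maximizing information while minimizing receptor dissipation.
F = np.stack([-I_UH, -dI_f, dQ_R], axis=1)

def non_dominated(F):
    """Boolean mask of points not strictly dominated by any other point."""
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        if keep[i]:
            worse = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
            keep &= ~worse                 # drop points dominated by i
    return keep

mask = non_dominated(F)
print(mask.sum(), "points form the Pareto-like surface")
```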

Figure S8. Trade-off between energy dissipation and information with a 3D optimization including information feedback. (a) The optimal surface in the (IU,H, ΔIf, −δQR) space with a constant external field, obtained through a Pareto-like optimization. (b-d) The values of σ and β inside the optimal surface (gray area) maximize both the readout information, IU,H, and the information feedback of the storage population, ΔIf, while minimizing the dissipation of the receptor, δQR. (e) At the optimal (β, σ) (gray area, shown at a fixed value of σ ≈ 1.5), the average readout, 〈U〉, and storage, 〈S〉, attain intermediate values.

3. Static and dynamical optimality

In Figure S9, we plot the average readout population, 〈U〉, the average storage population, 〈S〉, the mutual information between them, IU,S, and the entropy production of the internal processes, as a function of β and σ, in the presence of a static field. The optimal values of β and σ, obtained by minimizing the Pareto-like functional

are such that both 〈U〉 and 〈S〉 attain intermediate values. We also show, for comparison, the 2-dimensional Pareto front derived in the main text (dashed black line). Thus, an excessively large or small production of readout and storage is detrimental to sensing. Similar considerations hold for the dissipation of the internal processes. Interestingly, the dependence between readout and storage, quantified by the mutual information IU,S, is not maximized at optimality. This suggests that an excessively strong negative feedback impedes information processing while promoting dissipation in the receptor, δQR, and is thus suboptimal.

Figure S9. Behavior of the average readout population, 〈U〉, the average storage population, 〈S〉, the mutual information between them, IU,S, and the entropy production of the internal processes, as a function of β and σ, in the presence of a static field. The gray area represents the Pareto-like optimal surface, while the dashed black line indicates the 2-dimensional Pareto front derived in the main text. The signal is exponentially distributed with inverse characteristic scale λ = 0.1, so that 〈H〉 = 10. Other simulation parameters are as in Figure S7.

In Figure S10, we study the dynamical behavior of the model under a repeated external signal, as in Figure 3f-h of the main text. In particular, given an observable O, we define its change under a repeated signal, ΔO, as the difference between the maximal response to the signal at large times and the maximal response to the first signal (Figure S10a). In Figure S10b, we see in particular that Δ〈S〉 is maximal in the region where the change in information feedback, ΔΔIf, is negative, suggesting that a strong habituation fueled by a large storage concentration is ultimately detrimental for information processing. Furthermore, in this region the entropy produced by the internal processes is maximal.
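For clarity, the definition of ΔO can be written as a one-line computation. In this Python sketch, `trace` and `onsets` are hypothetical placeholders for an observable's time series and the stimulus onset indices:

```python
import numpy as np

def delta_O(trace, onsets, T_on):
    """Maximal late-time response minus maximal response to the first signal."""
    first = trace[onsets[0]: onsets[0] + T_on].max()
    late = trace[onsets[-1]: onsets[-1] + T_on].max()
    return late - first

# Example: a habituating readout gives delta_O < 0.
t = np.arange(800)
onsets = np.arange(4) * 200
trace = sum(np.exp(-0.2 * (i + 1)) * ((t >= s) & (t < s + 100))
            for i, s in enumerate(onsets)).astype(float)
print(delta_O(trace, onsets, T_on=100))   # negative: the response decays
```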

Figure S10. Dynamical optimality under a repeated external signal. (a) Schematic definition of how we study the dynamical evolution of the relevant observables, by comparing the maximal response to the first signal with the one to a signal at large times. (b) Behavior of the increase in readout information, ΔIU,H, in feedback information, ΔΔIf, in average storage population, Δ〈S〉, and in entropy production. The gray area represents the Pareto-like optimal surface in the presence of a static field, while the dashed black line indicates the 2-dimensional Pareto front obtained under the same conditions. Simulation parameters are as in Figure S7. In particular, recall that the signal is exponentially distributed, with a characteristic scale following a square wave with 〈H〉max = 10, 〈H〉min = 0.1, and Ton = Toff = 100Δt.

4. Interplay between information storage and signal duration

In the main text and thus far, we have always considered the case Ton = Toff. We now study the effect of the signal duration and pause length on sensing (Figure S11). If the system only receives short signals separated by long pauses, the slow storage build-up does not reach a high concentration. As a consequence, the negative feedback on the receptor is less effective and habituation is suppressed (Figure S11a). Therefore, the peak of ΔIU,H in the (β, σ) plane takes place below the optimal surface, as σ needs to be smaller than in the static case to boost storage production during the brief periods in which the signal is present. In Figure S11b, on the other hand, we consider the case of a long signal with short pauses. In this scenario, the slow dynamical evolution of the storage can reach large concentrations at larger values of σ, thus moving the optimal dynamical region slightly above the Pareto-like surface. Taking the 2-dimensional Pareto front as a reference instead does not qualitatively change our observations. The case of a short signal is comparable to the durations of the looming stimulations in the experimental setting (see next Section), which can be used to tune the parameters of the model to the peak of information gain.

Figure S11. Effect of the signal duration on habituation. (a) If the system only receives the signal for a short time (Ton = 50Δt < Toff = 200Δt), it does not have enough time to reach a high storage concentration. As a consequence, both Δ〈U〉 and ΔIU,H are smaller, and habituation is less effective. (b) If the system receives long signals with brief pauses (Ton = 200Δt > Toff = 50Δt), instead, the habituation mechanism promotes information storage and thus a reduction in the readout activity. Other simulation parameters are as in Figure S10. The gray area and the dashed black line represent, respectively, the 3-dimensional and 2-dimensional optimal regions where habituation emerges.

S6. Experimental Setup

Acquisitions of zebrafish brain activity were carried out in Elavl3:H2B-GCaMP6s larvae at 5 days post fertilization, raised at 28°C on a 12 h light/12 h dark cycle, in accordance with the approval of the Ethical Committee of the University of Padua (61/2020 dal Maschio). Larvae were embedded in 2% agarose gel and their brain activity was recorded using a multiphoton system with a custom 3D volumetric acquisition module. Briefly, the imaging path is based on an 8 kHz galvo-resonant commercial 2P design (Bergamo I Series, Thorlabs, Newton, NJ, United States) coupled to a Ti:Sapphire source (Chameleon Ultra II, Coherent) tuned to 920 nm for imaging GCaMP6 signals and modulated by a Pockels cell (Conoptics). The fluorescence collection path includes a 705 nm long-pass main dichroic and a 495 nm long-pass dichroic mirror transmitting the fluorescence light toward a GaAsP PMT detector (H7422PA-40, Hamamatsu) equipped with an EM525/50 emission filter. Data were acquired at 30 frames per second, using a water-dipping Nikon CFI75 LWD 16X W objective covering an effective field of view of about 450 × 900 µm with a resolution of 512 × 1024 pixels. The volumetric module is based on an electrically tunable lens (Optotune) moving continuously according to a saw-tooth waveform synchronized with the frame acquisition trigger. An entire volume of about 180−200 µm in thickness, encompassing 30 planes separated by about 7 µm, is acquired at a rate of 1 volume per second, sufficient to track the relatively slow dynamics associated with the fluorescence-based activity reporter GCaMP6s.

As for the visual stimulation, looming stimuli were generated using Stytra and presented monocularly on a 50 × 50 mm screen using a DLP4500 projector by Texas Instruments. The dark looming dot was presented 10 times with a 150 s interval, centered on the fish eye and with an l/v parameter of 8.3 s, reaching at the end of the stimulation a visual angle of 79.4°, corresponding to an angular expansion rate of 9.5°/s. The acquired temporal series were first processed using an automatic pipeline, including motion artifact correction, temporal filtering with a rectangular window 3 seconds long, and automatic segmentation using Suite2P. Then, the obtained dataset was manually curated to resolve segmentation errors and to integrate cells not detected automatically. We fit the activity profiles of about 52,000 cells with a linear regression model (scikit-learn Python library), using a set of basis functions representing the expected responses to each of the stimulation events. These were obtained by convolving square waveforms, with amplitude different from zero only during the presentation of the corresponding visual stimulus, with an exponentially decaying kernel whose timescale matches the GCaMP signal lifetime. The resulting coefficients were divided by the mean squared error of the fit to obtain a set of scores. The cells whose score fell within the top 5% of the distribution were considered for the dimensionality reduction analysis.
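A minimal Python sketch of this scoring step is given below. The kernel decay time, stimulus duration in samples, and onset times are illustrative assumptions, not the values used in the actual analysis:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

fs = 1.0                            # volume rate (Hz), as in the acquisition
T = 1600                            # number of time points (illustrative)
tau_gcamp = 3.5                     # assumed kernel decay time (s)
onsets = 50 + np.arange(10) * 150   # 10 stimuli, 150 s apart (in samples)
stim_len = 9                        # assumed stimulus duration in samples

# One regressor per looming event: square waveform convolved with an
# exponentially decaying kernel standing in for the GCaMP lifetime.
kernel = np.exp(-np.arange(30) / (tau_gcamp * fs))
X = np.zeros((T, len(onsets)))
for j, t0 in enumerate(onsets):
    square = np.zeros(T)
    square[t0: t0 + stim_len] = 1.0
    X[:, j] = np.convolve(square, kernel)[:T]

def score(activity):
    """Regression coefficients divided by the fit's mean squared error."""
    reg = LinearRegression().fit(X, activity)
    mse = mean_squared_error(activity, reg.predict(X))
    return reg.coef_ / mse

# Cells whose scores fall in the top tail of the distribution are kept
# for the dimensionality reduction analysis.
```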

The resulting fluorescence signals F(i), for i = 1, …, Ncells, were processed by removing a moving baseline to account for baseline drifting and fast oscillatory noise [74]. Briefly, for each time point t, we selected a window [t − τ2, t] and evaluated the minimum smoothed fluorescence,
$$F_0^{(i)}(t) = \min_{t' \in [t - \tau_2,\, t]} \left( \frac{1}{\tau_1} \int_{t' - \tau_1/2}^{t' + \tau_1/2} F^{(i)}(s)\, \mathrm{d}s \right).$$
Then, the relative change in fluorescence signal,
$$R^{(i)}(t) = \frac{F^{(i)}(t) - F_0^{(i)}(t)}{F_0^{(i)}(t)},$$
is smoothed with an exponential moving average. Thus, the neural activity profile for the i-th cell that we use in the main text is given by
$$\frac{\Delta F^{(i)}}{F}(t) = \frac{\int_0^{\infty} \mathrm{d}\tau\, R^{(i)}(t - \tau)\, e^{-\tau/\tau_0}}{\int_0^{\infty} \mathrm{d}\tau\, e^{-\tau/\tau_0}}.$$
In accordance with the previous literature [74], we set τ0 = 0.2 s, τ1 = 0.75 s, and τ2 = 3 s. The qualitative nature of the low-dimensional activity in the PCA space is not altered by other sensible choices of these parameters.
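For reference, a compact Python sketch of this baseline-removal pipeline is shown below, assuming a strictly positive raw trace F sampled at the 1-volume-per-second rate of the acquisition; windows are converted from seconds to samples accordingly:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

fs = 1.0                                   # volumes per second
tau0, tau1, tau2 = 0.2, 0.75, 3.0          # smoothing times (s), as in [74]

def activity_profile(F):
    """Baseline-subtracted, smoothed Delta F / F from a raw trace F."""
    smooth = uniform_filter1d(F.astype(float), max(int(round(tau1 * fs)), 1))
    w = max(int(round(tau2 * fs)), 1)
    # F0(t): minimum of the smoothed trace over the causal window [t - tau2, t]
    F0 = np.array([smooth[max(0, i - w + 1): i + 1].min()
                   for i in range(len(F))])
    R = (F - F0) / F0                      # relative change in fluorescence
    # exponential moving average with time constant tau0
    alpha = 1.0 - np.exp(-1.0 / (tau0 * fs))
    out = np.empty_like(R)
    acc = R[0]
    for i, r in enumerate(R):
        acc = alpha * r + (1.0 - alpha) * acc
        out[i] = acc
    return out
```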