Adaptation of olfactory receptor abundances for efficient coding

  1. Tiberiu Teşileanu  Is a corresponding author
  2. Simona Cocco
  3. Rémi Monasson
  4. Vijay Balasubramanian
  1. Flatiron Institute, United States
  2. City University of New York, United States
  3. University of Pennsylvania, United States
  4. École Normale Supérieure and CNRS UMR 8550, PSL Research, UPMC Sorbonne Université, France

Abstract

Olfactory receptor usage is highly heterogeneous, with some receptor types being orders of magnitude more abundant than others. We propose an explanation for this striking fact: the receptor distribution is tuned to maximally represent information about the olfactory environment in a regime of efficient coding that is sensitive to the global context of correlated sensor responses. This model predicts that in mammals, where olfactory sensory neurons are replaced regularly, receptor abundances should continuously adapt to odor statistics. Experimentally, increased exposure to odorants leads variously, but reproducibly, to increased, decreased, or unchanged abundances of different activated receptors. We demonstrate that this diversity of effects is required for efficient coding when sensors are broadly correlated, and provide an algorithm for predicting which olfactory receptors should increase or decrease in abundance following specific environmental changes. Finally, we give simple dynamical rules for neural birth and death processes that might underlie this adaptation.

https://doi.org/10.7554/eLife.39279.001

eLife digest

A mouse’s nose contains over 10 million receptor neurons divided into about 1,000 different types, which detect airborne chemicals – called odorants – that make up smells. Each odorant activates many different receptor types. And each receptor type responds to many different odorants. To identify a smell, the brain must therefore consider the overall pattern of activation across all receptor types. Individual receptor neurons in the mammalian nose live for about 30 days, before new cells replace them. The entire population of odorant receptor neurons turns over every few weeks, even in adults.

Studies have shown that some types of these receptor neurons are used more often than others, depending on the species, and are therefore much more abundant. Moreover, the usage patterns of different receptor types can also change when individual animals are exposed to different smells. Teşileanu et al. set out to develop a computer model that can explain these observations.

The results revealed that the nose adjusts its odorant receptor neurons to provide the brain with as much information as possible about typical smells in the environment. Because each smell consists of multiple odorants, each odorant is more likely to occur alongside certain others. For example, the odorants that make up the scent of a flower are more likely to occur together than alongside the odorants in diesel. The nose takes advantage of these relationships by adjusting the abundance of the receptor types in line with them. Teşileanu et al. show that exposure to odorants leads to reproducible increases or decreases in different receptor types, depending on what would provide the brain with most information.

The number of odorant receptor neurons in the human nose decreases with time. The current findings could help scientists understand how these changes affect our sense of smell as we age. This will require collaboration between experimental and theoretical scientists to measure the odors typical of our environments, and work out how our odorant receptor neurons detect them.

https://doi.org/10.7554/eLife.39279.002

Introduction

The sensory periphery acts as a gateway between the outside world and the brain, shaping what an organism can learn about its environment. This gateway has a limited capacity (Barlow, 1961), restricting the amount of information that can be extracted to support behavior. On the other hand, signals in the natural world typically contain many correlations that limit the unique information that is actually present in different signals. The efficient-coding hypothesis, a key normative theory of neural circuit organization, puts these two facts together, suggesting that the brain mitigates the issue of limited sensory capacity by eliminating redundancies implicit in the correlated structure of natural stimuli (Barlow, 1961; van Hateren, 1992a). This idea has led to elegant explanations of functional and circuit structure in the early visual and auditory systems (see, e.g. Laughlin, 1981; Atick and Redlich, 1990; Van Hateren, 1993; Olshausen and Field, 1996; Simoncelli and Olshausen, 2001; Fairhall et al., 2001; Lewicki, 2002; Ratliff et al., 2010; Garrigan et al., 2010; Tkacik et al., 2010; Hermundstad et al., 2014; Palmer et al., 2015; Salisbury and Palmer, 2016). These classic studies lacked a way to test causality by predicting how changes in the environment lead to adaptive changes in circuit composition or architecture. We propose that the olfactory system provides an avenue for such a causal test because receptor neuron populations in the mammalian nasal epithelium are regularly replaced, leading to the possibility that their abundances might adapt efficiently to the statistics of the environment.

The olfactory epithelium in mammals and the antennae in insects are populated by large numbers of olfactory sensory neurons (OSNs), each of which expresses a single kind of olfactory receptor. Each type of receptor binds to many different odorants, and each odorant activates many different receptors, leading to a complex encoding of olfactory scenes (Malnic et al., 1999). Olfactory receptors form the largest known gene family in mammalian genomes, with hundreds to thousands of members, owing perhaps to the importance that olfaction has for an animal’s fitness (Buck and Axel, 1991; Tan et al., 2015; Chess et al., 1994). Independently evolved large olfactory receptor families can also be found in insects (Missbach et al., 2014). Surprisingly, although animals possess diverse repertoires of olfactory receptors, their expression is actually highly non-uniform, with some receptors occurring much more commonly than others (Rospars and Chambille, 1989; Ibarra-Soria et al., 2017). In addition, in mammals, the olfactory epithelium experiences neural degeneration and neurogenesis, resulting in replacement of the OSNs every few weeks (Graziadei and Graziadei, 1979). The distribution of receptors resulting from this replacement has been found to have a mysterious dependence on olfactory experience (Schwob et al., 1992; Santoro and Dulac, 2012; Zhao et al., 2013; Dias and Ressler, 2014; Cadiou et al., 2014; Ibarra-Soria et al., 2017): increased exposure to specific ligands leads reproducibly to more receptors of some types, and no change or fewer receptors of other types.

Here, we show that these puzzling observations are predicted if the receptor distribution in the olfactory epithelium is organized to present a maximally informative picture of the odor environment. Specifically, we propose a model for the quantitative distribution of olfactory sensory neurons by receptor type. The model predicts that in a noisy odor environment: (a) the distribution of receptor types will be highly non-uniform, but reproducible given fixed receptor affinities and odor statistics; and (b) an adapting receptor neuron repertoire should reproducibly reflect changes in the olfactory environment; in a sense it should become what it smells. Precisely such findings are reported in experiments (Schwob et al., 1992; Santoro and Dulac, 2012; Zhao et al., 2013; Dias and Ressler, 2014; Cadiou et al., 2014; Ibarra-Soria et al., 2017).

In contrast to previous work applying efficient-coding ideas to the olfactory system (Keller and Vosshall, 2007; McBride et al., 2014; Zwicker et al., 2016; Krishnamurthy et al., 2017), here we take the receptor–odorant affinities to be fixed quantities and do not attempt to explain their distribution or their evolution and diversity across species. Instead, we focus on the complementary question of the optimal way in which the olfactory system can use the available receptor genes. This allows us to focus on phenomena that occur on faster timescales, such as the reorganization of the receptor repertoire as a result of neurogenesis in the mammalian epithelium.

Because of the combinatorial nature of the olfactory code (Malnic et al., 1999; Stopfer et al., 2003; Stevens, 2015; Zhang and Sharpee, 2016; Zwicker et al., 2016; Krishnamurthy et al., 2017), receptor neuron responses are highly correlated. In the absence of such correlations, efficient coding predicts that output power will be equalized across all channels if transmission limitations dominate (Srinivasan et al., 1982; Olshausen and Field, 1996; Hermundstad et al., 2014), or that most resources will be devoted to receptors whose responses are most variable if input noise dominates (van Hateren, 1992a; Hermundstad et al., 2014). Here, we show that the optimal solution is very different when the system of sensors is highly correlated: the adaptive change in the abundance of a particular receptor type depends critically on the global context of the correlated responses of all the receptor types in the population; we refer to this as context-dependent adaptation.

Correlations between the responses of olfactory receptor neurons are inevitable not only because the same odorant binds to many different receptors, but also because odors in the environment are typically composed of many different molecules, leading to correlations between the concentrations with which these odorants are encountered. Furthermore, there is no way for neural circuitry to remove these correlations in the sensory epithelium because the candidate lateral inhibition occurs downstream, in the olfactory bulb. As a result of these constraints, for an adapting receptor neuron population, our model predicts that increased activation of a given receptor type may lead to more, fewer or unchanged numbers of the receptor, but that this apparently sporadic effect will actually be reproducible between replicates. This counter-intuitive prediction matches experimental observations (Santoro and Dulac, 2012; Zhao et al., 2013; Cadiou et al., 2014; Ibarra-Soria et al., 2017).

Olfactory response model

In vertebrates, axons from olfactory neurons converge in the olfactory bulb on compact structures called glomeruli, where they form synapses with dendrites of downstream neurons (Hildebrand and Shepherd, 1997); see Figure 1a. To good approximation, each glomerulus receives axons from only one type of OSN, and all OSNs expressing the same receptor type converge onto a small number of glomeruli, ranging from an average of about two in mice to about 16 in humans (Maresh et al., 2008). Similar architectures can be found in insects (Vosshall et al., 2000).

Sketch of the olfactory periphery as described in our model.

(a) Sketch of olfactory anatomy in vertebrates. The architecture is similar in insects, with the OSNs and the glomeruli located in the antennae and antennal lobes, respectively. Different receptor types are represented by different colors in the diagram. Glomerular responses (bar plot on top right) result from mixtures of odorants in the environment (bar plot on bottom left). The response noise, shown by black error bars, depends on the number of receptor neurons of each type, illustrated in the figure by the size of the corresponding glomerulus. Glomeruli receiving input from a small number of OSNs have higher variability due to receptor noise (e.g., OSN, glomerulus, and activity bar in green), while those receiving input from many OSNs have smaller variability. Response magnitudes depend also on the odorants present in the medium and the affinity profile of the receptors. (b) We approximate glomerular responses using a linear model based on a ‘sensing matrix’ $S$, perturbed by Gaussian noise $\eta_a$; $K_a$ are the numbers of OSNs of each type.

https://doi.org/10.7554/eLife.39279.003

The anatomy shows that in insects and vertebrates, olfactory information passed to the brain can be summarized by activity in the glomeruli. We treat this activity in a firing-rate approximation, which allows us to use available receptor affinity data (Hallem and Carlson, 2006; Saito et al., 2009). This approximation neglects individual spike times, which can contain important information for odor discrimination in mammals and insects (Resulaj and Rinberg, 2015; DasGupta and Waddell, 2008; Wehr and Laurent, 1996; Huston et al., 2015). Given data relating spike timing and odor exposure for different odorants and receptors, we could use the time from respiratory onset to the first elicited spike in each receptor as an indicator of activity in our model. Alternatively, we could use both the timing and the firing rate information together. Such data is not yet available for large panels of odors and receptors, and so we leave the inclusion of timing effects for future work.

A challenge specific to the study of the olfactory system as compared to other senses is the limited knowledge we have of the space of odors. It is difficult to identify common features shared by odorants that activate a given receptor type (Rossiter, 1996; Malnic et al., 1999), while attempts at defining a notion of distance in olfactory space have had only partial success (Snitz et al., 2013), as have attempts to find reduced-dimensionality representations of odor space (Zarzo and Stanton, 2006; Koulakov et al., 2011). In this work, we simply model the olfactory environment as a vector $\mathbf{c} = \{c_1, \ldots, c_N\}$ of concentrations, where $c_i$ is the concentration of odorant $i$ in the environment (Figure 1a). We note, however, that the formalism we describe here is equally applicable for other parameterizations of odor space: the components $c_i$ of the environment vector $\mathbf{c}$ could, for instance, indicate concentrations of entire classes of molecules clustered based on common chemical traits, or they might be abstract coordinates in a low-dimensional representation of olfactory space.

Once a parameterization for the odor environment is chosen, we model the statistics of natural scenes by the joint probability distribution $P(c_1, \ldots, c_N)$. We are neglecting temporal correlations in olfactory cues because we are focusing on odor identity rather than olfactory search, where timing of cues will be especially important. This simplifies our model, and also reduces the number of olfactory scene parameters needed as inputs. Similar static approximations of natural images have been employed powerfully along with the efficient coding hypothesis to explain diverse aspects of early vision (e.g., in Laughlin, 1981; Atick and Redlich, 1990; Olshausen and Field, 1996; van Hateren and van der Schaaf, 1998; Ratliff et al., 2010; Hermundstad et al., 2014).

To construct a tractable model of the relation between natural odor statistics and olfactory receptor distributions, we describe the olfactory environment as a multivariate Gaussian with mean 𝐜0 and covariance matrix Γ,

(1) environment: $P(\mathbf{c}) \sim \mathcal{N}(\mathbf{c}_0, \Gamma)$.

This can be thought of as a maximum-entropy approximation of the true distribution of odorant concentrations, constrained by the environmental means and covariances. This simple environmental model misses some sparse structure that is typical in olfactory scenes (Yu et al., 2015; Krishnamurthy et al., 2017). Nevertheless, approximating natural distributions with Gaussians is common in the efficient-coding literature, and often captures enough detail to be predictive (van Hateren, 1992a; van Hateren, 1992b; Van Hateren, 1993; Hermundstad et al., 2014). This may be because early sensory systems in animals are able to adapt more effectively to low-order statistics which are easily represented by neurons in their mean activity and pairwise correlations.
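
To make the environment model concrete, the following sketch draws odor scenes from the Gaussian of Equation (1). It assumes a numpy-based workflow; the low-rank-plus-diagonal construction of $\Gamma$ is only an illustrative stand-in for the procedure detailed in Appendix 4, and the numbers (110 odorants, mean concentration 0.1) are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 110                       # number of odorants (placeholder)
c0 = np.full(N, 0.1)          # mean concentration vector c_0 (placeholder)

# Illustrative random covariance: a low-rank factor plus a diagonal offset
# guarantees a positive-definite Gamma; the paper's actual construction is
# described in Appendix 4.
F = rng.standard_normal((N, N // 4))
Gamma = F @ F.T / (N // 4) + 0.1 * np.eye(N)

# Draw a batch of odor scenes c ~ N(c_0, Gamma)
scenes = rng.multivariate_normal(c0, Gamma, size=1000)   # shape (1000, N)
```

Note that a multivariate Gaussian allows negative concentration values; as stated above, this distribution is meant as a maximum-entropy approximation constrained by means and covariances, not as a literal model of odor scenes.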

The number N of odorants that we use to represent an environment need not be as large as the total number of possible volatile molecules. We can instead focus on only those odorants that are likely to be encountered at meaningful concentrations by the organism that we study, leading to a much smaller value for N. In practice, however, we are limited by the available receptor affinity data. Our quantitative analyses are generally based on data measured using panels of 110 odorants in fly (Hallem and Carlson, 2006) and 63 in mammals (Saito et al., 2009).

We next build a model for how the activity at the glomeruli depends on the olfactory environment. We work in an approximation in which the responses depend linearly on the concentration values:

(2) $r_a = K_a \sum_i S_{ai} c_i + \sqrt{K_a}\, \eta_a$,

where $r_a$ is the response of the glomerulus indexed by $a$, $S_{ai}$ is the expected response of a single sensory neuron expressing receptor type $a$ to a unit concentration of odorant $i$, and $K_a$ is the number of neurons of type $a$. The second term describes noise, with $\eta_a$, the noise for a single OSN, modeled as a Gaussian with mean 0 and standard deviation $\sigma_a$, that is $\eta_a \sim \mathcal{N}(0, \sigma_a^2)$.

The approximation we are using can be seen as linearizing the responses of olfactory sensory neurons around an operating point. This has been shown to accurately capture the response of olfactory receptors to odor mixtures in certain concentration ranges (Singh et al., 2018). While odor concentrations in natural scenes span many orders of magnitude and are unlikely to always stay within the linear regime, the effect of the nonlinearities on the information maximization procedure that we implement below is less strong (see Appendix 3 for a comparison between our linear approximation and a nonlinear, competitive binding model in a toy example). One advantage of employing the linear approximation is that it requires a minimal set of parameters (the sensing matrix coefficients Sai), while nonlinear models in general require additional information (such as a Hill coefficient and a maximum activation for each receptor-odorant pair for a competitive binding model; see Appendix 3).
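
As an illustration of Equation (2), the sketch below computes noisy glomerular responses for a single odor scene. The lognormal sensing matrix, OSN counts, and noise level are random stand-ins for measured affinity data rather than values from the paper; `glomerular_responses` is a hypothetical helper name.

```python
import numpy as np

rng = np.random.default_rng(1)

M, N = 24, 110                                        # receptor types, odorants
S = rng.lognormal(mean=-1.0, sigma=1.0, size=(M, N))  # stand-in sensing matrix
K = rng.integers(10, 200, size=M).astype(float)       # OSN counts per receptor type
sigma = np.full(M, 0.5)                               # single-OSN noise std (placeholder)

def glomerular_responses(c, S, K, sigma, rng):
    """Linear response model of Equation (2):
    r_a = K_a * sum_i S_ai c_i + sqrt(K_a) * eta_a, with eta_a ~ N(0, sigma_a^2)."""
    eta = rng.normal(0.0, sigma)                      # one noise draw per glomerulus
    return K * (S @ c) + np.sqrt(K) * eta

c = np.abs(rng.normal(0.1, 0.05, size=N))             # one sample odor scene
r = glomerular_responses(c, S, K, sigma, rng)
```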

Information maximization

We quantify the information that responses, $\mathbf{r} = (r_1, \ldots, r_M)$, contain about the environment vector, $\mathbf{c} = (c_1, \ldots, c_N)$, using the mutual information $I(\mathbf{r}, \mathbf{c})$:

(3) $I(\mathbf{r}, \mathbf{c}) = \int d^M r\, d^N c\; P(\mathbf{r}, \mathbf{c}) \log \left[ \frac{P(\mathbf{r} \mid \mathbf{c})}{P(\mathbf{r})} \right]$,

where P(𝐫,𝐜) is the joint probability distribution over response and concentration vectors, P(𝐫|𝐜) is the distribution of responses conditioned on the environment, and P(𝐫) is the marginal distribution of the responses alone. Given our assumptions, all these distributions are Gaussian, and the integral can be evaluated analytically (see Appendix 2). The result is

(4) $I(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \operatorname{Tr} \log \left( \mathbb{I} + K \Sigma^{-1} Q \right)$,

where the overlap matrix Q is related to the covariance matrix Γ of odorant concentrations (from Equation (1)),

(5) $Q = S \Gamma S^T$,

and K and Σ are diagonal matrices of OSN abundances Ka and noise variances σa2, respectively:

(6) $K = \operatorname{diag}(K_1, \ldots, K_M), \qquad \Sigma = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_M^2)$.

The overlap matrix $Q$ is equal to the covariance matrix of OSN responses in the absence of noise ($\sigma_a = 0$; see Appendix 2). Thus, it is a measure of the strength of the usable olfactory signal. In contrast, the quantity $\Sigma K^{-1}$ is a measure of the amount of noise in the responses, where the term $K^{-1}$ corresponds to the effect of averaging over OSNs of the same type. This implies that the quantity $K \Sigma^{-1} Q$ is a measure of the signal-to-noise ratio (SNR) in the system (more precisely, its square), so that Equation (4) represents a generalization to multiple, correlated channels of the classical result for a single Gaussian channel, $I = \frac{1}{2} \log(1 + \mathrm{SNR}^2)$ (Shannon, 1948; van Hateren, 1992a; van Hateren, 1992b). In the linear approximation that we are using, the information transmitted through the system is the same whether all OSNs with the same receptor type converge to one or multiple glomeruli (see Appendix 2). Because of this, for convenience we take all neurons of a given type to converge onto a single glomerulus (Figure 1a).
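
Equation (4) is straightforward to evaluate numerically. A minimal sketch, assuming the diagonals of $K$ and $\Sigma$ are stored as 1D arrays and $Q$ as an $M \times M$ array, is:

```python
import numpy as np

def mutual_information(K, sigma2, Q):
    """Equation (4): I = 1/2 Tr log(I + K Sigma^{-1} Q).
    K and sigma2 are 1D arrays holding the diagonals of K and Sigma;
    Q is the M x M overlap matrix from Equation (5)."""
    M = len(K)
    snr2 = (K / sigma2)[:, None] * Q        # row a of Q scaled by K_a / sigma_a^2
    sign, logdet = np.linalg.slogdet(np.eye(M) + snr2)
    return 0.5 * logdet
```

Here the trace of the matrix logarithm is computed as a log-determinant, using the identity Tr log X = log det X.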

The OSN numbers $K_a$ cannot grow without bound; they are constrained by the total number of neurons in the olfactory epithelium. Thus, to find the optimal distribution of receptor types, we maximize $I(\mathbf{r}, \mathbf{c})$ with respect to $\{K_a\}$, subject to the constraints that: (1) the total number of receptor neurons is fixed ($\sum_a K_a = K_{\mathrm{tot}}$); and (2) all neuron numbers are non-negative:

(7) $\{K_a\} = \underset{K_a \geq 0,\ \sum_a K_a = K_{\mathrm{tot}}}{\arg\max}\ I(\mathbf{r}, \mathbf{c})$.

Throughout the paper, we treat the OSN abundances Ka as real numbers instead of integers, which is a good approximation as long as they are not very small. The optimization can be performed analytically using the Karush-Kuhn-Tucker (KKT) conditions (Boyd and Vandenberghe, 2004) (see Appendix 2), but in practice it is more convenient to use numerical optimization.
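
In practice, the constrained maximization in Equation (7) can be handled by an off-the-shelf solver. The sketch below uses scipy's SLSQP method with the non-negativity bounds and the sum constraint; `optimal_abundances` is a hypothetical helper name, the random sensing matrix, environment, and noise levels are placeholders, and the authors' own implementation is the Matlab script linked later in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def mutual_information(K, sigma2, Q):
    """Equation (4), as in the previous sketch."""
    M = len(K)
    return 0.5 * np.linalg.slogdet(np.eye(M) + (K / sigma2)[:, None] * Q)[1]

def optimal_abundances(Q, sigma2, K_tot):
    """Numerically solve Equation (7): maximize I over K_a >= 0
    subject to sum_a K_a = K_tot."""
    M = Q.shape[0]
    result = minimize(
        lambda K: -mutual_information(K, sigma2, Q),   # minimize -I
        x0=np.full(M, K_tot / M),                      # start from a uniform repertoire
        method="SLSQP",
        bounds=[(0.0, K_tot)] * M,
        constraints=[{"type": "eq", "fun": lambda K: K.sum() - K_tot}],
    )
    return result.x

# toy usage with random stand-ins for S, Gamma, and the noise variances
rng = np.random.default_rng(2)
S = rng.lognormal(-1.0, 1.0, size=(24, 110))
F = rng.standard_normal((110, 30))
Gamma = F @ F.T / 30 + 0.1 * np.eye(110)
Q = S @ Gamma @ S.T
sigma2 = np.full(24, 0.25)
K_opt = optimal_abundances(Q, sigma2, K_tot=2000.0)
```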

Note that in contrast to other work that has used information maximization to study the olfactory system (e.g. Zwicker et al., 2016), here we optimize over the OSN numbers $K_a$, while keeping the affinity profiles of the receptors (given by the sensing matrix elements $S_{ai}$) constant. Below we analyze how the optimal distribution of receptor types depends on receptor affinities, odor statistics, and the size of the olfactory epithelium.

Receptor diversity grows with OSN population size

Large OSN populations

In our model, receptor noise is reduced by averaging over the responses from many sensory neurons. As the total number of neurons $K_{\mathrm{tot}}$ increases, the signal-to-noise ratio (SNR) becomes very large (see Equation (2)). When this happens, the optimization with respect to OSN numbers $K_a$ can be solved analytically (see Appendix 2), and we find that the optimal receptor distribution is given by

(8) $K_a \approx K_a^{\mathrm{approx}} = \frac{K_{\mathrm{tot}}}{M} - \left( \sigma_a^2 A_{aa} - \overline{\sigma^2 A} \right)$,

where $A$ is the inverse of the overlap matrix $Q$ from Equation (5), $A = Q^{-1}$, $\sigma_a^2$ are the receptor noise variances (Equation (6)), and $\overline{\sigma^2 A} = \sum_a \sigma_a^2 A_{aa} / M$ is a constant enforcing the constraint $\sum_a K_a = K_{\mathrm{tot}}$. When $K_{\mathrm{tot}}$ is sufficiently large, the constant first term dominates, meaning that the receptor distribution is essentially uniform, with each receptor type being expressed in a roughly equal fraction of the total population of sensory neurons. In this limit, the receptor distribution is as even and as diverse as possible given the genetically encoded receptor types. The small differences in abundance are related to the diagonal elements of the inverse overlap matrix $A$, modulated by the noise variances $\sigma_a^2$ (Figure 2a). The information maximum in this regime is shallow because only a change in OSN numbers of order $K_{\mathrm{tot}}/M$ can have a significant effect on the noise level for the activity of each glomerulus. Put another way, when the OSN numbers $K_a$ are very large, the glomerular responses are effectively noiseless, and the number of receptors of each type has little effect on the reliability of the responses. This scenario applies as long as the OSN abundances $K_a$ are much larger than the elements of the inverse overlap matrix $A$.
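
In code, the high-SNR approximation of Equation (8) amounts to a few lines. This is a sketch assuming $Q$ is invertible and that $K_{\mathrm{tot}}$ is large enough for the approximation to apply; `high_snr_abundances` is a hypothetical helper name.

```python
import numpy as np

def high_snr_abundances(Q, sigma2, K_tot):
    """Equation (8): K_a ~ K_tot/M - (sigma_a^2 A_aa - mean_a[sigma_a^2 A_aa]),
    where A = Q^{-1}."""
    A_diag = np.diag(np.linalg.inv(Q))
    excess = sigma2 * A_diag
    return K_tot / len(A_diag) - (excess - excess.mean())
```

By construction the correction term sums to zero across receptor types, so the constraint that the abundances add up to $K_{\mathrm{tot}}$ is satisfied automatically.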

Structure of a well-adapted receptor distribution.

In panels (a–c) the receptor sensing matrix is based on Drosophila (Hallem and Carlson, 2006) and includes 24 receptors responding to 110 odorants. In panels (d–e), the total number of OSNs $K_{\mathrm{tot}}$ is fixed at 4000. In all panels, environmental odor statistics follow a random correlation matrix (see Appendix 4). Qualitative aspects are robust to variations in these choices (see Appendix 1). (a) Large OSN populations should have high receptor diversity (types represented by strips of different colors), and should use receptor types uniformly. (b) Small OSN populations should express fewer receptor types, and should use receptors non-uniformly. (c) New receptor types are expressed in a series of step transitions as the total number of neurons increases. Here, the odor environments and the receptor affinities are held fixed as the OSN population size is increased. (d) Correlation between the abundance of a given receptor type, $K_a$, and the logarithm of its signal-to-noise ratio in olfactory scenes, $\log(Q_{aa}/\sigma_a^2)$, shown here as a function of the tuning of the receptors. For every position along the x-axis, sensing matrices with a fixed receptor tuning width were generated from a random ensemble, where the tuning width indicates what fraction of all odorants elicit a strong response for the receptors (see Appendix 1). When each receptor responds strongly to only a small number of odorants, response variance is a good predictor of abundance, while this is no longer true for wide tuning. (e) Receptor abundances correlate well with the diagonal elements of the inverse overlap matrix normalized by the noise variances, $\sigma_a^2 (Q^{-1})_{aa}$, for all tuning widths. In panels (d–e), the red line is the mean obtained from 24 simulations, each performed using a different sensing matrix, and the light gray area shows the interval between the 20th and 80th percentiles of results. (f) Number of intact olfactory receptor (OR) genes found in different species of mammals as a function of the area of the olfactory epithelium normalized to account for allometric scaling of neuron density (Herculano-Houzel et al., 2015; see main text). We use this as a proxy for the number of neurons in the olfactory epithelium. Dashed line is a least-squares fit. Number of intact OR genes from (Niimura et al., 2014), olfactory surface area data from (Moulton, 1967; Pihlström et al., 2005; Gross et al., 1982; Smith et al., 2014), and weight data from (Rousseeuw and Leroy, 1987; FCI, 2018; Gross et al., 1982; Smith et al., 2014).

https://doi.org/10.7554/eLife.39279.004

Small and intermediate-sized OSN populations

When the number of neurons is very small, receptor noise can overwhelm the response to the environment. In this case, the best strategy is to focus all the available neurons on a single receptor type, thus reducing noise by summation as much as possible (Figure 2b). The receptor type that yields the most information will be the one whose response is most variable in natural scenes as compared to the amount of receptor noise; that is, the one that corresponds to the largest value of $Q_{aa}/\sigma_a^2$ (see Appendix 2 for a derivation). This is reminiscent of a result in vision where the variance of a stimulus predicted its perceptual salience (Hermundstad et al., 2014).

As the total number of neurons increases, the added benefit of summing to lower noise for a single receptor type diminishes, and at some critical value it is more useful to populate a second receptor type that provides unique information not available in responses of the first type (Figure 2b). This process continues as the number of neurons increases, so that in an intermediate SNR range, where noise is significant but does not overwhelm the olfactory signal, our model leads to a highly non-uniform distribution of receptor types (see the trend in Figure 2b as the number of OSNs increases). Indeed, an inhomogeneous distribution of this kind is seen in mammals (Ibarra-Soria et al., 2017). Broadly, this is consistent with the idea that living systems conserve resources to the extent possible, and thus the number of OSNs (and therefore the SNR) will be selected to be in an intermediate range in which there are just enough to make all the available receptors useful.

Increasing OSN population size

Our model predicts that, all else being equal, the number of receptor types that are expressed should increase monotonically with the total number of sensory neurons, in a series of step transitions (see Figure 2c). Strictly speaking, this is a prediction that applies in a constant olfactory environment and with a fixed receptor repertoire; in terms of the parameters in our model, the total number of neurons Ktot is varied while the sensing matrix S and environmental statistics Γ stay the same. Keeping in mind that these conditions are not usually met by distinct species, we can nevertheless ask whether, broadly speaking, there is a relation between the number of functional receptor genes and the size of the olfactory epithelium in various species.

To this end, we looked at several mammals for which the number of OR genes and the size of the olfactory epithelium were measured (Figure 2f). We focused on the intact OR genes (Niimura et al., 2014), based on the expectation that receptor genes that tend to not be used are more likely to undergo deleterious mutations. We have not found many direct measurements of the number of neurons in the epithelium for different species, so we estimated this based on the area of the olfactory epithelium (Moulton, 1967; Pihlström et al., 2005; Gross et al., 1982; Smith et al., 2014). There is a known allometric scaling relation stating that the number of neurons per unit mass for a species decreases as the 0.3 power of the typical body mass (Herculano-Houzel et al., 2015). Assuming a fixed number of layers in the olfactory epithelial sheet, this implies that the number of neurons in the epithelium should scale as $N_{\mathrm{OSN}} \propto (\text{epithelial area}) / (\text{body mass})^{\frac{2}{3} \times 0.3}$. We applied this relation to epithelial areas using the typical mass of several species (Rousseeuw and Leroy, 1987; FCI, 2018; Gross et al., 1982; Smith et al., 2014). The trend is consistent with expectations from our model (Figure 2f), keeping in mind uncertainties due to species differences in olfactory environments, receptor affinities, and behavior (e.g. consider marmoset vs. rat). A direct comparison is more complicated in insects, where even closely related species can vary widely in degree of specialization and thus can experience very different olfactory environments (Dekker et al., 2006). As we discuss below, our model’s detailed predictions can be more specifically tested in controlled experiments that measure the effect of a known change in odor environment on the olfactory receptor distributions of individual mammals, as in Ibarra-Soria et al. (2017).

Optimal OSN abundances are context-dependent

We can predict the optimal distribution of receptor types given the sensing matrix $S$ and the statistics of odors by maximizing the mutual information in Equation (4) while keeping the total number of neurons $K_{\mathrm{tot}} = \sum_a K_a$ constant. We tested the effect of changing the variance of a single odorant, and found that the effect on the optimal receptor abundances depends on the context of the background olfactory environment. Increased exposure to a particular ligand can lead to increased abundance of a given receptor type in one context, but to decreased abundance in another (Figure 3). In fact, patterns of this kind have been reported in recent experiments (Santoro and Dulac, 2012; Zhao et al., 2013; Cadiou et al., 2014; Ibarra-Soria et al., 2017). To understand this context-dependence better, we analyzed the predictions of our model in various signal and noise scenarios.
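
This context-dependence can be reproduced with the ingredients sketched above. The snippet below perturbs the variance of the same odorant in two different random background environments and compares the sign of the resulting abundance changes; the sensing matrix, noise levels, backgrounds, and the ten-fold variance boost are illustrative stand-ins rather than the parameters used for Figure 3.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_abundances(Q, sigma2, K_tot):
    """Compact version of the Equation (7) solver sketched earlier."""
    M = Q.shape[0]
    info = lambda K: 0.5 * np.linalg.slogdet(np.eye(M) + (K / sigma2)[:, None] * Q)[1]
    res = minimize(lambda K: -info(K), np.full(M, K_tot / M), method="SLSQP",
                   bounds=[(0.0, K_tot)] * M,
                   constraints=[{"type": "eq", "fun": lambda K: K.sum() - K_tot}])
    return res.x

def random_covariance(N, rank, rng):
    F = rng.standard_normal((N, rank))
    return F @ F.T / rank + 0.1 * np.eye(N)

rng = np.random.default_rng(3)
M, N, K_tot = 24, 110, 2000.0
S = rng.lognormal(-1.0, 1.0, size=(M, N))        # stand-in sensing matrix
sigma2 = np.full(M, 0.25)                        # placeholder noise variances

odorant = 0                                      # the ligand whose exposure increases
for context in range(2):                         # two different background environments
    Gamma = random_covariance(N, 30, rng)
    K_before = optimal_abundances(S @ Gamma @ S.T, sigma2, K_tot)
    Gamma_pert = Gamma.copy()
    Gamma_pert[odorant, odorant] *= 10.0         # increased exposure to that odorant
    K_after = optimal_abundances(S @ Gamma_pert @ S.T, sigma2, K_tot)
    print("context", context, "sign of change:", np.sign(K_after - K_before)[:8])
```

Receptors for which the sign of the change differs between the two contexts, despite the identical perturbation, illustrate the effect summarized in Figure 3.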

Comparison of changes in receptor abundances when the same perturbation is applied to two different environments.

One hundred different pairs of environments were generated, with each environment defined by a random odor covariance matrix (procedure in Appendix 4, parameter $\beta = 8$). In each pair of environments ($i = 1, 2$), the variance of a randomly chosen odorant was increased (details in Appendix 4) to produce perturbed environments. For each receptor, we computed the optimal abundance in environment $i$ before and after the perturbation ($K_i$ and $K_i'$) and the difference $\Delta K_i = K_i' - K_i$. The background environments $i = 1, 2$ in each pair set the context for the adaptive change after the perturbation. We used a sensing matrix based on fly affinity data (Hallem and Carlson, 2006; 24 receptors, 110 odorants) and set the total OSN number to $K_{\mathrm{tot}} = 2000$. Panel (b) zooms in on the central part of panel (a). In light blue regions, the sign of the abundance change is the same in the two contexts; light pink regions indicate opposite sign changes in the two contexts. In both figures, dark red indicates high-density regions where there are many overlapping data points.

https://doi.org/10.7554/eLife.39279.005

One factor that does not affect the optimal receptor distribution in our model is the average concentration vector 𝐜0. This is because it corresponds to odors that are always present and therefore offer no new information about the environment. This is consistent with experiment (Ibarra-Soria et al., 2017), where it was observed that chronic odor exposure does not affect receptor abundances in the epithelium. In the rest of the paper, we thus restrict our attention to the covariance matrix of odorant concentrations, Γ.

The problem of maximizing the amount of information that OSN responses convey about the odor environment simplifies considerably if these responses are weakly correlated. In this case, standard efficient coding theory says that receptors whose activities fluctuate more extensively in response to the olfactory environment provide more information to the brain, while receptors that are active at a constant rate or are very noisy provide less information. In this circumstance, neurons expressing receptors with large signal-to-noise ratio (SNR, i.e. signal variance as compared to noise variance) should increase in proportion relative to neurons with low signal-to-noise ratio (see Appendix 2 for a derivation). In terms of our model, the signal variance of glomerular responses is given by the diagonal elements of the overlap matrix $Q$ (Equation 5), while the noise variance is $\sigma_a^2$; so we expect $K_a$, the number of OSNs of type $a$, to increase with $Q_{aa}/\sigma_a^2$. Responses are less correlated if receptors are narrowly tuned, and we find indeed that if each receptor type responds to only a small number of odorants, the abundances of OSNs of each type correlate well with their variability in the environment (narrow-tuning side of Figure 2d). This is also consistent with the results at high SNR: we saw above that in that case $K_a \approx C - \sigma_a^2 (Q^{-1})_{aa}$, and when response correlations are weak, $Q$ is approximately diagonal, and thus $(Q^{-1})_{aa} \approx 1/Q_{aa}$.

The biological setting is better described in terms of widely tuned sensing matrices (Hallem and Carlson, 2006), and an intermediate SNR level in which noise is important, but does not dominate the responses of most receptors. We therefore generated sensing matrices with varying tuning width by changing the number of odorants that elicit strong activity in each receptor (as detailed in Appendix 1). We found that as receptors begin responding to a greater diversity of odorants, the correlation structure of their activity becomes important in determining the optimal receptor distribution; it is no longer sufficient to just examine the signal to noise ratios of each receptor type separately as a conventional theory suggests (wide-tuning side of Figure 2d). In other words, the optimal abundance of a receptor type depends not just on its activity level, but also on the context of the correlated activity levels of all the other receptor types. These correlations are determined by the covariance structures of the environment and of the sensing matrix.

In fact, across the range of tuning widths the optimal receptor abundances $K_a$ are correlated with the inverse of the overlap matrix, $A = Q^{-1}$ (Figure 2e). For narrow tuning widths, the overlap matrix $Q$ is approximately diagonal (because correlations between receptors are weak) and so $Q^{-1}$ is simply the matrix of the inverse diagonal elements of $Q$. Thus, in this limit, the correlation with $Q^{-1}$ simply follows from the correlation with $Q$ that we discussed above. As the tuning width increases keeping the total number of OSNs $K_{\mathrm{tot}}$ constant, the responses from each receptor grow stronger, increasing the SNR, even as the off-diagonal elements of the overlap matrix $Q$ become significant. In the limit of high SNR, the analytical formula $K_a \approx C - \sigma_a^2 (Q^{-1})_{aa}$ (Equation 8) ensures that the OSN numbers $K_a$ are still correlated with the diagonal elements of $Q^{-1}$, despite the presence of large off-diagonal components. Because of the matrix inversion in $Q^{-1}$, the optimal abundance for each receptor type is affected in this case by the full covariance structure of all the responses and not just by the variance $Q_{aa}$ of the receptor itself. Mathematically, this is because the diagonal elements of $Q^{-1}$ are functions of all the variances and covariances in the overlap matrix $Q$. This dependence of each abundance on the full covariance translates to a complex context-dependence whereby changing the same ligand in different background environments can lead to very different adapted distributions of receptors. In Appendix 6 we show that the correlation with the inverse overlap matrix has an intuitive interpretation: receptors which either do not fluctuate much or whose values can be guessed based on the responses of other receptors should have low abundances.

Environmental changes lead to complex patterns of OSN abundance changes

To investigate how the structure of the optimal receptor repertoire varies with the olfactory environment, we first constructed a background in which the concentrations of 110 odorants were distributed according to a Gaussian with a randomly chosen covariance matrix (e.g., Figure 4a; see Appendix 4 for details). From this base, we generated two different environments by adding a large variance to 10 odorants in environment 1, and to 10 different odorants in environment 2 (Figure 4b). We then considered the optimal distribution in these environments for a repertoire of 24 receptor types with odor affinities inferred from (Hallem and Carlson, 2006). We found that when the number of olfactory sensory neurons Ktot is large, and thus the signal-to-noise ratio is high, the change in odor statistics has little effect on the distribution of receptors (Figure 4c). This is because at high SNR, all the receptors are expressed nearly uniformly as discussed above, and this is true in any environment. When the number of neurons is smaller (or, equivalently, the signal-to-noise ratio is in a low or intermediate regime), the change in environment has a significant effect on the receptor distribution, with some receptor types becoming more abundant, others becoming less abundant, and yet others not changing much between the environments (see Figure 4d). This mimics the kinds of complex effects seen in experiments in mammals (Schwob et al., 1992; Santoro and Dulac, 2012; Zhao et al., 2013; Dias and Ressler, 2014; Cadiou et al., 2014; Ibarra-Soria et al., 2017).
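
A sketch of how such a pair of environments can be constructed is shown below; the lognormal variances and the size of the added variance are illustrative placeholders rather than the exact values of Appendix 4, and `boost_odorants` is a hypothetical helper name.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 110

# background: random correlations combined with lognormal variances (cf. Figure 4a)
F = rng.standard_normal((N, N))
C = F @ F.T / N
corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))       # correlation matrix
std = np.sqrt(rng.lognormal(mean=0.0, sigma=1.0, size=N))  # lognormal variances
Gamma0 = corr * np.outer(std, std)

def boost_odorants(Gamma, odorants, extra_variance):
    """Return a copy of Gamma with a large variance added to a subset of odorants."""
    G = Gamma.copy()
    G[odorants, odorants] += extra_variance
    return G

idx = rng.permutation(N)
env1 = boost_odorants(Gamma0, idx[:10], extra_variance=60.0)    # environment 1
env2 = boost_odorants(Gamma0, idx[10:20], extra_variance=60.0)  # environment 2
```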

Effect of changing environment on the optimal receptor distribution.

(a) An example of an environment with a random odor covariance matrix with a tunable amount of cross-correlation (details in Appendix 4). The variances are drawn from a lognormal distribution. (b) Close-ups showing some differences between the two environments used to generate results in (c and d). The two covariance matrices are obtained by adding a large variance to two different sets of 10 odorants (out of 110) in the matrix from (a). The altered odorants are identified by yellow crosses; their variances go above the color scale on the plots by a factor of more than 60. (c) Change in receptor distribution when going from environment 1 to environment 2, in conditions where the total number of receptor neurons Ktot is large (in this case, Ktot=40 000), and thus the SNR is high. The blue diamonds on the left correspond to the optimal OSN fractions per receptor type in the first environment, while the orange diamonds on the right correspond to the second environment. In this high-SNR regime, the effect of the environment is small, because in both environments the optimal receptor distribution is close to uniform. (d) When the total number of neurons Ktot is small (Ktot=100 here) and the SNR is low, changing the environment can have a dramatic effect on optimal receptor abundances, with some receptors that are almost vanishing in one setting becoming highly abundant in the other, and vice versa.

https://doi.org/10.7554/eLife.39279.006

Changing odor identities has more extreme effects on receptor distributions than changing concentrations

In the comparison above, the two environment covariance matrices differed by a large amount for a small number of odors. We next compared environments with two different randomly generated covariance matrices, each generated in the same way as the background environment in Figure 4a. The resulting covariance matrices (Figure 5a, top) are very different in detail (the correlation coefficient between their entries is close to zero; distribution of changes in Figure 5b, red line), although they look similar by eye. Despite the large change in the detailed structure of the olfactory environment, the corresponding change in optimal receptor distribution is typically small, with a small fraction of receptor types experiencing large changes in abundance (red curve in Figure 5c). The average abundance of each receptor in these simulations was about 1000, and about 90% of all the abundance change values |ΔKi| were below 20% of this, which is the range shown on the plot in Figure 5c. Larger changes also occurred, but very rarely: about 0.1% of the abundance changes were over 800.

The effect of a change in environmental statistics on the optimal receptor distribution as a function of overlap in the odor content of the two environments, and the tuning properties of the olfactory receptors.

(a) Random environment covariance matrices used in our simulations (red entries reflect positive [co-]variance; blue entries reflect negative values). The environments on the top span a similar set of odors, while those on the bottom contain largely non-overlapping sets of odors. (b) The distribution of changes in the elements of the environment covariance matrices between the two environments is wider (i.e. the changes tend to be larger) in the generic case than in the non-overlapping case shown in panel (a). The histograms in solid red and blue are obtained by pooling the 500 samples of pairs of environment matrices from each group. The plot also shows, in lighter colors, the histograms for each individual pair. (c) Probability distribution functions of changes in optimal OSN abundances in the 500 samples of either generic or non-overlapping environment pairs. These are obtained using receptor affinity data from the fly (Hallem and Carlson, 2006) with a total number of neurons $K_{\mathrm{tot}} = 25\,000$. The non-overlapping scenario has an increased occurrence of both large changes in the OSN abundances, and small changes (the spike near the y-axis). The x-axis is cropped for clarity; the maximal values for the abundance changes $|\Delta K_i|$ are around 1000 in both cases. (d) Effect of tuning width on the change in OSN abundances. Here two random environment matrices obtained as in the ‘generic’ case from panels (a–c) were kept fixed, while 50 random sensing matrices with 24 receptors and 110 odorants were generated. The tuning width for each receptor, measuring the fraction of odorants that produce a significant activation of that receptor (see Appendix 1), was chosen uniformly between 0.2 and 0.8. The receptors from all the 50 trials were pooled together, sorted by their tuning width, and split into three tuning bins. Each dot represents a particular receptor in the simulations, with the vertical position indicating the amount of change in abundance $\Delta K$. The horizontal locations of the dots were randomly chosen to avoid too many overlaps; the horizontal jitter added to each point was chosen to be proportional to the probability of the observed change $\Delta K$ within its bin. This probability was determined by a kernel density estimate. The boxes show the median and interquartile range for each bin. The abundances that do not change at all ($\Delta K = 0$) are typically ones that are predicted to have zero abundance in both environments, $K_i = K_i' = 0$.

https://doi.org/10.7554/eLife.39279.007

If we instead engineer two environments that are almost non-overlapping so that each odorant is either common in environment 1, or in environment 2, but not in both (Figure 5a, bottom; see Appendix 4 for how this was done), the changes in optimal receptor abundances between environments shift away from mid-range values towards higher values (blue curve in Figure 5c). For instance, 40% of abundance changes lie in the range |ΔK|>50 in the non-overlapping case, while the proportion is 28% in the generic case.

It seems intuitive that animals that experience very different kinds of odors should have more striking differences in their receptor repertoires than those that merely experience the same odors with different frequencies. Intriguingly, however, our simulations suggest that the situation may be reversed at the very low end: the fraction of receptors for which the predicted abundance change is below 0.1, |ΔK|<0.1, is about 2% in the generic case but over 9% for non-overlapping environment pairs. Thus, changing between non-overlapping environments emphasizes the more extreme changes in receptor abundances, either the ones that are close to zero or the ones that are large. In contrast, a generic change in the environment leads to a more uniform distribution of abundance changes. Put differently, the particular way in which the environment changes, and not only the magnitude of the change, can affect the receptor distribution in unexpected ways.

The magnitude of the effect of environmental changes on the optimal olfactory receptor distribution is partly controlled by the tuning of the olfactory receptors (Figure 5d). If receptors are narrowly tuned, with each type responding to a small number of odorants, changes in the environment tend to have more drastic effects on the receptor distribution than when the receptors are broadly tuned (Figure 5d), an effect that could be experimentally tested.

Model predictions qualitatively match experiments

Our study opens the exciting possibility of a causal test of the hypothesis of efficient coding in sensory systems, where a perturbation in the odor environment could lead to predictable adaptations of the olfactory receptor distribution during the lifetime of an individual. This does not happen in insects, but it can happen in mammals, since their receptor neurons regularly undergo apoptosis and are replaced.

A recent study demonstrated reproducible changes in olfactory receptor distributions of the sort that we predict in mice (Ibarra-Soria et al., 2017). These authors raised two groups of mice in similar conditions, exposing one group to a mixture of four odorants (acetophenone, eugenol, heptanal, and R-carvone) either continuously or intermittently (by adding the mixture to their water supply). Continuous exposure to the odorants had no effect on the receptor distribution, in agreement with the predictions of our model. In contrast, intermittent exposure did lead to systematic changes (Figure 6a).

Qualitative comparison between experiment and theory.

(a) Panel reproduced from raw data in Ibarra-Soria et al. (2017), showing the log-ratio between receptor abundances in the mouse epithelium in the test environment (where four odorants were added to the water supply) and those in the control environment, plotted against values in control conditions (on a log scale). The error bars show standard deviation across six individuals. Compared to Figure 5B in Ibarra-Soria et al. (2017), this plot does not use a Bayesian estimation technique that shrinks ratios of abundances of rare receptors toward 1 (personal communication with Professor Darren Logan, June 2017). (b) A similar plot produced in our model using mouse and human receptor response curves (Saito et al., 2009). The error bars show the range of variation found in the optimal receptor distribution when slightly perturbing the two environments (see the text). The simulation includes 59 receptor types for which response curves were measured (Saito et al., 2009), compared to 1115 receptor types assayed in Ibarra-Soria et al. (2017). Our simulations used Ktot=2000 total OSNs.

https://doi.org/10.7554/eLife.39279.008

We used our model to run an experiment similar to that of Ibarra-Soria et al. (2017) in silico (Figure 6b). Using a sensing matrix based on odor response curves for mouse and human receptors (data for 59 receptors from Saito et al. (2009)), we calculated the predicted change in OSN abundances between two different environments with random covariance matrices constructed as described above. We ran the simulations 24 times, modifying the odor environments each time by adding a small amount of Gaussian random noise to the square roots of these covariance matrices to model small perturbations (details in Appendix 4; range bars in Figure 6b). The results show that the abundances of already numerous receptors do not change much, while there is more change for less numerous receptors. The frequencies of rare receptors can change dramatically, but are also more sensitive to perturbations of the environment (large range bars in Figure 6b).
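
The environmental perturbations behind the range bars in Figure 6b can be sketched as follows. The noise scale `eps`, the stand-in covariance matrix, and the helper name `perturb_environment` are assumptions of this sketch; only the overall procedure (adding Gaussian noise to the matrix square root) follows the description above.

```python
import numpy as np
from scipy.linalg import sqrtm

def perturb_environment(Gamma, eps, rng):
    """Add Gaussian noise to the matrix square root of Gamma and square the
    result again, so that the perturbed matrix is still a valid covariance."""
    L = np.real(sqrtm(Gamma))
    L_noisy = L + eps * rng.standard_normal(Gamma.shape)
    return L_noisy @ L_noisy.T

rng = np.random.default_rng(5)
N = 63                                   # odorants in the mammalian panel (Saito et al., 2009)
F = rng.standard_normal((N, 20))
Gamma = F @ F.T / 20 + 0.1 * np.eye(N)   # stand-in for one of the two environments

# 24 slightly perturbed versions of the environment, as in the in silico experiment
perturbed = [perturb_environment(Gamma, eps=0.05, rng=rng) for _ in range(24)]
```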

These results qualitatively match experiment (Figure 6a), where we see the same pattern of the largest reproducible changes occurring for receptors with intermediate abundances. The experimental data is based on receptor abundance measured by RNAseq which is a proxy for counting OSN numbers (Ibarra-Soria et al., 2017). In our model, the distinction between receptor numbers and OSN numbers is immaterial because a change in the number of receptors expressed per neuron has the same effect as a change in neuron numbers. In general, additional experiments are needed to measure both the number of receptors per neuron and the number of neurons for each receptor type.

A framework for a quantitative test

Given detailed information regarding the affinities of olfactory receptors, the statistics of the odor environment, and the size of the olfactory epithelium (through the total number of neurons Ktot), our model makes fully quantitative predictions for the abundances of each OSN type. Existing experiments (e.g. Ibarra-Soria et al., 2017) do not record necessary details regarding the odor environment of the control group and the magnitude of the perturbation experienced by the exposed group. However, such data can be collected using available experimental techniques. Anticipating future experiments, we provide a Matlab (RRID:SCR_001622) script on GitHub (RRID:SCR_002630) to calculate predicted OSN numbers from our model given experimentally-measured sensing parameters and environment covariance matrix elements (https://github.com/ttesileanu/OlfactoryReceptorDistribution).

Given the huge number of possible odorants (Yu et al., 2015), the sensing matrix of affinities between all receptor types in a species and all environmentally relevant odorants is difficult to measure. One might worry that this poses a challenge for our modeling framework. One approach might be to use low-dimensional representations of olfactory space (e.g. Koulakov et al., 2011; Snitz et al., 2013), but there is not yet a consensus on the sufficiency of such representations. For now, we can ask how the predictions of our model change upon subsampling: if we only know the responses of a subset of receptors to a subset of odorants, can we still accurately predict the OSN numbers for the receptor types that we do have data for? Figure 7a and b show that such partial data do lead to robust statistical predictions of overall receptor abundances.
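
The subsampling test can be sketched as follows: solve the full problem, drop a fraction of the receptor types, re-optimize, and correlate the two sets of predictions for the receptors that were kept. Whether and how $K_{\mathrm{tot}}$ should be rescaled after subsampling is a modeling choice that this sketch leaves fixed, and the sensing matrix and environment are again random stand-ins.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import pearsonr

def optimal_abundances(Q, sigma2, K_tot):
    """Compact version of the Equation (7) solver sketched earlier."""
    M = Q.shape[0]
    info = lambda K: 0.5 * np.linalg.slogdet(np.eye(M) + (K / sigma2)[:, None] * Q)[1]
    res = minimize(lambda K: -info(K), np.full(M, K_tot / M), method="SLSQP",
                   bounds=[(0.0, K_tot)] * M,
                   constraints=[{"type": "eq", "fun": lambda K: K.sum() - K_tot}])
    return res.x

rng = np.random.default_rng(6)
M, N, K_tot = 24, 110, 2000.0
S = rng.lognormal(-1.0, 1.0, size=(M, N))
sigma2 = np.full(M, 0.25)
F = rng.standard_normal((N, 30))
Gamma = F @ F.T / 30 + 0.1 * np.eye(N)

K_full = optimal_abundances(S @ Gamma @ S.T, sigma2, K_tot)

# remove 30% of the receptor types and re-optimize the remaining ones
keep = np.sort(rng.permutation(M)[: int(0.7 * M)])
K_sub = optimal_abundances(S[keep] @ Gamma @ S[keep].T, sigma2[keep], K_tot)

corr, _ = pearsonr(K_full[keep], K_sub)
print(f"correlation between full and subsampled predictions: {corr:.2f}")
```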

Robustness of optimal receptor distribution to subsampling of odorants and receptor types.

Robustness in the prediction is measured as the Pearson correlation between the predicted OSN numbers with complete information, and after subsampling. (a) Robustness of OSN abundances as a function of the fraction of receptors removed from the sensing matrix. Given a full sensing matrix (in this case a 24 × 110 matrix based on Drosophila data (Hallem and Carlson, 2006)), the abundances of a subset of OSN types were calculated in two ways. First, the optimization problem from Equation (7) was solved including all the OSN types and an environment with a random covariance matrix (see Figure 5). Then a second optimization problem was run in which a fraction of the OSN types were removed. The optimal neuron counts $K_i'$ obtained using the second method were then compared (using the Pearson correlation coefficient) against the corresponding numbers $K_i$ from the full optimization. The shaded area in the plot shows the range between the 20th and 80th percentiles for the correlation values obtained in 10 trials, while the red curve is the mean. A new subset of receptors to be removed and a new environment covariance matrix were generated for each sample. (b) Robustness of OSN abundances as a function of the fraction of odorants removed from the environment, calculated similarly to panel (a), except now a certain fraction of odorants was removed from the environment covariance matrix, and from the corresponding columns of the sensing matrix.

https://doi.org/10.7554/eLife.39279.009

First steps toward a dynamical model in mammals

We have explored the structure of olfactory receptor distributions that code odors efficiently, that is, are adapted to maximize the amount of information that the brain gets about odors. The full solution to the optimization problem, Equation (7), depends in a complicated nonlinear way on the receptor affinities $S$ and the covariance of odorant concentrations $\Gamma$. The distribution of olfactory receptors in the mammalian epithelium, however, must arise dynamically from the pattern of apoptosis and neurogenesis (Calof et al., 1996). At a qualitative level, in the efficient coding paradigm that we propose, the receptor distribution is related to the statistics of natural odors, so that the life cycle of neurons would have to depend dynamically on olfactory experience. Such modulation of OSN lifetime by exposure to odors has been observed experimentally (Santoro and Dulac, 2012; Zhao et al., 2013) and could, for example, be mediated by feedback from the bulb (Schwob et al., 1992).

To obtain a dynamical model, we started with a gradient ascent algorithm for changing receptor numbers, and modified it slightly to impose the constraints that OSN numbers are non-negative, $K_a \geq 0$, and that their sum $K_{\mathrm{tot}} = \sum_a K_a$ is bounded (details in Appendix 5). This gives

(9) $\frac{dK_a}{dt} = \alpha \left\{ K_a - \lambda K_a^2 - \sigma_a^2 (R^{-1})_{aa} K_a^2 \right\}$,

where $\alpha$ is a learning rate, $\sigma_a^2$ is the noise variance for receptor type $a$, and $R$ is the covariance matrix of glomerular responses,

(10) $R_{ab} = \langle r_a r_b \rangle - \langle r_a \rangle \langle r_b \rangle$,

with the angle brackets denoting ensemble averaging over both odors and receptor noise. In the absence of the experience-related term $(R^{-1})_{aa}$, the dynamics from Equation (9) would be simply logistic growth: the population of OSNs of type $a$ would initially grow at a rate $\alpha$, but would saturate when $K_a = 1/\lambda$ because of the population-dependent death rate $\lambda K_a$. In other words, the quantity $M/\lambda$ sets the asymptotic value for the total population of sensory neurons, $K_{\mathrm{tot}} \approx M/\lambda$, with $M$ being the number of receptor types.

Because of the last term in Equation (9), the death rate in our model is influenced by olfactory experience in a receptor-dependent way. In contrast, the birth rate is not experience-dependent and is the same for all OSN types. Indeed, in experiments, the odor environment is seen to have little effect on receptor choice, but does modulate the rate of apoptosis in the olfactory epithelium (Santoro and Dulac, 2012). Our results suggest that, if olfactory sensory neuron lifetimes are appropriately anti-correlated with the inverse response covariance matrix, then the receptor distribution in the epithelium can converge to achieve optimal information transfer to the brain.
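
A sketch of integrating Equation (9) is shown below. The response covariance is computed from the linear model of Equation (2) as $R_{ab} = K_a K_b Q_{ab} + \delta_{ab} K_a \sigma_a^2$ (an expression implied by the model, stated here as this sketch's assumption), and the learning rate, death-rate parameter, step size, positivity floor, and the random overlap matrix are illustrative placeholders rather than the values used for Figure 8.

```python
import numpy as np

def response_covariance(K, Q, sigma2):
    """Covariance of glomerular responses under Equation (2):
    R_ab = K_a K_b Q_ab + delta_ab K_a sigma_a^2 (assumption of this sketch)."""
    return np.outer(K, K) * Q + np.diag(K * sigma2)

def simulate_dynamics(Q, sigma2, alpha=0.02, lam=0.01, dt=1.0, steps=20000, seed=0):
    """Forward-Euler integration of Equation (9)."""
    rng = np.random.default_rng(seed)
    M = Q.shape[0]
    K = rng.uniform(1.0, 10.0, size=M)              # random initial OSN counts
    for _ in range(steps):
        R = response_covariance(K, Q, sigma2)
        Rinv_diag = np.diag(np.linalg.inv(R))
        dK = alpha * (K - lam * K**2 - sigma2 * Rinv_diag * K**2)
        K = np.maximum(K + dt * dK, 1e-3)           # keep abundances positive, R well conditioned
    return K

# toy usage: the total abundance converges to roughly M / lam, spread non-uniformly
rng = np.random.default_rng(7)
M, N = 24, 110
S = rng.lognormal(-2.0, 0.5, size=(M, N))           # stand-in sensing matrix
F = rng.standard_normal((N, 30))
Gamma = F @ F.T / 30 + 0.1 * np.eye(N)
K_final = simulate_dynamics(S @ Gamma @ S.T, np.full(M, 0.25))
```

Receptor types whose abundance drifts down to the small positive floor play the role of receptors predicted to have essentially zero abundance.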

The elements of the response covariance matrix $R_{ab}$ could be estimated by temporal averaging of co-occurring glomerular activations via lateral connections between glomeruli (Mori et al., 1999). Computing the matrix inverse needed in our model is more intricate. The computations could perhaps be done by circuits in the bulb and then fed back to the epithelium through known mechanisms (Schwob et al., 1992).

Within our model, Figure 8a shows an example of receptor numbers converging to the optimum from random initial values. The sensing matrix used here is based on mammalian data (Saito et al., 2009) and we set the total OSN number to $K_{\mathrm{tot}} = 2000$. The environment covariance matrix is generated using the random procedure described earlier (details in Appendix 4). We see that some receptor types take longer than others to converge (the time axis is logarithmic, which helps visualize the whole range of convergence behaviors). Roughly speaking, convergence is slower when the final OSN abundance is small, which is related to the fact that the rate of change $dK_a/dt$ in Equation (9) vanishes in the limit $K_a \to 0$. For the same reason, OSN populations that start at a very low level also take a long time to converge.

Convergence in our dynamical model.

(a) Example convergence curves in our dynamical model showing how the optimal receptor distribution (orange diamonds) is reached from a random initial distribution of receptors. Note that the time axis is logarithmic. (b) Convergence curves when starting close to the optimal distribution from one environment (blue diamonds) but optimizing for another. A small, random deviation from the optimal receptor abundance in the initial environment was added (see text).

https://doi.org/10.7554/eLife.39279.010

In Figure 8b, we show convergence to the same final state, but this time starting from a distribution that is not random but was optimized for a different environment. The initial and final environments are the same as the two environments used in the previous section to compare the simulations to experimental findings (Figure 6b). Interestingly, many receptor types actually take longer to converge in this case compared to the random starting point, perhaps because there are local optima in the landscape of receptor distributions. Given such local optima, stochastic fluctuations will allow the dynamics to reach the global optimum more easily. In realistic situations, there are many sources of such variability, for example, sampling noise due to the fact that the response covariance matrix R must be estimated through stochastic odor encounters and noisy receptor readings. In fact, in Figure 8b, we added a small amount of noise (corresponding to ±0.05 K_tot/M) to the initial distribution of receptors to improve convergence rates.

Discussion

We built a model for the distribution of receptor types in the olfactory epithelium that is based on efficient coding, and assumes that the abundances of different receptor types are adapted to the statistics of natural odors in a way that maximizes the amount of information conveyed to the brain by glomerular responses. This model predicts a non-uniform distribution of receptor types in the olfactory epithelium, as well as reproducible changes in the receptor distribution after perturbations to the odor environment. In contrast to other applications of efficient coding, our model operates in a regime in which there are significant correlations between sensors because the adaptation of OSN abundances occurs upstream of the brain circuitry that can decorrelate olfactory responses. In this regime, OSN abundances depend on the full correlation structure of the inputs, leading to predictions that are context-dependent in the sense that whether the abundance of a specific receptor type goes up or down due to a shift in the environment depends on the global context of the responses of all the other receptors. All these striking phenomena have been observed in recent experiments and had not been explained prior to this study.

In our framework, the sensitivity of the receptor distribution to changes in odor statistics is affected by the tuning of the olfactory receptors, with narrowly tuned receptors being more readily affected by such changes than broadly tuned ones. The model also predicts that environments that differ in the identity of the odors that are present will lead to greater deviations in the optimal receptor distribution than environments that differ only in the statistics with which these odors are encountered. Likewise, the model broadly predicts a monotonic relationship between the number of receptor types found in the epithelium and the total number of olfactory sensory neurons, all else being equal.

A detailed test of our model requires more comprehensive measurements of olfactory environments than are currently available. Our hope is that studies such as ours will spur interest in measuring the natural statistics of odors, opening the door for a variety of theoretical advances in olfaction, similar to what was done for vision and audition. Such measurements could for instance be performed by using mass spectrometry to measure the chemical composition of typical odor scenes. Given such data, and a library of receptor affinities, our online repository on GitHub (RRID:SCR_002630) provides an easy-to-use script that uses our model to predict OSN abundances. For mammals, controlled changes in environments similar to those in Ibarra-Soria et al. (2017) could provide an even more stringent test for our framework.

To our knowledge, this is the first time that efficient coding ideas have been used to explain the pattern of usage of receptors in the olfactory epithelium. Our work can be extended in several ways. OSNs can respond to odor mixtures in complex, nonlinear ways. Accurate models for how neurons in the olfactory epithelium respond to complex mixtures of odorants are just starting to be developed (e.g. Singh et al., 2018), and these can in principle be incorporated into an information-maximization procedure similar to ours. More realistic descriptions of natural odor environments can also be added, as they amount to changing the environmental distribution P(𝐜). For example, the distribution of odorants could be modeled using a Gaussian mixture, rather than the normal distribution used in this paper to enable analytic calculations. Each Gaussian in the mixture would model a different odor object in the environment, more closely approximating the sparse nature of olfactory scenes discussed in, for example, Krishnamurthy et al. (2017).

Of course, the goal of the olfactory system is not simply to encode odors in a way that is optimal for decoding the concentrations of volatile molecules in the environment, but rather to provide an encoding that is most useful for guiding future behavior. This means that the value of different odors might be an important component shaping the neural circuits of the olfactory system. In applications of efficient coding to vision and audition, maximizing mutual information, as we did, has proved effective even in the absence of a treatment of value (Laughlin, 1981; Atick and Redlich, 1990; van Hateren, 1992a; Olshausen and Field, 1996; Simoncelli and Olshausen, 2001; Fairhall et al., 2001; Lewicki, 2002; Ratliff et al., 2010; Garrigan et al., 2010; Tkacik et al., 2010; Hermundstad et al., 2014; Palmer et al., 2015; Salisbury and Palmer, 2016). However, in general, understanding the role of value in shaping neural circuits is an important experimental and theoretical problem. To extend our model in this direction, we would replace the mutual information between odorant concentrations and glomerular responses by a different function that takes into account value assignments (see, e.g. Rivoire and Leibler, 2011). It could be argued, though, that such specialization to the most behaviorally relevant stimuli might be unnecessary or even counterproductive close to the sensory periphery. Indeed, a highly specialized olfactory system might be better at reacting to known stimuli, but would be vulnerable to adversarial attacks in which other organisms take advantage of blind spots in coverage. Because of this, and because precise information regarding how different animals assign value to different odors is scarce, we leave these considerations for future work.

One exciting possibility suggested by our model is a way to perform a first causal test of the efficient coding hypothesis for sensory coding. Given sufficiently detailed information regarding receptor affinities and natural odor statistics, experiments could be designed that perturb the environment in specified ways, and then measure the change in olfactory receptor distributions. Comparing the results to the changes predicted by our theory would provide a strong test of efficient coding by early sensory systems in the brain.

Materials and methods

Software and data

Request a detailed protocol

The code (written in Matlab, RRID:SCR_001622) and data that we used to generate all the results and figures in the paper is available on GitHub (RRID:SCR_002630), at https://github.com/ttesileanu/OlfactoryReceptorDistribution (Teşileanu, 2019; copy archived at https://github.com/elifesciences-publications/OlfactoryReceptorDistribution).

Appendix 1

Choice of sensing matrices and receptor noise variances

We used three types of sensing matrices in this study. Two were based on experimental data, one using fly receptors (Hallem and Carlson, 2006), and one using mouse and human receptors (Saito et al., 2009); and another type of sensing matrix was based on randomly-generated receptor affinity profiles. These can all be either directly downloaded from our repository on GitHub (RRID:SCR_002630), https://github.com/ttesileanu/OlfactoryReceptorDistribution, or generated using the code available there.

Fly sensing matrix

Some of our simulations used a sensing matrix based on Drosophila receptor affinities, as measured by Hallem and Carlson (Hallem and Carlson, 2006). This includes the responses of 24 of the 60 receptor types in the fly against a panel of 110 odorants, measured using single-unit electrophysiology in a mutant antennal neuron. We used the values from Table S1 in (Hallem and Carlson, 2006) for the sensing matrix elements. To estimate receptor noise, we used the standard deviation measured for the background firing rates for each receptor (data obtained from the authors). The fly data has the advantage of being more complete than equivalent datasets in mammals.

Mammalian sensing matrix

When comparing our model to experimental findings from (Ibarra-Soria et al., 2017), we used a sensing matrix based on mouse and human receptor affinity data from (Saito et al., 2009). This was measured using heterologous expression of olfactory genes, testing a total of 219 mouse and 245 human receptor types against 93 different odorants. However, only 49 mouse receptors and 10 human receptors exhibited detectable responses against any of the odorants, while only 63 odorants activated any receptors. Of the remaining 59 × 63 = 3717 receptor–odorant pairs, only 335 (about 9%) showed a response; these were assayed at 11 different concentration points. In this paper, we used the values obtained for the highest concentration (3 mM).

Random sensing matrices

Appendix 1—figure 1
Heat maps of the types of sensing matrices used in our study.

The color scaling is arbitrary, with red representing positive values and blue negative values. ‘Fly’ and ‘mammal’ are the sensing matrices based on Drosophila receptor affinities (Hallem and Carlson, 2006), and mouse and human affinities (Saito et al., 2009), respectively. ‘Fly scrambled’ and ‘mammal scrambled’ are permutations of the ‘fly’ and ‘mammal’ matrices in which elements are arbitrarily scrambled. ‘Tuning’, ‘gaussian’, ‘binary’, and ‘signed’ are random sensing matrices generated as described in the Random sensing matrices section.

https://doi.org/10.7554/eLife.39279.013

The random sensing matrices used in the main text (and referred to as ‘tuning’ in some of the figures in this Appendix) were generated as follows. We started by treating the column (i.e. odorant) index as a one-dimensional odor coordinate with periodic boundary conditions. We normalized the index to a coordinate x running from 0 to 1. For each receptor, we then chose a center x_0 along this line, corresponding to the odorant to which the receptor has maximum affinity, and a standard deviation σ, corresponding to the tuning width of the receptor. Note that both x_0 and σ are allowed to be real numbers, so that the maximum affinity can occur at a position that does not correspond to any particular odorant from the sensing matrix.

To obtain a bell-like response profile for the receptors while preserving the periodicity of the odor coordinate we chose, we defined the response affinity to odorant x by

(11) \phi(x) = \exp\left[ -\frac{1}{2} \left( \frac{2 \sin \pi (x - x_0)}{\sigma} \right)^{2} \right].

This expression can be obtained by imagining odorant space as a circle embedded in a two-dimensional plane, with odorant x mapped to an angle θ=2πx on this circle, and considering a Gaussian response profile in this two-dimensional embedding space. This is simply a convenient choice for treating odor space in a way that eliminates artifacts at the edges of the sensing matrix, and we do not assign any significance to the particular coordinate system that we used.

The centers x_0 for the Gaussian profiles for each of the receptors were chosen uniformly at random, and the tuning width σ was either a fixed parameter for the entire sensing matrix, or was uniformly sampled from an interval. Before using the matrices we randomly shuffled the columns to remove the dependencies between neighboring odorants, and finally added a small amount of random Gaussian noise (zero mean and standard deviation 1/200). The overall scale of the sensing matrices was set by multiplying all the affinities by 100, which yielded values comparable to the measured firing rates in fly olfactory neurons (Hallem and Carlson, 2006).
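For concreteness, a short Python sketch of this construction follows (our own illustration; the tuning-width range and random seed are arbitrary choices, and the authors' published implementation is the Matlab code in their repository).

```python
import numpy as np

def tuning_sensing_matrix(n_receptors, n_odorants, sigma_range=(0.05, 0.2),
                          noise_sd=1 / 200, scale=100.0, seed=None):
    """Random 'tuning'-type sensing matrix with periodic Gaussian profiles, Equation (11)."""
    rng = np.random.default_rng(seed)
    x = np.arange(n_odorants) / n_odorants              # odorant coordinate in [0, 1)
    S = np.empty((n_receptors, n_odorants))
    for a in range(n_receptors):
        x0 = rng.uniform(0.0, 1.0)                      # preferred odorant (real-valued)
        sigma = rng.uniform(*sigma_range)               # tuning width
        S[a] = np.exp(-0.5 * (2.0 * np.sin(np.pi * (x - x0)) / sigma) ** 2)
    S = S[:, rng.permutation(n_odorants)]               # shuffle columns
    S += rng.normal(0.0, noise_sd, size=S.shape)        # small additive Gaussian noise
    return scale * S                                    # scale to firing-rate-like values

S = tuning_sensing_matrix(24, 110, seed=0)
print(S.shape)
```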

For the robustness results below we also generated random matrices in additional ways: (1) ‘gaussian’: drawing the affinities from a Gaussian distribution (with zero mean and standard deviation 2), (2) ‘bernoulli’: drawing from a Bernoulli distribution (with elements equal to 5 with probability 30%, and 0 with probability 70%), (3) ‘signed’: drawing from a Bernoulli distribution followed by choosing the sign (so that elements are 5 with probability 15%, –5 with probability 15%, and 0 with probability 70%); and (4, 5) ‘fly scrambled’ and ‘mammal scrambled’: scrambling the elements in the fly and mammalian datasets (across both odorants and receptors).

Robustness of results to changing the sensing matrix

Our qualitative results are robust across a variety of different choices for the sensing matrix (Appendix 1—figure 1). For instance, the optimal number of receptor types expressed in a fraction of the OSN population larger than 1% grows monotonically with the total number of neurons (Appendix 1—figure 2). Similarly, the general effect that environment change has on optimal OSN numbers, with less abundant receptor types changing more than more abundant ones, is generic across different choices of sensing matrices (Appendix 1—figure 3).

Appendix 1—figure 2
Effect of sensing matrix on the dependence between the number of receptor types expressed in the optimal distribution and the total number of OSNs.

The labels refer to the sensing matrices from Appendix 1—figure 1.

https://doi.org/10.7554/eLife.39279.014
Appendix 1—figure 3
Different choices of sensing matrix lead to similar behavior of optimal receptor distribution under environment change.

The labels refer to the sensing matrices from Appendix 1—figure 1, whose scales were adjusted to ensure that the simulations are in a low SNR regime. The blue (orange) diamonds on the left (right) side of each plot represent the optimal OSN abundances in environment 1 (environment 2). The two environment covariance matrices are obtained by starting with a background randomly-generated covariance matrix (as described below) and adding a large amount of variance to two different sets of 10 odorants (out of 110 for most sensing matrices, and 63 for the ‘mammal’ and ‘mammal scrambled’ ones).

https://doi.org/10.7554/eLife.39279.015
https://doi.org/10.7554/eLife.39279.012

Appendix 2

Mathematical derivations

Deriving the expression for the mutual information

In the main text we assume a Gaussian distribution for odorant concentrations and approximate receptor responses as linear with additive Gaussian noise, Equation (2). Thus it follows that the marginal distribution of receptor responses is also Gaussian. Taking averages of the responses, ⟨r_a⟩, and of products of responses, ⟨r_a r_b⟩, over both the noise distribution and the odorant distribution, and using Equation (2) from the main text, we get a normal distribution of responses:

(12) \mathbf{r} \sim \mathcal{N}(\mathbf{r}_0, R),

where the mean response vector 𝐫0 and the response covariance matrix R are given by

(13) \mathbf{r}_0 = K S \mathbf{c}_0, \qquad R = \left[ \Sigma + K Q \right] K,

where S is the sensing matrix, K is a diagonal matrix of OSN abundances, and Σ is the covariance matrix of receptor noises, Σ = diag(σ_1^2, …, σ_M^2) (also see the main text). Here, as in Equation (1) in the main text, 𝐜_0 is the mean concentration vector, Γ is the covariance matrix of odorant concentrations, and we use the overlap matrix from Equation (5) in the main text, Q = SΓS^T. Note that in the absence of noise (Σ = 0), the response matrix is simply the overlap matrix Q modulated by the number of OSNs of each type, R_noiseless = KQK.

The joint probability distribution over responses and concentrations, P(𝐫, 𝐜), is itself Gaussian. To calculate the corresponding covariance matrix, we need the covariances between responses, ⟨r_a r_b⟩ − ⟨r_a⟩⟨r_b⟩, which are just the elements of the response matrix R from Equation (13) above; and between concentrations, ⟨c_i c_j⟩ − ⟨c_i⟩⟨c_j⟩, which are the elements of the environment covariance matrix Γ, Equation (1) in the main text. In addition, we need the covariances between responses and concentrations, ⟨r_a c_i⟩ − ⟨r_a⟩⟨c_i⟩, which can be calculated using Equation (2) from the main text. We get:

(14) (\mathbf{r}, \mathbf{c}) \sim \mathcal{N}\big( (\mathbf{r}_0, \mathbf{c}_0), \Lambda \big),

with

(15) \Lambda = \begin{pmatrix} R & K S \Gamma \\ \Gamma S^T K & \Gamma \end{pmatrix}.

The mutual information between responses and odors is then given by (see below for a derivation):

(16) I(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \log \frac{\det \Gamma \, \det R}{\det \Lambda}.

From Equation (13) we have

(17) \det R = \det(\Sigma + K Q) \, \det K,

and from Equation (15),

(18) \det \Lambda = \det \begin{pmatrix} R & K S \Gamma \\ \Gamma S^T K & \Gamma \end{pmatrix} = \det \Gamma \, \det\!\left( R - K S \Gamma \, \Gamma^{-1} \, \Gamma S^T K \right) = \det \Gamma \, \det\!\left( \Sigma K + K Q K - K S \Gamma S^T K \right) = \det \Gamma \, \det \Sigma \, \det K,

where we used Equation (13) again, and employed Schur’s determinant identity (derived below). Thus,

(19) I(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \log \frac{\det \Gamma \, \det(\Sigma + K Q) \, \det K}{\det \Gamma \, \det \Sigma \, \det K} = \frac{1}{2} \log \det\!\left( I + \Sigma^{-1} K Q \right).

This recovers the result quoted in the main text, Equation (4).

By using the fact that the diagonal matrices K and Σ^{-1} commute, we can also write:

(20) I(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \log \det\!\left( \Sigma^{-1/2} \Sigma^{1/2} + \Sigma^{-1} K Q \right) = \frac{1}{2} \log \det\!\left[ \Sigma^{-1/2} \left( \Sigma^{1/2} + \Sigma^{-1/2} K Q \right) \right] = \frac{1}{2} \log \det\!\left[ \left( \Sigma^{1/2} + \Sigma^{-1/2} K Q \right) \Sigma^{-1/2} \right] = \frac{1}{2} \log \det\!\left( I + K \Sigma^{-1/2} Q \Sigma^{-1/2} \right) = \frac{1}{2} \log \det\!\left( I + K \tilde{Q} \right).

This shows that the mutual information can be written in terms of a symmetric ‘SNR matrix’ Q̃ = Σ^{-1/2} Q Σ^{-1/2}. This is simply the covariance matrix of responses in which each response is normalized by the noise standard deviation of the corresponding receptor.
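Equation (20) translates directly into a few lines of code. The sketch below (our own illustration, with placeholder toy inputs) computes I(𝐫, 𝐜) in nats from a sensing matrix, an environment covariance matrix, a vector of OSN abundances, and the receptor noise variances.

```python
import numpy as np

def mutual_information(K, S, Gamma, sigma2):
    """I(r; c) = 1/2 log det(I + K Qtilde), Equation (20), in nats."""
    Q = S @ Gamma @ S.T                                  # overlap matrix Q = S Gamma S^T
    Qtilde = Q / np.sqrt(np.outer(sigma2, sigma2))       # SNR matrix Sigma^{-1/2} Q Sigma^{-1/2}
    _, logdet = np.linalg.slogdet(np.eye(len(K)) + np.diag(K) @ Qtilde)
    return 0.5 * logdet

# toy usage with placeholder inputs
rng = np.random.default_rng(1)
M, N = 5, 20
S = rng.normal(size=(M, N))
Gamma = np.cov(rng.normal(size=(N, 10 * N)))
print(mutual_information(np.full(M, 10.0), S, Gamma, np.ones(M)))
```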

Schur’s determinant identity

The identity for the determinant of a 2 × 2 block matrix that we used in Equation (18) above can be derived in the following way. First, note that

(21) \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} I & B \\ 0 & D \end{pmatrix} \begin{pmatrix} A - B D^{-1} C & 0 \\ D^{-1} C & I \end{pmatrix}.

Now, from the definition of the determinant it can be seen that

(22) \det \begin{pmatrix} A & B \\ 0 & I \end{pmatrix} = \det \begin{pmatrix} A & 0 \\ C & I \end{pmatrix} = \det A,

since all the products involving elements from the off-diagonal blocks must necessarily also involve elements from the 0 matrix. Thus, taking the determinant of Equation (21), we get the desired identity

(23) \det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det D \, \det\!\left( A - B D^{-1} C \right).
Mutual information for Gaussian distributions

The expression from Equation (16) for the mutual information I(𝐫,𝐜) can be derived by starting with the fact that I is equal to the Kullback-Leibler (KL) divergence from the joint distribution P(𝐫,𝐜) to the product distribution P(𝐫)P(𝐜). As a first step, let us calculate the KL divergence between two multivariate normals in n dimensions:

(24) D = D_{\text{KL}}(p \,\|\, q) = \int p(\mathbf{x}) \log \frac{p(\mathbf{x})}{q(\mathbf{x})} \, d\mathbf{x},

where

(25) p(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n \det A}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_A)^T A^{-1} (\mathbf{x} - \boldsymbol{\mu}_A) \right], \qquad q(\mathbf{x}) = \frac{1}{\sqrt{(2\pi)^n \det B}} \exp\left[ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_B)^T B^{-1} (\mathbf{x} - \boldsymbol{\mu}_B) \right].

Plugging the distribution functions into the logarithm, we have

(26) D = \frac{1}{2} \log \frac{\det B}{\det A} + \frac{1}{2} \int p(\mathbf{x}) \left[ (\mathbf{x} - \boldsymbol{\mu}_B)^T B^{-1} (\mathbf{x} - \boldsymbol{\mu}_B) - (\mathbf{x} - \boldsymbol{\mu}_A)^T A^{-1} (\mathbf{x} - \boldsymbol{\mu}_A) \right] d\mathbf{x},

where the normalization property of p(𝐱) was used. Using also the definition of the mean and of the covariance matrix, we have

(27a) \int p(\mathbf{x}) \, x_i \, d\mathbf{x} = \mu_{A,i}, \qquad (27b) \int p(\mathbf{x}) \, (x_i - \mu_{A,i})(x_j - \mu_{A,j}) \, d\mathbf{x} = A_{ij},

which implies

(28) \int p(\mathbf{x}) \, (\mathbf{x} - \boldsymbol{\mu})^T C^{-1} (\mathbf{x} - \boldsymbol{\mu}) \, d\mathbf{x} = \operatorname{Tr}(A C^{-1}) + (\boldsymbol{\mu}_A - \boldsymbol{\mu})^T C^{-1} (\boldsymbol{\mu}_A - \boldsymbol{\mu})

for any vector μ and matrix C. Plugging this into Equation (26), we get

(29) D = \frac{1}{2} \log \frac{\det B}{\det A} + \frac{1}{2} \left[ \operatorname{Tr}(A B^{-1}) - n \right] + \frac{1}{2} (\boldsymbol{\mu}_A - \boldsymbol{\mu}_B)^T B^{-1} (\boldsymbol{\mu}_A - \boldsymbol{\mu}_B).

We can now return to calculating the KL divergence from P(𝐫,𝐜) to P(𝐫)P(𝐜). Note that, since P(𝐫) and P(𝐜) are just the marginals of the joint distribution, the means of the variables are the same in the joint and in the product, so that the last term in the KL divergence vanishes. The covariance matrix for the product distribution is

(30) \Lambda_{\text{prod}} = \begin{pmatrix} R & 0 \\ 0 & \Gamma \end{pmatrix},

so the product inside the trace becomes

(31) \Lambda \Lambda_{\text{prod}}^{-1} = \begin{pmatrix} R & * \\ * & \Gamma \end{pmatrix} \begin{pmatrix} R^{-1} & 0 \\ 0 & \Gamma^{-1} \end{pmatrix} = \begin{pmatrix} I & * \\ * & I \end{pmatrix},

where the entries replaced by ‘∗’ need not be calculated because they drop out when the trace is taken. The sum of the dimensions of R and Γ is equal to the dimension, n, of Λ, so that the term involving the trace from Equation (29) also drops out, leaving us with the final result:

(32) I = D_{\text{KL}}\big( P(\mathbf{r}, \mathbf{c}) \,\|\, P(\mathbf{r}) P(\mathbf{c}) \big) = \frac{1}{2} \log \frac{\det R \, \det \Gamma}{\det \Lambda},

which is the same as Equation (16) that was used in the previous section.

Deriving the KKT conditions for the information optimum

In order to find the optimal distribution of olfactory receptors, we must maximize the mutual information from Equation (4) in the main text, subject to constraints. Let us first calculate the gradient of the mutual information with respect to the receptor numbers:

(33) \frac{\partial I}{\partial K_a} = \frac{1}{2} \frac{\partial}{\partial K_a} \log \det\!\left( I + K \tilde{Q} \right) = \frac{1}{2} \frac{\partial}{\partial K_a} \operatorname{Tr} \log\!\left( I + K \tilde{Q} \right).

The cyclic property of the trace allows us to use the usual rules to differentiate under the trace operator, so we get

(34) \frac{\partial I}{\partial K_a} = \frac{1}{2} \operatorname{Tr}\!\left[ \frac{\partial K}{\partial K_a} \left( \tilde{Q}^{-1} + K \right)^{-1} \right] = \frac{1}{2} \sum_{b,c} \frac{\partial (K_b \delta_{bc})}{\partial K_a} \left( \tilde{Q}^{-1} + K \right)^{-1}_{ca} = \frac{1}{2} \left( \tilde{Q}^{-1} + K \right)^{-1}_{aa}.

We now have to address the constraints. We have two kinds of constraints: an equality constraint that sets the total number of neurons, Σ_a K_a = K_tot; and inequality constraints that ensure that all receptor abundances are non-negative, K_a ≥ 0. This can be done using the Karush-Kuhn-Tucker (KKT) conditions, which require the introduction of Lagrange multipliers: λ for the equality constraint, and μ_a for the inequality constraints. At the optimum, we must have:

(35) \frac{\partial I}{\partial K_a} = \frac{\lambda}{2} \frac{\partial}{\partial K_a} \left( \sum_b K_b - K_{\text{tot}} \right) - \sum_b \mu_b \frac{\partial K_b}{\partial K_a} = \frac{\lambda}{2} - \mu_a,

where the Lagrange multipliers for the inequality constraints, μ_a, must be non-negative, and must vanish unless the inequality is saturated:

(36) \mu_a \geq 0, \qquad \mu_a K_a = 0.

Put differently, if K_a > 0, then μ_a = 0 and ∂I/∂K_a = λ/2; while if K_a = 0, then ∂I/∂K_a = λ/2 − μ_a ≤ λ/2. Combined with Equation (34), this yields

(37) \begin{cases} \left( \tilde{Q}^{-1} + K \right)^{-1}_{aa} = \lambda, & \text{if } K_a > 0, \text{ or} \\ \left( \tilde{Q}^{-1} + K \right)^{-1}_{aa} < \lambda, & \text{if } K_a = 0. \end{cases}

The magnitude of λ is set by imposing the normalization condition Σ_a K_a = K_tot.
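The full constrained optimum can also be found numerically. One simple approach, sketched below, is projected gradient ascent using the gradient from Equation (34) together with a Euclidean projection onto the constraint set {K_a ≥ 0, Σ_a K_a = K_tot}. This is our own illustrative implementation, not necessarily the optimization method used in the paper; the default step size and iteration count are crude choices that may need tuning.

```python
import numpy as np

def project_to_simplex(v, total):
    """Euclidean projection onto {K : K >= 0, sum(K) = total}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - total))[0][-1]
    theta = (css[rho] - total) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def optimal_receptor_distribution(S, Gamma, sigma2, Ktot, n_steps=2000, lr=None):
    """Maximize I = 1/2 log det(I + K Qtilde) over the simplex sum(K) = Ktot, K >= 0."""
    M = S.shape[0]
    Q = S @ Gamma @ S.T
    Qtilde = Q / np.sqrt(np.outer(sigma2, sigma2))
    lr = Ktot if lr is None else lr
    K = np.full(M, Ktot / M)                   # start from the uniform distribution
    for _ in range(n_steps):
        # gradient, Equation (34): dI/dK_a = 1/2 [Qtilde (I + K Qtilde)^{-1}]_aa
        grad = 0.5 * np.diag(Qtilde @ np.linalg.inv(np.eye(M) + np.diag(K) @ Qtilde))
        K = project_to_simplex(K + lr * grad, Ktot)
    return K
```

Because log det(I + KQ̃) is concave in the abundances, this projected ascent converges to the optimum described by Equation (37) for a sufficiently small step size.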

The many-neuron approximation

Suppose we are in the regime in which the total number of neurons is large, and in particular, each of the abundances K_a is large. Then we can perform an expansion of the expression appearing in the KKT conditions from Equation (37):

(38) \left( \tilde{Q}^{-1} + K \right)^{-1} = K^{-1} \left( I + \tilde{Q}^{-1} K^{-1} \right)^{-1} \approx K^{-1} \left( I - \tilde{Q}^{-1} K^{-1} \right),

whose aa component is

(39) \left( \tilde{Q}^{-1} + K \right)^{-1}_{aa} \approx \frac{1}{K_a} \left[ 1 - \frac{\tilde{Q}^{-1}_{aa}}{K_a} \right] = \frac{1}{K_a} \left[ 1 - \frac{\sigma_a^2 \, Q^{-1}_{aa}}{K_a} \right],

where we used Q̃ = Σ^{-1/2} Q Σ^{-1/2}, which implies Q̃^{-1} = Σ^{1/2} Q^{-1} Σ^{1/2}. With the notation

(40) A = Q^{-1},

we can plug into Equation (37) and get

(41) \lambda \approx \frac{1}{K_a} - \frac{\sigma_a^2 A_{aa}}{K_a^2}.

This quadratic equation has only one large solution, and it is given approximately by

(42) K_a \approx \frac{1}{\lambda} - \sigma_a^2 A_{aa}.

Combined with the normalization constraint, Σ_a K_a = K_tot, this recovers Equation (8) from the main text.
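Fixing 1/λ from the normalization constraint and substituting back into Equation (42) gives a closed-form approximation that is easy to evaluate. The sketch below is our own illustration; it is only meaningful when all the resulting abundances come out large and positive, as assumed in the expansion above.

```python
import numpy as np

def many_neuron_approximation(S, Gamma, sigma2, Ktot):
    """K_a ~ 1/lambda - sigma_a^2 A_aa with A = Q^{-1}, Equation (42),
    where 1/lambda is fixed by the constraint sum_a K_a = Ktot."""
    Q = S @ Gamma @ S.T
    A_diag = np.diag(np.linalg.inv(Q))           # diagonal of the inverse overlap matrix
    inv_lambda = (Ktot + np.sum(sigma2 * A_diag)) / len(A_diag)
    return inv_lambda - sigma2 * A_diag          # valid only if all entries are large and positive
```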

Optimal distribution for uncorrelated responses

When the overlap matrix Q = SΓS^T is diagonal, the optimization problem simplifies considerably. By plugging Q = diag(Q_aa) into Equation (4) in the main text, we find

(43) I(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \log \det\!\left( I + \Sigma^{-1} K Q \right) = \frac{1}{2} \log \det \operatorname{diag}\!\left( 1 + K_a Q_{aa} / \sigma_a^2 \right) = \frac{1}{2} \sum_a \log\!\left( 1 + \frac{K_a Q_{aa}}{\sigma_a^2} \right).

We can again use the KKT approach and add Lagrange multipliers λ and μ_a for enforcing the equality and inequality constraints, respectively,

(44) \bar{I} = \frac{1}{2} \sum_a \log\!\left( 1 + \frac{K_a Q_{aa}}{\sigma_a^2} \right) - \lambda \sum_a K_a - \sum_a \mu_a K_a,

and take derivatives with respect to K_a to find the optimum,

(45) 0 = \frac{\partial \bar{I}}{\partial K_a} = \frac{1}{2} \, \frac{1}{K_a + \sigma_a^2 / Q_{aa}} - \lambda - \mu_a,

with the condition that μ_a ≥ 0 and either μ_a or K_a must vanish, μ_a K_a = 0. This leads to

(46) K_a = \max\!\left( 0, \; \frac{1}{2\lambda} - \frac{\sigma_a^2}{Q_{aa}} \right),

showing that receptor abundances grow monotonically with Q_aa/σ_a^2. This explains the correlation between OSN abundances K_a and receptor signal-to-noise ratios Q_aa/σ_a^2 when the responses are uncorrelated or weakly correlated.
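Equation (46) can be solved with a water-filling-style procedure: raise the common ‘water level’ 1/(2λ) until the abundances sum to K_tot. A minimal sketch (our own illustration, using simple bisection):

```python
import numpy as np

def optimal_abundances_uncorrelated(Q_diag, sigma2, Ktot, n_iter=100):
    """Solve Equation (46): K_a = max(0, 1/(2 lambda) - sigma_a^2/Q_aa),
    with the 'water level' 1/(2 lambda) set by bisection so that sum(K) = Ktot."""
    thresholds = sigma2 / Q_diag                  # per-receptor floors sigma_a^2 / Q_aa
    lo, hi = 0.0, Ktot + thresholds.max()         # brackets for the water level
    for _ in range(n_iter):
        level = 0.5 * (lo + hi)
        if np.maximum(0.0, level - thresholds).sum() > Ktot:
            hi = level
        else:
            lo = level
    return np.maximum(0.0, 0.5 * (lo + hi) - thresholds)
```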

First receptor type to be activated

When there is only one active receptor, K_x = K_tot and K_a = 0 for a ≠ x, the KKT conditions from Equation (37) are automatically satisfied. The receptor that is activated first can be found in this case by calculating the information I(𝐫, 𝐜) using Equation (4) from the main text while assuming an arbitrary index x for the active receptor, and then finding the x = x* that yields the maximum value. Without loss of generality, we can permute the receptor indices such that x = 1. Using Equation (19) and setting K_1 = K_tot, we have:

(47) I_1(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \operatorname{Tr} \log\!\left( I + K \Sigma^{-1} Q \right) = \frac{1}{2} \log \det\!\left( I + K \Sigma^{-1} Q \right) = \frac{1}{2} \log \begin{vmatrix} 1 + K_{\text{tot}} Q_{11}/\sigma_1^2 & K_{\text{tot}} Q_{12}/\sigma_1^2 & \cdots & K_{\text{tot}} Q_{1M}/\sigma_1^2 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{vmatrix} = \frac{1}{2} \log\!\left( 1 + \frac{K_{\text{tot}} Q_{11}}{\sigma_1^2} \right).

Thus, in general, the information when only receptor type x is activated is given by

(48) I_x(\mathbf{r}, \mathbf{c}) = \frac{1}{2} \log\!\left( 1 + \frac{K_{\text{tot}} Q_{xx}}{\sigma_x^2} \right),

which implies that the information is maximized when x is the receptor with the highest ratio between the corresponding diagonal element of the overlap matrix, Q_xx, and the noise variance in that channel, σ_x^2; that is, the receptor that maximizes the signal-to-noise ratio:

(49) x^* = \arg\max_x \frac{Q_{xx}}{\sigma_x^2} = \arg\max_x \tilde{Q}_{xx} \equiv \arg\max_x \mathrm{SNR}_x.

Another way to think of this result is by employing the usual expression for the capacity of a single Gaussian channel, and then finding the channel that maximizes this capacity.

Invariance of mutual information under invertible and differentiable transformations

Consider the mutual information between two variables 𝐫 ∈ ℝ^M and 𝐜 ∈ ℝ^N:

(50) I(\mathbf{r}, \mathbf{c}) = \int d^M r \, d^N c \; P(\mathbf{r}, \mathbf{c}) \log\!\left[ \frac{P(\mathbf{r} | \mathbf{c})}{P(\mathbf{r})} \right].

Let us now define two different variables that depend on 𝐫 and 𝐜 in an invertible and continuously-differentiable (but in general nonlinear) way,

(51) \mathbf{y} = \mathbf{y}(\mathbf{r}), \qquad \mathbf{x} = \mathbf{x}(\mathbf{c}).

The joint probability distribution for the new variables is related to the joint distribution of the original variables through the Jacobian determinants,

(52) P(\mathbf{y}, \mathbf{x}) = P(\mathbf{r}, \mathbf{c}) \, \det J_r \, \det J_c,

where

(53) J_r = \begin{pmatrix} \dfrac{\partial r_1}{\partial y_1} & \cdots & \dfrac{\partial r_1}{\partial y_M} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial r_M}{\partial y_1} & \cdots & \dfrac{\partial r_M}{\partial y_M} \end{pmatrix}, \qquad J_c = \begin{pmatrix} \dfrac{\partial c_1}{\partial x_1} & \cdots & \dfrac{\partial c_1}{\partial x_N} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial c_N}{\partial x_1} & \cdots & \dfrac{\partial c_N}{\partial x_N} \end{pmatrix}.

For the marginals, we have

(54) P(\mathbf{y}) = \int d^N x \, P(\mathbf{y}, \mathbf{x}) = \int d^N c \, \frac{1}{\det J_c} \, P(\mathbf{r}, \mathbf{c}) \, \det J_r \, \det J_c = P(\mathbf{r}) \det J_r, \qquad P(\mathbf{x}) = \int d^M y \, P(\mathbf{y}, \mathbf{x}) = \int d^M r \, \frac{1}{\det J_r} \, P(\mathbf{r}, \mathbf{c}) \, \det J_r \, \det J_c = P(\mathbf{c}) \det J_c,

where we used the standard substitution formula for multiple integrals. We can now calculate the mutual information between the new variables:

(55) I(\mathbf{y}, \mathbf{x}) = \int d^M y \, d^N x \; P(\mathbf{y}, \mathbf{x}) \log\!\left[ \frac{P(\mathbf{y} | \mathbf{x})}{P(\mathbf{y})} \right] = \int d^M y \, d^N x \; P(\mathbf{y}, \mathbf{x}) \log\!\left[ \frac{P(\mathbf{y}, \mathbf{x})}{P(\mathbf{y}) P(\mathbf{x})} \right] = \int d^M r \, d^N c \; \frac{1}{\det J_r} \frac{1}{\det J_c} \, P(\mathbf{r}, \mathbf{c}) \det J_r \det J_c \, \log\!\left[ \frac{P(\mathbf{r}, \mathbf{c}) \det J_r \det J_c}{P(\mathbf{r}) \det J_r \, P(\mathbf{c}) \det J_c} \right] = \int d^M r \, d^N c \; P(\mathbf{r}, \mathbf{c}) \log\!\left[ \frac{P(\mathbf{r}, \mathbf{c})}{P(\mathbf{r}) P(\mathbf{c})} \right] \equiv I(\mathbf{r}, \mathbf{c}).

Thus, invertible and continuously-differentiable transformations of either the response variables 𝐫 or the concentration variables 𝐜 in our model leave the mutual information unchanged.

Multiple glomeruli with the same affinity profile

In mammals, the axons from neurons expressing a given receptor type can project to anywhere from 2 to 16 different glomeruli. Here we show that in our setup, information transfer only depends on the total number of neurons of a given type, and not on the number of glomeruli to which they project.

The key observation is that mutual information, Equation (3) in the main text, is unchanged when the responses and/or concentrations are modified by invertible transformations (see previous section). In particular, linear transformations of the responses do not affect the information values. Suppose that we have a case in which two receptors p and q have identical affinities, so that S_pi = S_qi for all odorants i. We can then form linear combinations of the corresponding glomerular responses,

(56) r_+ = r_p + r_q = (K_p + K_q) \sum_i S_{pi} c_i + \eta_p \sqrt{K_p} + \eta_q \sqrt{K_q}, \qquad r_- = K_q r_p - K_p r_q = \eta_p K_q \sqrt{K_p} - \eta_q K_p \sqrt{K_q},

and consider a transformation that replaces (r_p, r_q) with (r_+, r_−). Since r_− is pure noise, that is, it does not depend on the concentration vector 𝐜 in any way, it has no effect on the mutual information.

We have thus shown that the amount of information that M receptor types contain about the environment when two of the receptors have identical affinity profiles is the same as if there were only M − 1 receptor types. The two redundant receptors can be replaced by a single one with an abundance equal to the sum of the abundances of the two original receptors. The sum of two Gaussian variables with the same mean is Gaussian itself and has a variance equal to the sum of the variances of the two variables, meaning that the noise term η_+ in the r_+ response has variance (K_p σ_p^2 + K_q σ_q^2)/(K_p + K_q).

https://doi.org/10.7554/eLife.39279.016

Appendix 3

A nonlinear response example

Estimating the mutual information numerically

Consider an extension of our model in which the responses depend in a nonlinear way on concentrations, but are still subject to pure Gaussian noise:

(57) \bar{r}_a = f_a(\mathbf{c}) + \frac{1}{\sqrt{K_a}} \eta_a, \qquad \eta_a \sim \mathcal{N}(0, \sigma_a^2).

Note that here we are calculating the average OSN response r̄_a = r_a/K_a, while in the main text we used the total response r_a. As far as mutual information calculations are concerned, the difference between r̄_a and r_a does not matter, as they are related by an invertible transformation.

Unless the functions fa are linear, a closed-form solution for the mutual information between concentrations and responses cannot be found. It is thus necessary to calculate the mutual information integral numerically. We can still do part of the calculation analytically, though:

(58) I(\bar{\mathbf{r}}, \mathbf{c}) = \int d^M \bar{r} \, d^N c \; P(\bar{\mathbf{r}}, \mathbf{c}) \log \frac{P(\bar{\mathbf{r}} | \mathbf{c})}{P(\bar{\mathbf{r}})} = -\int d^M \bar{r} \; P(\bar{\mathbf{r}}) \log P(\bar{\mathbf{r}}) + \int d^N c \; P(\mathbf{c}) \int d^M \bar{r} \; P(\bar{\mathbf{r}} | \mathbf{c}) \log P(\bar{\mathbf{r}} | \mathbf{c}).

In our case, P(r̄|𝐜) is a multivariate Gaussian distribution whose covariance matrix is ΣK^{-1} and does not depend on the concentrations. This means that the 𝐜 integral in the second term can be performed independently of the r̄ integral, in which case it drops out of the calculation, as it is equal to 1. The r̄ integral is simply the negative entropy of a multivariate Gaussian distribution, and is thus equal to

(59) \int d^M \bar{r} \; P(\bar{\mathbf{r}} | \mathbf{c}) \log P(\bar{\mathbf{r}} | \mathbf{c}) = -\frac{1}{2} \log \det\!\left( \Sigma K^{-1} \right) - \frac{M}{2} \log 2 \pi e = -\frac{1}{2} \sum_a \log\!\left( \frac{2 \pi e \, \sigma_a^2}{K_a} \right).

The first term in Equation (58) is the entropy of the responses, which needs to be calculated numerically. We use a histogram method, in which we split the space of possible responses along each dimension into bins of equal size Δ. We then estimate the probability in each bin. If i_1 … i_M indexes the bins, we can then think of the response distribution as a discrete PDF P_{i_1…i_M}, and we can estimate the entropy using

(60) H(\bar{\mathbf{r}}) = -\int d^M \bar{r} \; P(\bar{\mathbf{r}}) \log P(\bar{\mathbf{r}}) \approx -\sum_{i_1 \ldots i_M} P_{i_1 \ldots i_M} \log \frac{P_{i_1 \ldots i_M}}{\Delta^M}.

In this approach, the challenge remains to estimate the PDF of the responses,

(61) P(\bar{\mathbf{r}}) = \int d^N c \; P(\mathbf{c}) P(\bar{\mathbf{r}} | \mathbf{c}) = \frac{1}{\sqrt{(2\pi)^M \det\!\left( \Sigma K^{-1} \right)}} \int d^N c \; P(\mathbf{c}) \exp\!\left[ -\frac{1}{2} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}) \right)^T K \Sigma^{-1} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}) \right) \right],

where 𝐟 is the vector of response functions, 𝐟 = (f_1, …, f_M). We do this using a sampling technique based on the law of large numbers. Given n sample concentration vectors 𝐜_i drawn from the probability distribution P(𝐜), we have

(62) P(\bar{\mathbf{r}}) = \mathbb{E}_{P(\mathbf{c})}\!\left\{ \frac{1}{\sqrt{(2\pi)^M \det\!\left( \Sigma K^{-1} \right)}} \exp\!\left[ -\frac{1}{2} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}) \right)^T K \Sigma^{-1} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}) \right) \right] \right\} \approx \frac{1}{n} \sum_i \frac{1}{\sqrt{(2\pi)^M \det\!\left( \Sigma K^{-1} \right)}} \exp\!\left[ -\frac{1}{2} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}_i) \right)^T K \Sigma^{-1} \left( \bar{\mathbf{r}} - \mathbf{f}(\mathbf{c}_i) \right) \right],

where 𝔼_{P(𝐜)}{·} denotes the expected value under the distribution of concentrations. We use this formula to estimate the histogram elements P_{i_1…i_M} and then use Equation (60) to estimate the response entropy H(r̄). We then plug H(r̄) and Equation (59) into Equation (58) to find the mutual information. Note that we have not assumed anything about the natural distribution of odor concentrations, P(𝐜), so that we are not restricted to Gaussian environments with this method.
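The following Python sketch (our own illustration; function and argument names are ours) puts Equations (58)-(62) together: it evaluates the sampled density of Equation (62) on a grid of bin centers, estimates the response entropy as in Equation (60), and subtracts the Gaussian conditional entropy of Equation (59). It assumes independent Gaussian noise with variances σ_a^2/K_a, as in Equation (57).

```python
import numpy as np
from itertools import product

def estimate_mutual_information(f, K, sigma2, conc_samples, bins, lo, hi):
    """Histogram estimate of I(rbar; c) for rbar = f(c) + noise / sqrt(K), Equations (58)-(62)."""
    M = len(K)
    noise_var = sigma2 / K                               # noise variance of each average response
    edges = np.linspace(lo, hi, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    delta = centers[1] - centers[0]

    means = np.array([f(c) for c in conc_samples])       # f(c_i) for each sampled concentration

    # density P(rbar) evaluated at the bin centers, Equation (62)
    grid = np.array(list(product(centers, repeat=M)))    # bins^M response vectors
    log_norm = -0.5 * np.sum(np.log(2 * np.pi * noise_var))
    density = np.zeros(len(grid))
    for mu in means:
        density += np.exp(log_norm - 0.5 * np.sum((grid - mu) ** 2 / noise_var, axis=1))
    density /= len(means)

    # response entropy, Equation (60): -sum P log(P / Delta^M) with P = density * Delta^M
    mass = density * delta ** M
    H_r = -np.sum(mass[mass > 0] * np.log(density[density > 0]))
    # conditional (noise) entropy, Equation (59)
    H_r_given_c = 0.5 * np.sum(np.log(2 * np.pi * np.e * noise_var))
    return H_r - H_r_given_c
```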

Competitive binding model

The way in which olfactory neurons respond to arbitrary mixtures of odorants is not completely understood. However, simple kinetic models in which different odorant molecules compete for the same receptor binding site have been shown to capture much of the observed behavior (Singh et al., 2018). In such models, the activation of an OSN of type a in response to a set of odorants with concentrations ci is given by

(63) r_a = \frac{\sum_i e_{ai} \, c_i / \mathrm{EC50}_{ai}}{1 + \sum_i c_i / \mathrm{EC50}_{ai}},

where EC50_{ai} is the concentration of odorant i at which the response of an OSN of type a reaches half its maximum, and e_{ai} is the maximum response elicited by odorant i in an OSN of type a.
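As an example of a response function f that can be plugged into the estimator sketched above, here is the competitive binding model of Equation (63) in vectorized form (our own illustration; E and EC50 are assumed to be M × N arrays of efficacies and half-maximal concentrations).

```python
import numpy as np

def competitive_binding_response(c, E, EC50):
    """Mean responses of M receptor types to a mixture c of N odorants, Equation (63)."""
    occupancy = c / EC50                         # c_i / EC50_ai, shape (M, N)
    return (E * occupancy).sum(axis=1) / (1.0 + occupancy.sum(axis=1))
```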

Results from a toy problem

The computation time of the method outlined above for calculating mutual information grows exponentially with the dimensionality M of the response space. Additionally, it grows linearly with the number n of samples drawn from the odor distribution, which in turn needs to grow exponentially with the number N of odorants we are considering in order to sample concentration space sufficiently well. For this reason, large-scale simulations involving this method are infeasible.

Thus we focused on a simple example with M = 3 receptors and N = 15 odorants. We used an arbitrary subset of elements from the fly sensing matrix and a pair of randomly-generated non-overlapping environments (Appendix 3—figure 1) to first calculate the optimal receptor distribution using the linear method described in the main text (Appendix 3—figure 2, top). We chose the scale of the environment covariance matrices to get a variability in the responses of around 1, large enough to enter the nonlinear regime when using the nonlinear response function (described below). We then set the total neuron population to K_tot = 200, which put us in an intermediate SNR regime in which all the receptor types were used in the optimal distribution, but their abundances were different (Appendix 3—figure 2, top).

Appendix 3—figure 1
Sensing matrix and environment covariance matrices used in our toy problem involving a non-linear response function.
https://doi.org/10.7554/eLife.39279.018
Appendix 3—figure 2
Comparing results from the linear model in the main text to results based on a nonlinear response function.

The top row shows the optimal receptor distribution obtained using the linear model for a system with three receptor types and 15 odorants. The middle row shows how the estimated mutual information varies with OSN abundances in a nonlinear model based on a competitive binding response function. The bottom row shows the optimal receptor distribution from the nonlinear model, obtained by finding the cells in the middle row in which the information is maximized.

https://doi.org/10.7554/eLife.39279.019

In the linear approximation, we found that receptor 1 is under-represented in environment 1, while in environment 2 receptor 3 has very low abundance. We wanted to see how much this result is affected by a nonlinear response function. We used a competitive binding model as described above in which the matrix of EC50 values was taken equal to the sensing matrix used in the linear case, and the efficacies e_{ai} were all set to 1:

(64) r_a = \frac{\sum_i S_{ai} c_i}{1 + \sum_i S_{ai} c_i} + \frac{1}{\sqrt{K_a}} \eta_a.

To calculate the mutual information between responses and concentrations for a fixed choice of neuron abundances K_a, we used the procedure outlined above with 20 bins between –0.75 and 1.5 for each of the response dimensions. We sampled n = 10^4 concentration vectors to build the response histogram. We calculated the information values in both environments at a 10 × 10 grid of OSN abundances (Appendix 3—figure 2, middle row), and found the cell which maximized the information. The OSN abundances at this maximum (Appendix 3—figure 2, bottom) show the same pattern of change as we found in the linear approximation, with receptors 1 and 3 exchanging places as least abundant in the OSN population.

https://doi.org/10.7554/eLife.39279.017

Appendix 4

Random environment matrices

Generating random covariance matrices

Generating plausible olfactory environments is difficult because so little is known about natural odor scenes. However, it is reasonable to expect that there will be some strong correlations. This could, for instance, be due to the fact that an animal’s odor is composed of several different odorants in fixed proportions, and thus the concentrations with which these odorants are encountered will be correlated.

The most straightforward way to generate a random covariance matrix would be to take the product of a random matrix with its transpose, Γ = MM^T. This automatically ensures that the result is positive (semi)definite. The downside of this method is that the resulting correlation matrices tend to cluster close to the identity (assuming that the entries of M are chosen i.i.d.). One way to avoid this would be to use matrices M that have fewer columns than rows, which indeed leads to non-trivial correlations in Γ. However, this only generates rank-deficient covariance matrices, which means that odorant concentrations are constrained to live on a lower-dimensional hyperplane. This is too strong a constraint from a biological standpoint.

To avoid these shortcomings, we used a different approach for generating random covariance matrices. We split the process into two parts: we first generated a random correlation matrix by the method described below, in which all the variances (i.e. the diagonal elements) were equal to 1; next we multiplied each row and corresponding column by a standard deviation drawn from a lognormal distribution.

In order to generate random correlation matrices, we used a modified form of an algorithm based on partial correlations (Lewandowski et al., 2009). The partial correlation between two variables X_i and X_j conditioned on a set of variables L is the correlation coefficient between the residuals R_i and R_j obtained by subtracting the best linear fit for X_i and X_j using all the variables in L. In other words, the partial correlation between X_i and X_j is equal to that part of the correlation coefficient that is not explained by the two variables depending on a common set of explanatory variables, L. In our case the X_i are the concentrations of different odorants in the environment, and the partial correlations in question are, for example, the correlations between any pair of odorants conditioned on the remaining ones. We want to construct the unconditioned correlation matrix between the odorant concentrations in the environment. There is an algorithm that constructs this matrix by randomly drawing the partial correlation between the first two odorants X_1 and X_2 conditioned on the rest, and then recursively reducing the size of the conditioning set while generating more random partial correlations until the unconditioned correlation values are obtained. For details, see Lewandowski et al. (2009).

The specific procedure used in Lewandowski et al. (2009) draws the partial correlation values from beta distributions with parameters that depend on the number of elements in the conditioning set L. This is done in order to ensure a uniform sampling of correlation matrices. This, however, is not ideal for our purposes because these samples again tend to cluster close to the identity matrix. A simple modification of the algorithm that provides a tunable amount of correlation is to keep the parameters of the beta distribution fixed, α = β = const (see Stack Exchange, at https://stats.stackexchange.com/q/125020). When the parameter β is large we obtain environments with little correlation structure, while small β values lead to stronger correlations between odorant concentrations. The functions implementing the generation of random environments are available on our GitHub (RRID:SCR_002630) repository at https://github.com/ttesileanu/OlfactoryReceptorDistribution (see environment/generate_random_environment.m and utils/randcorr.m).
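A compact Python sketch of this procedure is shown below (our own illustration; the published implementation is the Matlab code cited above, and the beta and lognormal parameters here are arbitrary). It uses the standard vine-type recursion to turn the randomly drawn partial correlations into an ordinary correlation matrix, then rescales by lognormal standard deviations.

```python
import numpy as np

def random_covariance(n, beta=2.0, log_sd_mean=0.0, log_sd_sigma=0.5, seed=None):
    """Random covariance: vine-type random correlation matrix (partial correlations drawn
    from a fixed Beta(beta, beta), cf. Lewandowski et al., 2009) scaled by lognormal std devs."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n, n))                         # partial correlations
    C = np.eye(n)                                # correlation matrix being built
    for k in range(n - 1):
        for i in range(k + 1, n):
            P[k, i] = 2.0 * rng.beta(beta, beta) - 1.0        # partial correlation in (-1, 1)
            rho = P[k, i]
            for l in range(k - 1, -1, -1):                    # convert partial to raw correlation
                rho = rho * np.sqrt((1 - P[l, i] ** 2) * (1 - P[l, k] ** 2)) + P[l, i] * P[l, k]
            C[k, i] = C[i, k] = rho
    stds = rng.lognormal(log_sd_mean, log_sd_sigma, size=n)   # per-odorant standard deviations
    return C * np.outer(stds, stds)

Gamma = random_covariance(110, beta=2.0, seed=0)
print(np.linalg.eigvalsh(Gamma).min() > -1e-9)   # should print True (positive semidefinite)
```

Smaller beta values produce broader partial-correlation distributions and hence stronger correlations, matching the role of β described above.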

Perturbing covariance matrices

When comparing the qualitative results from our model against experiments in which the odor environment changes (Ibarra-Soria et al., 2017), we used small perturbations of the initial and final environments to estimate error bars on receptor abundances. To generate a perturbed covariance matrix, Γ̃, from a starting matrix Γ, we first took the matrix square root: a symmetric matrix M, which obeys

(65) \Gamma = M M^T \equiv M^2.

We then perturbed M by adding normally-distributed i.i.d. values to its elements,

(66) \tilde{M}_{ij} = M_{ij} + \sigma \eta_{ij},

and recreated a covariance matrix by multiplying the perturbed square root with its transpose,

(67) \tilde{\Gamma} = \tilde{M} \tilde{M}^T.

This approach ensures that the perturbed matrix Γ̃ remains a valid covariance matrix (symmetric and positive-definite), which would not be guaranteed if the random perturbation were added directly to Γ. We chose the magnitude σ of the perturbation so that the error bars in our simulations are of comparable magnitude to those in the experiments.

We used a similar method for generating the results from Figure 3, where we needed to apply the same perturbation to two different environments. Given the environment covariance matrices Γ_k, with k ∈ {1, 2}, we took the matrix square root of each environment matrix, M_k = Γ_k^{1/2}. We then added the same perturbation P to both, M̃_k = M_k + P, and recovered covariance matrices for the perturbed environments by squaring M̃_k, Γ̃_k = M̃_k M̃_k^T. In the examples used in the main text, the perturbation P was a matrix in which only one column was non-zero. The elements in this column were chosen from a Gaussian distribution with zero mean and a standard deviation five times larger than the square root of the median element of Γ_1. This choice was arbitrary and was made to obtain a visible change in the optimal receptor abundances between the ‘control’ and ‘exposed’ environments.

Finally, we also employed this approach for generating non-overlapping environments. Given two environments Γ_1 and Γ_2 and their matrix square roots M_1 and M_2, we reduced the amount of variance in the first half of M_1's columns and in the second half of M_2's. We did this by dividing those columns by a constant factor f, which in this case we chose to be f = 4. We then used the resulting matrices M̃_k to generate covariance matrices Γ̃_k = M̃_k M̃_k^T with largely non-overlapping odors.
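The same square-root trick is easy to express in code. The sketch below is our own illustration (with scipy's sqrtm standing in for the Matlab implementation); it covers both the generic perturbation of Equations (65)-(67) and the shared single-column perturbation used for the ‘exposed’ environments.

```python
import numpy as np
from scipy.linalg import sqrtm

def perturb_covariance(Gamma, sigma_perturb, seed=None):
    """Perturb a covariance matrix through its symmetric square root, Equations (65)-(67)."""
    rng = np.random.default_rng(seed)
    M = np.real(sqrtm(Gamma))                            # symmetric matrix square root
    M_tilde = M + sigma_perturb * rng.normal(size=M.shape)
    return M_tilde @ M_tilde.T                           # symmetric and positive semidefinite

def expose_to_odorant(Gamma1, Gamma2, odorant_idx, scale, seed=None):
    """Apply the same single-column square-root perturbation to two environments."""
    rng = np.random.default_rng(seed)
    P = np.zeros_like(Gamma1)
    P[:, odorant_idx] = scale * rng.normal(size=len(Gamma1))
    out = []
    for G in (Gamma1, Gamma2):
        M_tilde = np.real(sqrtm(G)) + P
        out.append(M_tilde @ M_tilde.T)
    return out
```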

https://doi.org/10.7554/eLife.39279.020

Appendix 5

Deriving the dynamical model

To turn the maximization requirement into a dynamical model, we employ a gradient ascent argument. Given the current abundances K_a, we demand that they change in proportion to the corresponding components of the information gradient, plus a Lagrange multiplier term that imposes the constraint on the total number of neurons:

(68) \dot{K}_a = 2 \alpha \left( \frac{\partial I}{\partial K_a} - \frac{\lambda}{2} \right) = \alpha \left[ \left( \tilde{Q}^{-1} + K \right)^{-1}_{aa} - \lambda \right].

The brain does not have direct access to the overlap matrix Q, but it could measure the response covariance matrix R from Equation (13). Thus, we can write the dynamics as

(69) \dot{K}_a = \alpha \left\{ \left[ \tilde{Q} \left( I + K \tilde{Q} \right)^{-1} \right]_{aa} - \lambda \right\} = \alpha \left\{ \left[ K^{-1} \left( \Sigma^{-1/2} R K^{-1} \Sigma^{-1/2} - I \right) \Sigma^{1/2} K R^{-1} \Sigma^{1/2} \right]_{aa} - \lambda \right\} = \alpha \left\{ K_a^{-1} - \lambda - \left( \Sigma^{1/2} R^{-1} \Sigma^{1/2} \right)_{aa} \right\} = \alpha \left\{ K_a^{-1} - \lambda - \sigma_a^2 R^{-1}_{aa} \right\},

where we used the fact that Σ^{1/2} and K are diagonal and thus commute. These equations do not yet obey the non-negativity constraint on the receptor abundances. The divergence in the K_a^{-1} term would superficially appear to ensure that positive abundances stay positive, but there is a hidden quadratic divergence in the response covariance term, R^{-1}_{aa}; see Equation (13). To ensure that all constraints are satisfied while avoiding divergences, we multiply the right-hand side of Equation (69) by K_a^2, yielding

(70) \dot{K}_a = \alpha \left[ K_a - K_a^2 \left( \lambda + \sigma_a^2 R^{-1}_{aa} \right) \right],

which is the same as Equation (9) from the main text.

If we keep the Lagrange multiplier λ constant, the asymptotic value of the total number of neurons, K_tot, will depend on the statistical structure of olfactory scenes. If instead we want to enforce the constraint Σ_a K_a = K_tot for a predetermined K_tot, we can promote λ itself to a dynamical variable,

(71) \frac{d\lambda}{dt} = \beta \left[ \sum_a K_a - K_{\text{tot}} \right],

where β is another learning rate. Provided that the dynamics of λ is sufficiently slow compared to that of the neuronal populations K_a, this will tune the experience-independent component of the neuronal death rate until the total population stabilizes at K_tot.
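Putting Equations (70) and (71) together gives a simple two-timescale simulation. The sketch below is our own illustration with placeholder inputs; the key assumption is β ≪ α, so that λ drifts slowly while the populations relax.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, Ktot = 10, 30, 2000.0
S = rng.normal(size=(M, N))                      # placeholder sensing matrix
Gamma = np.cov(rng.normal(size=(N, 5 * N)))      # placeholder environment covariance
Q = S @ Gamma @ S.T
sigma2 = np.ones(M)

alpha, beta = 0.1, 1e-7                          # lambda must adapt slowly: beta << alpha
lam = M / Ktot                                   # initial guess for the Lagrange multiplier
K = Ktot / M * rng.uniform(0.5, 1.5, size=M)     # start near the target total population

for step in range(20000):
    R = (np.diag(sigma2) + K[:, None] * Q) * K[None, :]        # Equation (13)
    Rinv_diag = np.diag(np.linalg.inv(R))
    K = np.clip(K + alpha * (K - K**2 * (lam + sigma2 * Rinv_diag)), 1e-9, None)  # Eq. (70)
    lam += beta * (K.sum() - Ktot)                                                # Eq. (71)

print(round(K.sum(), 1), "vs requested", Ktot)
```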

https://doi.org/10.7554/eLife.39279.021

Appendix 6

Interpretation of diagonal elements of the inverse overlap matrix

In the main text we saw that the diagonal elements of the inverse overlap matrix, Q^{-1}_{aa}, are related to the abundances of OSNs, K_a. Specifically,

(72) K_a \approx \frac{1}{\lambda} - \sigma_a^2 Q^{-1}_{aa},

where λ is a Lagrange multiplier imposing the constraint on the total number of neurons. As noted around Equation (13) above, the overlap matrix Q is related to the response covariance matrix R: in particular, Q is equal to R when there is a single receptor of each type (K_a = 1) and there is no noise (σ_a = 0). That is, the overlap matrix measures the covariances between responses in the absence of noise. This means that its inverse, A = Q^{-1}, is effectively a so-called ‘precision matrix’. Diagonal elements of a precision matrix are inversely related to the corresponding diagonal elements of the covariance matrix (i.e. the variances), but, as we will see below, they are also monotonically related to parameters that measure how well each receptor response can be linearly predicted from all the others. Since receptor responses that either do not fluctuate much or whose values can be guessed from the responses of other receptors are not very informative, we would expect the abundances K_a to be low when the corresponding diagonal elements A_aa of the inverse overlap matrix are high, which is what we see. In the following we give a short derivation of the connection between the diagonal elements of precision matrices and linear prediction of receptor responses.

Let us work in the particular case in which there is one copy of each receptor and there is no noise, so that Q = R, that is, Q_ij = ⟨r_i r_j⟩ − ⟨r_i⟩⟨r_j⟩. Without loss of generality, we focus on calculating the first diagonal element of the inverse overlap matrix, A_11, where A = Q^{-1}. For notational convenience, we will also denote the mean-centered first response variable by y ≡ r_1 − ⟨r_1⟩, and the subsequent ones by x_a ≡ r_{a+1} − ⟨r_{a+1}⟩. Then the covariance matrix Q can be written in block form

(73) Q = \begin{pmatrix} \langle y^2 \rangle & \langle y \mathbf{x}^T \rangle \\ \langle y \mathbf{x} \rangle & M \end{pmatrix},

where M is

(74) M = \langle \mathbf{x} \mathbf{x}^T \rangle,

and 𝐱 is a column vector containing the x_a variables. Using the definition of the inverse together with Laplace’s formula for determinants, we get

(75) A_{11} = \frac{\det M}{\det Q}.

Using the Schur determinant identity (derived above) on the block form (Equation (73)) of the matrix Q,

(76) A_{11} = \frac{\det M}{\det M \, \det\!\left[ \langle y^2 \rangle - \langle y \mathbf{x}^T \rangle M^{-1} \langle y \mathbf{x} \rangle \right]} = \frac{1}{\langle y^2 \rangle - \langle y \mathbf{x}^T \rangle M^{-1} \langle y \mathbf{x} \rangle},

where we used the fact that the argument of the second determinant is a scalar.

Now, consider approximating the first response variable y by a linear function of all the others:

(77) y = \mathbf{a}^T \mathbf{x} + q,

where q is the residual. Note that we do not need an intercept term because we mean-centered our variables, ⟨y⟩ = ⟨𝐱⟩ = 0. Finding the coefficients 𝐚 that lead to the best fit (in the least-squares sense) requires minimizing the variance of the residual, and a short calculation yields

(78) \mathbf{a}^* = \arg\min_{\mathbf{a}} \langle q^2 \rangle = \arg\min_{\mathbf{a}} \left\langle \left( y - \mathbf{a}^T \mathbf{x} \right)^2 \right\rangle = M^{-1} \langle y \mathbf{x} \rangle,

where M is the same as the matrix defined in Equation (74).

The coefficient of determination, ρ^2, is defined as the ratio of the explained variance to the total variance of the variable y,

(79) \rho^2 = \frac{\left\langle \left( \mathbf{a}^{*T} \mathbf{x} \right)^2 \right\rangle}{\langle y^2 \rangle} = \frac{\mathbf{a}^{*T} \langle \mathbf{x} \mathbf{x}^T \rangle \mathbf{a}^*}{\langle y^2 \rangle} = \frac{\langle y \mathbf{x}^T \rangle M^{-1} M M^{-1} \langle y \mathbf{x} \rangle}{\langle y^2 \rangle} = \frac{\langle y \mathbf{x}^T \rangle M^{-1} \langle y \mathbf{x} \rangle}{\langle y^2 \rangle}.

Comparing this to Equation (76), we see that

(80) A_{11} = \frac{1}{\langle y^2 \rangle} \cdot \frac{1}{1 - \rho^2},

showing that the diagonal elements of the precision matrix are monotonically related to the goodness-of-fit parameter ρ^2 that indicates how well the corresponding variable can be linearly predicted from all the other variables. In addition, the inverse dependence on the variance of the response, ⟨y^2⟩, shows that variables that do not fluctuate much (low ⟨y^2⟩) lead to high diagonal values of the precision matrix. From Equation (72), we see that these variances should be considered ‘large’ or ‘small’ in comparison with the noise level σ_a in each receptor. Since receptor responses that either do not fluctuate much or whose values can be guessed from the responses of other receptors are not very informative, we should find that receptor abundances K_a are low when the corresponding diagonal elements A_aa = Q^{-1}_aa of the inverse overlap matrix are high.
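This relationship is easy to verify numerically. The short check below is our own illustration with synthetic data; it compares the first diagonal element of the precision matrix with 1/(⟨y^2⟩(1 − ρ^2)), where ρ^2 comes from a least-squares regression of the first response on all the others.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, M = 100000, 6
L = rng.normal(size=(M, M))
r = rng.normal(size=(n_samples, M)) @ L.T        # synthetic correlated 'responses'

Q = np.cov(r, rowvar=False)                      # covariance (overlap) matrix
A11 = np.linalg.inv(Q)[0, 0]                     # first diagonal element of the precision matrix

y = r[:, 0] - r[:, 0].mean()
X = r[:, 1:] - r[:, 1:].mean(axis=0)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # best linear prediction of y from the rest
rho2 = 1.0 - np.var(y - X @ coef) / np.var(y)    # coefficient of determination

print(A11, 1.0 / (np.var(y) * (1.0 - rho2)))     # the two numbers should agree closely
```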

https://doi.org/10.7554/eLife.39279.022

Data availability

All the code necessary to reproduce our results and the figures from the paper is available on GitHub, at https://github.com/ttesileanu/OlfactoryReceptorDistribution (copy archived at https://github.com/elifesciences-publications/OlfactoryReceptorDistribution). The olfactory receptor affinity data were originally published in Hallem and Carlson (2006) and Saito et al. (2009), and the olfactory receptor expression levels in mouse were originally published in Ibarra-Soria et al. (2017).


Article and author information

Author details

  1. Tiberiu Teşileanu

    1. Center for Computational Biology, Flatiron Institute, New York, United States
    2. Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, New York, United States
    3. David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
    Contribution
    Conceptualization, Software, Formal analysis, Validation, Visualization, Methodology, Writing—original draft, Writing—review and editing
    For correspondence
    ttesileanu@gmail.com
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0003-3107-3088
  2. Simona Cocco

    Laboratoire de Physique Statistique, École Normale Supérieure and CNRS UMR 8550, PSL Research, UPMC Sorbonne Université, Paris, France
    Contribution
    Conceptualization, Formal analysis, Supervision, Methodology, Writing—review and editing
    Competing interests
    No competing interests declared
  3. Rémi Monasson

    Laboratoire de Physique Théorique, École Normale Supérieure and CNRS UMR 8550, PSL Research, UPMC Sorbonne Université, Paris, France
    Contribution
    Conceptualization, Software, Formal analysis, Supervision, Methodology, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-4459-0204
  4. Vijay Balasubramanian

    1. Initiative for the Theoretical Sciences, The Graduate Center, City University of New York, New York, United States
    2. David Rittenhouse Laboratories, University of Pennsylvania, Philadelphia, United States
    Contribution
    Conceptualization, Formal analysis, Supervision, Methodology, Writing—review and editing
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0002-6497-3819

Funding

Simons Foundation (400425)

  • Vijay Balasubramanian

Aspen Center for Physics (PHY-160761)

  • Vijay Balasubramanian

Swartz Foundation

  • Tiberiu Teşileanu

National Science Foundation (PHY-1734030)

  • Tiberiu Teşileanu
  • Vijay Balasubramanian

United States - Israel Binational Science Foundation (2011058)

  • Vijay Balasubramanian

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

We thank Joel Mainland and David Zwicker for helpful discussions, and Elissa Hallem, Joel Mainland, and Darren Logan for olfactory receptor affinity data. This work was supported by a grant from the Simons Foundation/SFARI Mathematical Modeling in Living Systems program (400425, VB). VB was also supported by Aspen Center for Physics NSF grant PHY-160761 and US–Israel Binational Science Foundation grant 2011058. TT was supported by the Swartz Foundation. This work was also supported by NSF grant PHY-1734030 (Center for the Physics of Biological Function).

Version history

  1. Received: July 3, 2018
  2. Accepted: February 13, 2019
  3. Accepted Manuscript published: February 26, 2019 (version 1)
  4. Version of Record published: March 4, 2019 (version 2)

Copyright

© 2019, Teşileanu et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.
