Computational techniques developed to predict whether odorants will interact with receptors in the olfactory system have achieved a success rate of more than 70%.
The question of how we and other animals perceive the surrounding world was tackled by Aristotle more than 2300 years ago. Since then we have gained quite a good understanding of visual perception. Humans and most other animals employ a small number of different types of visual receptor, each being most sensitive to light of a specific wavelength and less sensitive to shorter or longer wavelengths (Schnapf et al., 1987). Using different receptor types, with overlapping sensitivity ranges, we can detect light with wavelengths between about 380 nm and 750 nm. Hearing is also well understood: sounds of different frequencies activate different types of sensory neurons to provide coverage over a range of frequencies (Masterton et al., 1969). However, the way that we respond to our chemical environment—that is, the way we respond to different smells and tastes—is much more complicated.
In contrast to vision and audition, olfaction has to deal with cues that are not arranged along a linear scale. The nose is exposed to several hundred thousand odorants that differ in chemical structure and in ecological relevance. One might imagine that the nose would need numerous different receptor types—each type sensitive to just a single compound—to detect and discriminate all the relevant odorants. However, as always, evolution found a smarter way. As first discovered by Richard Axel and Linda Buck in 1991—and rewarded with a Nobel Prize in 2004—animals are equipped with a relatively small, species-specific number of olfactory receptors (Buck and Axel, 1991): mice have more than 900 types, humans about 400, and the vinegar fly D. melanogaster has around 60.
Only a few, very important odorants—such as pheromones (Nakagawa et al., 2005) or the odorants given off by rotten food (Stensmyr et al., 2012)—have a one-to-one relationship with specific olfactory receptors. In general, a single receptor can detect a range of different odorants, and a single odorant can target a range of receptors, with a given odorant being identified through the pattern of receptors that it activates (Hallem and Carlson, 2006). It is thought that this so-called combinatorial olfactory code is employed by insects and also by vertebrates (Vosshall, 2000; Kauer and White, 2001). However, many of the details of the interactions between the odorant molecules and the receptors remain mysterious. Why, for example, do odorants with similar structures sometimes target different receptors, whereas other odorants with clearly different structures often target the same receptor?
Now, in eLife, Sean Boyle, Shane McInally and Anandasankar Ray of the University of California at Riverside describe a new method that can predict which odorants interact with which receptors much more accurately than previous methods (Boyle et al., 2013). During the last decade many groups have screened the sensory range of the odorant receptors of the vinegar fly, and a total of 251 different odorants are known to be able to activate at least one receptor. Although this is a tiny number compared with the number of odorants that flies are usually exposed to, Boyle, McInally and Ray were able to gain fresh insights into the receptor-odorant interactions by performing a highly detailed meta-analysis on these 251 odorants to identify the properties that cause an odorant to target a particular receptor (Figure 1). In addition to the ‘usual suspects’ of molecular properties (e.g., whether the odorant is an alcohol, an ester or an aldehyde), they took into account some 3,224 physical and/or chemical properties of the odorants, including obvious properties like molecular weight and three-dimensional structure, and less obvious properties like the ‘eigenvalue sum from electronegativity weighted distance matrix’.
This approach was pioneered by groups at Goethe University in Frankfurt (Schmuker et al., 2007) and the Weizmann Institute (Haddad et al., 2008). However, instead of analysing all the receptors and all the physical and chemical properties, the Riverside researchers used an algorithm that identified the most critical properties for each receptor. Next they screened a list of more than 240,000 odorants to find those that they expected to interact with nine different receptors. Finally, they tested these predictions in experiments: their predictions were correct more than 70% of the time, compared with a success rate of just 10% for odorants chosen at random. Hence, although odorants do not follow any linear rules like light and sound, we can still use their physical and chemical properties to predict whether an odorant interacts with a specific receptor and later, we hope, be able to understand why it interacts.
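The select-then-predict logic behind this kind of analysis can be sketched in a few lines. The sketch below is purely illustrative and is not the method of Boyle et al.: the scoring rule (difference of class means), the nearest-centroid classifier, the toy descriptor values, and all function names are invented for this example.

```python
# Hypothetical sketch: rank molecular descriptors by how well they separate
# activating from non-activating odorants for one receptor, then classify
# new odorants using only the top-ranked descriptors. All data and names
# here are invented for illustration.

def rank_descriptors(X, y, k):
    """Score each descriptor column by the absolute difference between its
    mean over active (y == 1) and inactive (y == 0) odorants, and return
    the indices of the k highest-scoring descriptors."""
    scores = []
    for j in range(len(X[0])):
        act = [row[j] for row, label in zip(X, y) if label == 1]
        ina = [row[j] for row, label in zip(X, y) if label == 0]
        scores.append(abs(sum(act) / len(act) - sum(ina) / len(ina)))
    return sorted(range(len(scores)), key=lambda j: -scores[j])[:k]

def predict(x, X, y, selected):
    """Nearest-centroid prediction restricted to the selected descriptors:
    label a new odorant active if it lies closer to the active centroid."""
    def centroid(label):
        rows = [row for row, lab in zip(X, y) if lab == label]
        return [sum(r[j] for r in rows) / len(rows) for j in selected]
    def sq_dist(point, cent):
        return sum((point[j] - c) ** 2 for j, c in zip(selected, cent))
    return 1 if sq_dist(x, centroid(1)) < sq_dist(x, centroid(0)) else 0

# Toy screen: descriptor 0 separates the classes, descriptor 1 is noise.
X = [[1.0, 5.0], [1.1, 2.0], [0.9, 7.0],   # activating odorants
     [0.0, 5.1], [0.1, 2.2], [-0.1, 6.9]]  # non-activating odorants
y = [1, 1, 1, 0, 0, 0]
selected = rank_descriptors(X, y, k=1)      # → [0]
print(predict([1.05, 9.0], X, y, selected)) # prints 1 (predicted active)
```

The point of restricting each receptor to its own most informative descriptors, rather than using all 3,224 at once, is that it both reduces noise from irrelevant properties and reveals which chemical features matter for that receptor.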
These results will be of interest beyond a narrow group of specialists. According to the United Nations Food and Agriculture Organization, insects and insect-spread diseases are responsible for an estimated 20–40% of worldwide crop production being lost every year. Furthermore, malaria and dengue fever, which are both spread by mosquitoes, kill more than 1 million people every year (and infect another 250 million). As insects typically use olfactory cues to find new hosts, a better understanding of odorant-receptor interactions promises substantial improvements for human food supply and health.