Correcting for physical distortions in visual stimuli improves reproducibility in zebrafish neuroscience
Abstract
Optical refraction causes light to bend at interfaces between optical media. This phenomenon can significantly distort visual stimuli presented to aquatic animals in water, yet refraction has often been ignored in the design and interpretation of visual neuroscience experiments. Here we provide a computational tool that transforms between projected and received stimuli in order to detect and control these distortions. The tool considers the most commonly encountered interface geometry, and we show that this and other common configurations produce stereotyped distortions. By correcting these distortions, we reduced discrepancies in the literature concerning stimuli that evoke escape behavior, and we expect this tool will help reconcile other confusing aspects of the literature. This tool also aids experimental design, and we illustrate the dangers that uncorrected stimuli pose to receptive field mapping experiments.
Main text
Breakthrough technologies for monitoring and manipulating single-neuron activity provide unprecedented opportunities for whole-brain neuroscience in larval zebrafish (Ahrens et al., 2012; Ahrens et al., 2013; Portugues et al., 2014; Prevedel et al., 2014; Vladimirov et al., 2014; Dunn et al., 2016b; Naumann et al., 2016; Kim et al., 2017; Vladimirov et al., 2018). Understanding the neural mechanisms of visually guided behavior also requires precise stimulus control, but little prior research has accounted for physical distortions that result from refraction and reflection at an air-water interface that usually separates the projected stimulus from the fish (Sajovic and Levinthal, 1983; Stowers et al., 2017; Zhang and Arrenberg, 2019). In a typical zebrafish visual neuroscience experiment, an animal in water gazes at stimuli on a screen separated from the water by a small (~500 µm) region of air (Figure 1a, top). When light traveling from the screen reaches the air-water interface, it is refracted according to Snell’s law (Hecht, 2016; Figure 1a, bottom). At flat interfaces, a common configuration used in the literature (Ahrens et al., 2012; Vladimirov et al., 2014; Dunn et al., 2016a), this refraction reduces incident light angles, thereby translating and distorting the images that reach the fish (black vs. brown arrows in Figure 1a, bottom). By solving Snell’s equations for this arena configuration (Appendix 1), we determined the apparent position of a point on the screen, $\theta$, as a function of its true position, ${\theta}^{\prime}$ (Figure 1b). Snell’s law implies that distant stimuli appear to the fish at the asymptotic value of $\theta\left({\theta}^{\prime}\right)$ (~48.6°).
This implies that the entire horizon is compressed into a 97.2° “Snell window” whose size does not depend on the distance between the fish and the interface ($d_w$) or between the screen and the interface ($d_a$), but the distance ratio $d_a/d_w$ determines the abruptness of the $\theta\left({\theta}^{\prime}\right)$ transformation. We also calculated the total light transmittance according to the Fresnel equations (Figure 1b, right). These two effects have a profound impact on visual stimuli (Figure 1c). The plastic dish that contains the water has little impact (Appendix 1). Physical distortions thus have the potential to affect fundamental conclusions drawn from studies of visual processing and visuomotor transformations.
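To make the $\theta\left({\theta}^{\prime}\right)$ mapping concrete, the flat-interface transformation and its numerical inversion can be sketched in a few lines of Python. This is a minimal illustration rather than the published tool: it assumes a bare air-water interface (no plastic dish), the pinhole-eye model used throughout, and illustrative distances $d_w$ = 3 mm and $d_a$ = 0.5 mm.

```python
import numpy as np

def apparent_angle(theta_true_deg, d_w=3.0, d_a=0.5, n_w=1.333, n_a=1.0):
    """Apparent position theta (deg) of a screen point at true position theta' (deg),
    obtained by tabulating the forward map
    tan(theta') = (d_w*tan(theta) + d_a*tan(psi_a)) / (d_w + d_a),
    where Snell's law gives sin(psi_a) = n_w*sin(theta)/n_a, and inverting it
    with a lookup table (theta' is monotonic in theta)."""
    # Apparent angles in water range from 0 up to the critical angle
    theta = np.linspace(0.0, np.arcsin(n_a / n_w), 10000, endpoint=False)
    psi_a = np.arcsin(n_w * np.sin(theta) / n_a)  # incidence angle in air
    theta_true = np.degrees(
        np.arctan((d_w * np.tan(theta) + d_a * np.tan(psi_a)) / (d_w + d_a)))
    return np.interp(theta_true_deg, theta_true, np.degrees(theta))

# Distant stimuli approach the ~48.6 deg Snell window edge, arcsin(1/1.333)
print(apparent_angle(80.0))
```

Note how a stimulus at 80° of true eccentricity already appears just inside the ~48.6° Snell window edge.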
The quantitative merits of correcting for refraction are apparent when comparing two recent studies of visually evoked escape behavior in larval zebrafish. Although Temizer et al. (2015) and Dunn et al. (2016a) both found that a critical size of looming stimuli triggered escape behavior, they reported surprisingly different values for the critical angular size (21.7°±4.9° and 72.0°±2.5°, respectively, mean ±95% CI). This naively implies that the critical stimulus of Dunn et al. occupied 9 times the solid angle of that of Temizer et al. (1.02 [+0.14,–0.11] steradians and 0.11 [+0.06,–0.04] steradians, respectively, mean [95% CI]) (Materials and methods). This large size discrepancy initially casts doubt on the notion that a stimulus size threshold triggers the escape (Hatsopoulos et al., 1995; Gabbiani et al., 1999; Fotowat and Gabbiani, 2011). However, a major difference in experimental design is that Temizer et al. showed stimuli from the front through a curved air-water interface, whereas Dunn et al. showed stimuli from below through a flat air-water interface (Figure 2a). Correcting the Dunn et al. stimuli with Snell’s law, and again quantifying the size of irregularly shaped stimuli with their solid angle, we found that the fish exhibited escape responses when the stimulus spanned just 0.24 steradians (Figure 2b, Materials and methods, Appendix 1, Figure 2—video 1). The same correction applied to Temizer et al. sets the critical size at 0.08 steradians (Figure 2b, Materials and methods, Appendix 2). This leaves a discrepancy of 0.16 steradians, which is much smaller than the original solid angle discrepancy of 0.91 steradians (Figure 2c, black). Correcting with Snell’s law thus markedly reduced this discrepancy in the literature, shrinking a 9-fold size difference down to 3-fold (Figure 2c, blue).
The small remaining difference could indicate an ethologically interesting dependence of behavior on the spatial location of the looming stimulus (Dunn et al., 2016a; Temizer et al., 2015).
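As a sanity check on the numbers above, the conversion between angular size and solid angle for a spherical cap is a one-liner. The sketch below applies to the approximately cap-shaped Temizer et al. stimuli; the Dunn et al. stimuli were not spherical caps, so their solid angle requires the polygon method described in Materials and methods.

```python
import math

def cap_solid_angle(apex_angle_deg):
    """Solid angle (steradians) of a spherical cap, A = 2*pi*(1 - cos(theta)),
    where the apex angle of the cap is 2*theta."""
    theta = math.radians(apex_angle_deg / 2.0)
    return 2.0 * math.pi * (1.0 - math.cos(theta))

# The 21.7 deg critical angular size of Temizer et al. corresponds to ~0.11 sr
print(round(cap_solid_angle(21.7), 2))  # 0.11
```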
Accounting for optical distortions will be critical for understanding other fundamental properties of the zebrafish visual system. For example, a basic property of many visual neurons is that they respond most strongly to stimuli presented in one specific region of the visual field, termed their receptive field (RF) (Hartline, 1938; Ringach, 2004; Zhang and Arrenberg, 2019). When we simulated the effect of Snell’s law on RF mapping under typical experimental conditions (Figure 2d), we predicted substantial errors in both the position and size of naively measured receptive fields (Figure 2e, Materials and methods). Depending on the properties of the true RF, its position and size could be either over- or underestimated (Figure 2e–f), with the most drastic errors occurring for small RFs appearing near the edge of the Snell window.
Future experiments could avoid distortions altogether by adjusting experimental hardware. For instance, fish could be immobilized in the center of water-filled spheres (Zhang and Arrenberg, 2019; Dehmelt et al., 2019), or air interfaces could be removed altogether, such as by placing a projection screen inside the water-filled arena. But in practice the former would restrict naturalistic behavior, and the latter would reduce light diffusion by shrinking the refractive index mismatch between the diffuser and the transparent medium (water vs. air) that typical light diffusers use to transmit stimuli over a large range of angles. An engineering solution might build diffusive elements into the body of the fish tank (Stowers et al., 2017; Franke et al., 2019). Alternatively, we propose a simple computational solution to account for expected distortions when designing stimuli or analyzing data. Our tool (https://www.github.com/spoonsso/snell_tool/) converts between normal and distorted image representations for the most common zebrafish experiment configuration (Figure 1a), and other geometries could be analyzed similarly. This tool will therefore improve the interpretability and reproducibility of innovative experiments that capitalize on the unique experimental capabilities available in zebrafish neuroscience.
Materials and methods
See Appendix 1 and Appendix 2 for the geometric consequences of Snell’s law at flat and curved interfaces, respectively.
Implications of the Fresnel equations
Only a portion of the incident light is transmitted into the water to reach the eye. We calculated the fraction of transmitted light according to the Fresnel equations. Assuming the light is unpolarized,

$$T = 1 - \frac{R_s + R_p}{2},$$

where $T$ is the fraction of light transmitted across an air-water interface at incident angle ${\psi}_{a}={\psi}_{a}\left(\theta\right)$ (see Appendices 1 and 2), ${\psi}_{w}=\theta$ is the angle of the refracted light ray in water, and

$$R_s = \left(\frac{n_a \cos\psi_a - n_w \cos\psi_w}{n_a \cos\psi_a + n_w \cos\psi_w}\right)^2, \qquad R_p = \left(\frac{n_a \cos\psi_w - n_w \cos\psi_a}{n_a \cos\psi_w + n_w \cos\psi_a}\right)^2$$

are the reflectances for s-polarized (i.e. perpendicular) and p-polarized (i.e. parallel) light, respectively. When including the plastic dish in our simulations, we modified these equations to separately calculate the transmission fractions across the air-plastic and the plastic-water interfaces. We assumed that the full transmission fraction is the product of these two factors, thereby ignoring the possibility of multiple reflections within the plastic.
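These equations translate directly into code. Below is a minimal sketch for a bare air-water interface; the plastic dish would add a second, analogous transmission factor, as described above.

```python
import numpy as np

def fresnel_transmittance(psi_a, n_a=1.0, n_w=1.333):
    """Fraction of unpolarized light transmitted across an air-water interface
    at incidence angle psi_a (radians): T = 1 - (R_s + R_p)/2."""
    psi_w = np.arcsin(n_a * np.sin(psi_a) / n_w)  # refraction angle (Snell's law)
    R_s = ((n_a * np.cos(psi_a) - n_w * np.cos(psi_w)) /
           (n_a * np.cos(psi_a) + n_w * np.cos(psi_w))) ** 2
    R_p = ((n_a * np.cos(psi_w) - n_w * np.cos(psi_a)) /
           (n_a * np.cos(psi_w) + n_w * np.cos(psi_a))) ** 2
    return 1.0 - 0.5 * (R_s + R_p)

# At normal incidence, roughly 2% of the light is reflected back into the air
print(fresnel_transmittance(0.0))
```

Transmittance falls off steeply near grazing incidence, which is why the edge of the Snell window appears dim.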
Illustrating distorted sinusoidal gratings
For all image simulations in Figure 1c, we neglected the plastic and fixed the total distance between the fish and the virtual screen, ${d}_{a}+{d}_{w}$, to be 1 cm, a typical distance in real-world experiments. The virtual screen was considered to be a 4 × 4 cm square with 250 pixels/cm resolution. Here we assumed that the virtual screen emits light uniformly at all angles, but this assumption is violated by certain displays, and our computational tool allows the user to specify alternate angular emission profiles. To transform images on the virtual screen, we shifted each light ray (i.e. image pixel) according to Snell’s law, scaled its intensity according to the Fresnel equations, and added the intensity value to a bin at the resulting apparent position. This simple model treats the fish eye as a pinhole detector, whereas real photoreceptors blur visual signals on a spatial scale determined by their receptive field. Consequently, our simulation compresses a large amount of light onto the overly thin border of the Snell window, and we saturated the grayscale color axes in Figure 1c to avoid this visually distracting artifact.
To make the image as realistic as possible, we mimicked real projector conditions using gamma-encoded gratings with spatial frequency 1 cycle/cm, such that
with $x\left(t\right)$ ranging from 1.0 to 500.0 lux, a standard range of physical illuminance for a lab projector. The exponent on the left represents a typical display gamma encoding with gamma = 2.2. To reduce moiré artifacts arising from ray tracing, we used a combination of ray supersampling (averaging the rays emanating from 16 subpixels for each virtual screen pixel) and stochastic sampling (the position of each ray was randomly jittered between −1 and 1 subpixels from its native position) (Dippé and Wold, 1985). In Figure 1c, we display the result of these operations followed by a gamma compression to mimic the perceptual encoding of the presented stimulus.
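The anti-aliasing scheme can be illustrated with a short sketch (function name hypothetical): each screen pixel is split into 16 subpixels, and each subpixel ray is jittered by up to one subpixel width before tracing.

```python
import numpy as np

rng = np.random.default_rng(0)

def supersampled_ray_positions(pixel_centers, n_sub=16, rng=rng):
    """For each pixel center, return n_sub ray origins (in pixel units):
    a regular subpixel grid (supersampling) with each ray jittered by up to
    +/- 1 subpixel (stochastic sampling, after Dippe and Wold, 1985)."""
    pixel_centers = np.asarray(pixel_centers, dtype=float)
    # Regular subpixel offsets spanning one pixel width
    offsets = (np.arange(n_sub) + 0.5) / n_sub - 0.5
    rays = pixel_centers[:, None] + offsets[None, :]
    # Jitter each ray by up to one subpixel in either direction
    jitter = rng.uniform(-1.0 / n_sub, 1.0 / n_sub, size=rays.shape)
    return rays + jitter

rays = supersampled_ray_positions(np.arange(5))
print(rays.shape)  # (5, 16)
```

Each of the 16 traced rays per pixel is then averaged to form that pixel's intensity, breaking up the regular sampling lattice that produces moiré patterns.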
Corrections to looming visual stimuli
We approximated the geometric parameters from Dunn et al. (2016a) (flat air-water interface, $d_a$ = 0.5 mm, $d_w$ = 3 mm, $d_p$ = 1 mm, stimulus offset from the fish by 10 mm along the screen) and Temizer et al. (2015) (curved air-water interface, $d_a$ = 8 mm, $d_w$ = 2 mm, $d_p$ = 1 mm, $r$ = 17.5 mm, stimulus centered) to create Snell-transformed images of circular stimuli with sizes growing over time (Figure 2a–c). We used a refractive index of $n_p$ = 1.55 for the polystyrene plastic. While Dunn et al. collected data from freely swimming fish, the height of the water was kept at approximately 5 mm, and 3 mm reflects a typical swim depth. Since freely swimming zebrafish can adjust their depth in water, treating $d_w$ as constant is an approximation.
We quantified the size of each transformed stimulus with its solid angle, the surface area of the stimulus shape projected onto the unit sphere. To calculate the solid angle for Temizer et al., we used the formula for a spherical cap, $A=2\pi\left(1-\mathrm{cos}\,\theta\right)$, where $A$ is the solid angle and $2\theta$ is the apex angle. To calculate the solid angle for Dunn et al., in which stimuli were not spherical caps, we first represented stimulus border pixels in a spherical coordinate system locating the fish at the origin. The radial coordinate does not affect the solid angle, so we described each border pixel by two angles: the latitude, $\alpha$, and longitude, $\beta$. To calculate the area, we used the equal-area sinusoidal projection given by

$$x = \beta\,\mathrm{cos}\,\alpha, \qquad y = \alpha,$$

which projects an arbitrary shape on the surface of a sphere onto the Cartesian plane. While distances and shapes are not preserved in this projection, area as a fraction of the sphere’s surface area is maintained. Thus, we could calculate the solid angle of the stimulus in this projection by finding the area of the projected 2D polygon. To calculate the absolute and relative discrepancy 95% confidence intervals in Figure 2c, we used error propagation formulae for the difference and ratio of two distributions, respectively.
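This solid-angle computation reduces to a projection followed by the shoelace formula. Below is a minimal sketch (assuming the border is a closed, non-self-intersecting polygon away from the poles), checked against the spherical-cap formula.

```python
import numpy as np

def solid_angle(lat, lon):
    """Solid angle (steradians) of a spherical polygon given its border in
    latitude/longitude (radians), via the equal-area sinusoidal projection
    x = lon*cos(lat), y = lat, and the shoelace formula on the projected polygon."""
    x = np.asarray(lon) * np.cos(lat)
    y = np.asarray(lat)
    # Shoelace formula; np.roll closes the polygon
    return 0.5 * abs(np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y))

# Sanity check against the spherical-cap formula, A = 2*pi*(1 - cos(theta0))
theta0 = np.radians(20.0)                       # angular radius of the cap
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
# Border points at great-circle distance theta0 from (lat, lon) = (0, 0)
lat = np.arcsin(np.sin(theta0) * np.sin(t))
lon = np.arctan2(np.sin(theta0) * np.cos(t), np.cos(theta0))
print(solid_angle(lat, lon), 2 * np.pi * (1 - np.cos(theta0)))
```

The two printed values agree to within the polygon discretization error, confirming that the projected 2D area equals the solid angle.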
Receptive field mapping
We simulated receptive field (RF) mapping experiments by tracing light paths from single pixels on a virtual screen to the fish (Figure 2d–f). We modeled a neuron’s RF as a Gaussian function on the sphere, defined the “true RF” to be the pixel-wise response pattern that would occur in the absence of the air-water interface, and defined the “apparent RF” as the pixel-wise response pattern that would be induced with light that bends according to Snell’s law at an air-water interface. More precisely, we modeled the neural response to pixel activation at position $x$ as

$$F\left(x\right) = T\left({\psi}_{a}\left(x\right)\right)\,P\left(\rho\left(x,{\mu}_{RF}\right),{\sigma}_{RF}^{2}\right),$$

where $T\left({\psi}_{a}\left(x\right)\right)$ is the fraction of light transmitted (Fresnel equations), ${\mu}_{RF}$ and ${\sigma}_{RF}$ are the mean and standard deviation of the Gaussian RF, $\rho\left(x,{\mu}_{RF}\right)$ is the distance along a great circle from the center of the RF to the pixel’s projected retinal location, and $P\left(\rho,{\sigma}_{RF}^{2}\right)={e}^{-{\rho}^{2}/\left(2{\sigma}_{RF}^{2}\right)}$ is the Gaussian RF shape. We calculated the great circle distance between points on the sphere as

$$\rho = \mathrm{arccos}\left(\mathrm{sin}\,{\alpha}_{RF}\,\mathrm{sin}\,{\alpha}_{x} + \mathrm{cos}\,{\alpha}_{RF}\,\mathrm{cos}\,{\alpha}_{x}\,\mathrm{cos}\left({\beta}_{RF}-{\beta}_{x}\right)\right),$$

where $\left({\alpha}_{RF},{\beta}_{RF}\right)$ are the latitude and longitude coordinates of the RF center, and $\left({\alpha}_{x},{\beta}_{x}\right)$ are the latitude and longitude coordinates of the projected pixel location. We quantified the position of the RF as the maximum of $F\left(x\right)$, converted to an angular coordinate along the screen. We quantified RF area as the solid angle of the shape formed by thresholding $F\left(x\right)$ at half its maximal value.
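A minimal sketch of this response model follows, folding the Fresnel factor into a single transmittance argument and omitting the interface ray tracing for brevity.

```python
import numpy as np

def great_circle(lat1, lon1, lat2, lon2):
    """Great-circle distance (radians) between two points on the unit sphere."""
    return np.arccos(np.clip(
        np.sin(lat1) * np.sin(lat2) +
        np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2), -1.0, 1.0))

def rf_response(lat_px, lon_px, lat_rf, lon_rf, sigma_rf, transmittance=1.0):
    """Response to activating a pixel whose projected retinal location is
    (lat_px, lon_px): F = T * exp(-rho^2 / (2 * sigma_rf^2))."""
    rho = great_circle(lat_px, lon_px, lat_rf, lon_rf)
    return transmittance * np.exp(-rho ** 2 / (2.0 * sigma_rf ** 2))

# The response is maximal (~1.0) at the RF center and falls off with distance
print(rf_response(0.2, 0.3, 0.2, 0.3, sigma_rf=0.1))
```

Evaluating `rf_response` over every screen pixel, with retinal locations taken either straight through or bent by Snell's law, yields the "true" and "apparent" RF maps, respectively.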
Computational tool for simulating and correcting optical distortions
With this paper, we provide a computational tool for visualizing and correcting distortions (https://github.com/spoonsso/snell_tool/). The tool is written in Python and uses standard image processing libraries. The tool can be launched over the web, without any need to install new software, using the MyBinder link in the README file hosted on the GitHub repository. The source code can also be downloaded and run on the user’s local machine.
The uses and parameters of the tool are described in detail in an example notebook in the repository (snell_example.ipynb). In brief, the tool is implemented only for flat interfaces with the assumptions described in Appendix 1, and it can model distortions through three media (i.e. with a plastic interface between air and water). It can also model displays that emit light with nonuniform angular profiles. Key customizable parameters include the screen size, screen resolution, screen distance, media thicknesses, media refractive indices, and gamma encoding. As described in Illustrating distorted sinusoidal gratings, the tool uses a combination of ray supersampling and stochastic sampling to reduce moiré artifacts arising from ray tracing.
The Python notebook illustrates two primary use cases of the tool, though the tool’s library is flexible enough to be adapted for other tasks. First, it allows the user to input an image to see its distorted form under the assumptions of the model. Thus, it recreates Figure 1c, but for any arbitrary grayscale stimulus, and for a range of user-specified experimental configurations. Second, it allows the user to input an undistorted target image, and the tool inverts the distortion process to suggest an image that could be displayed during an experiment to approximately produce the target from the point of view of the fish. In the tool’s example notebook, we demonstrate this inversion process using a checkered ball stimulus. Importantly, note that some stimuli will be physically impossible to correct (e.g. undistorted image content cannot be delivered outside the Snell window).
Appendix 1
Implications of Snell’s law at a flat interface
For this and all subsequent analyses, we treat the fish as a pinhole detector. Here we derive ${\theta}^{\prime}\left(\theta\right)$ with the aid of Appendix 1—figure 1. Note that this derivation includes optical effects from the plastic dish, but these effects are relatively minor. To begin, we summarize the basic trigonometry of the problem. The true angular position of the stimulus is given by

$$\mathrm{tan}\,{\theta}^{\prime} = \frac{{d}^{\prime}_{w}+{d}^{\prime}_{p}+{d}^{\prime}_{a}}{{d}_{w}+{d}_{p}+{d}_{a}},$$

where ${d}_{w}$ is the normal distance between the fish and the water-plastic interface, ${d}_{p}$ is the normal distance between the water-plastic and plastic-air interfaces, ${d}_{a}$ is the normal distance between the air interface and the screen (interface and screen assumed to be parallel), ${d}^{\prime}_{w}$ is the parallel distance traveled by the light ray in the water, ${d}^{\prime}_{p}$ is the parallel distance traveled by the light ray in the plastic, and ${d}^{\prime}_{a}$ is the parallel distance traveled by the light ray in air. Each parallel distance is related to the corresponding normal distance by simple trigonometry. The apparent angular location of the stimulus satisfies

$$\mathrm{tan}\,\theta = \frac{{d}^{\prime}_{w}}{{d}_{w}},$$

and the refraction angles in the plastic and air satisfy $\mathrm{tan}\,{\psi}_{p}={d}^{\prime}_{p}/{d}_{p}$ and $\mathrm{tan}\,{\psi}_{a}={d}^{\prime}_{a}/{d}_{a}$, thereby leading to

$$\mathrm{tan}\,{\theta}^{\prime} = \frac{{d}_{w}\,\mathrm{tan}\,\theta + {d}_{p}\,\mathrm{tan}\,{\psi}_{p} + {d}_{a}\,\mathrm{tan}\,{\psi}_{a}}{{d}_{w}+{d}_{p}+{d}_{a}}.$$

We can next use Snell’s law to reduce the number of angular variables. In particular,

$${n}_{w}\,\mathrm{sin}\,\theta = {n}_{p}\,\mathrm{sin}\,{\psi}_{p}$$

and

$${n}_{p}\,\mathrm{sin}\,{\psi}_{p} = {n}_{a}\,\mathrm{sin}\,{\psi}_{a}$$

together imply that

$$\mathrm{tan}\,{\theta}^{\prime} = \frac{{d}_{w}\,\mathrm{tan}\,\theta + {d}_{p}\,\mathrm{tan}\left(\mathrm{arcsin}\,\frac{{n}_{w}\,\mathrm{sin}\,\theta}{{n}_{p}}\right) + {d}_{a}\,\mathrm{tan}\left(\mathrm{arcsin}\,\frac{{n}_{w}\,\mathrm{sin}\,\theta}{{n}_{a}}\right)}{{d}_{w}+{d}_{p}+{d}_{a}}.$$

The role of plastic in this equation is typically minimal. To see this, first note that ${n}_{w}\approx 1.333<{n}_{p}\approx 1.55$, which implies that $\frac{{n}_{w}\,\mathrm{sin}\,\theta}{{n}_{p}}<1$. This implies that the Snell window is determined by $1=\frac{{n}_{w}\,\mathrm{sin}\,\theta}{{n}_{a}}$, and the properties of the high-index plastic dishes have no effect on the size of the Snell window. The plastic can cause distortions within the Snell window, but these effects were small for all experimental arenas analyzed in this paper, as we empirically found that none of our results qualitatively depended upon the plastic. We therefore chose to highlight the critical impact of the air-water interface by assuming that ${d}_{p}=0$ in the main text’s conceptual discussion. We nevertheless included nonzero values of ${d}_{p}$ in our computational tool so that users can account for the quantitative effects of the plastic dish. We also included the effects of plastic when quantitatively correcting previously published results. Because analytically inverting ${\theta}^{\prime}\left(\theta\right)$ is nontrivial, we noted from the graph of ${\theta}^{\prime}\left(\theta\right)$ that the inverse function exists and calculated $\theta\left({\theta}^{\prime}\right)$ with a numerical lookup table (e.g. Figure 1b).
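The lookup-table inversion is straightforward to implement. The sketch below covers the three-media flat geometry described in this appendix, using the refractive indices quoted above and illustrative distances; monotonicity of ${\theta}^{\prime}\left(\theta\right)$ justifies the interpolation.

```python
import numpy as np

N_A, N_W, N_P = 1.0, 1.333, 1.55  # refractive indices: air, water, plastic

def theta_prime(theta, d_w=3.0, d_p=1.0, d_a=0.5):
    """True screen angle theta' (radians) as a function of apparent angle theta:
    tan(theta') = (d_w tan(theta) + d_p tan(psi_p) + d_a tan(psi_a)) / (d_w + d_p + d_a)."""
    psi_p = np.arcsin(N_W * np.sin(theta) / N_P)  # refraction angle in plastic
    psi_a = np.arcsin(N_W * np.sin(theta) / N_A)  # incidence angle in air
    return np.arctan((d_w * np.tan(theta) + d_p * np.tan(psi_p) + d_a * np.tan(psi_a))
                     / (d_w + d_p + d_a))

def theta_of_theta_prime(tp_query, **kw):
    """Invert theta'(theta) with a numerical lookup table."""
    theta = np.linspace(0.0, np.arcsin(N_A / N_W), 20000, endpoint=False)
    return np.interp(tp_query, theta_prime(theta, **kw), theta)

# Round trip a ray at 0.5 rad through the forward and inverse maps
t = 0.5
print(theta_prime(t), theta_of_theta_prime(theta_prime(t)))
```

The round trip recovers the input angle to within the lookup-table resolution.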
Appendix 2
Implications of Snell’s law at a curved interface
When the fish is mounted off-center (Appendix 2—figure 1a) in a circular dish (brown dot), rays pass through a curved interface and are refracted at tangent lines (brown line). We begin by using Snell’s law and basic trigonometry to relate each refraction angle to $\theta$. Let ${d}_{a}$ denote the distance in air between the edge of the plastic dish and the screen, ${d}_{p}$ denote the thickness of the plastic dish, ${d}_{w}$ denote the distance in water between the fish and the edge of the tank nearest the screen, and $r$ denote the radius of the dish (excluding the plastic). We assume that ${d}_{w}\le r$ and the screen is perpendicular to the line between the fish and the center of the dish. Cases where the fish is behind the dish’s center or the screen is angled can be analyzed similarly. Starting at the fish and moving outwards, we first apply the law of sines to the gray triangle to find

$$\frac{\mathrm{sin}\,{\psi}_{w}}{r-{d}_{w}} = \frac{\mathrm{sin}\left(\pi-\theta\right)}{r} = \frac{\mathrm{sin}\,\theta}{r},$$

where we’ve used the identity $\mathrm{sin}\left(\pi-x\right)=\mathrm{sin}\left(x\right)$. It will be useful for later to note that this triangle also implies that $\gamma=\pi-\left({\psi}_{w}+\pi-\theta\right)=\theta-{\psi}_{w}$. Snell’s law at the plastic-water interface implies

$${n}_{w}\,\mathrm{sin}\,{\psi}_{w} = {n}_{p}\,\mathrm{sin}\,{\psi}_{p}.$$

We next relate the two plastic refraction angles to each other by applying the law of sines to the orange triangle and find

$$\frac{r}{\mathrm{sin}\,{\psi}_{p}^{\prime}} = \frac{r+{d}_{p}}{\mathrm{sin}\left(\pi-{\psi}_{p}\right)}, \quad \text{i.e.,} \quad \mathrm{sin}\,{\psi}_{p}^{\prime} = \frac{r\,\mathrm{sin}\,{\psi}_{p}}{r+{d}_{p}}.$$

Finally, we determine the dependence of ${\psi}_{a}$ on $\theta$ from Snell’s law applied to the air-plastic interface,

$${n}_{a}\,\mathrm{sin}\,{\psi}_{a} = {n}_{p}\,\mathrm{sin}\,{\psi}_{p}^{\prime} = \frac{{n}_{w}\left(r-{d}_{w}\right)\mathrm{sin}\,\theta}{r+{d}_{p}}.$$
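Chaining these relations gives a closed form for the incidence angle in air, ${n}_{a}\,\mathrm{sin}\,{\psi}_{a}={n}_{w}\left(r-{d}_{w}\right)\mathrm{sin}\,\theta/\left(r+{d}_{p}\right)$; the sketch below implements this compact form, which is our reading of the derivation above, using the Temizer et al. geometry as illustrative defaults.

```python
import numpy as np

N_A, N_W, N_P = 1.0, 1.333, 1.55  # refractive indices: air, water, plastic

def psi_a_curved(theta, r=17.5, d_w=2.0, d_p=1.0):
    """Incidence angle in air (radians) for a ray leaving the fish at angle theta,
    combining sin(psi_w) = (r - d_w) sin(theta) / r, Snell's law at the
    plastic-water interface, the law of sines across the plastic shell, and
    Snell's law at the air-plastic interface:
    n_a sin(psi_a) = n_w (r - d_w) sin(theta) / (r + d_p)."""
    sin_psi_a = N_W * (r - d_w) * np.sin(theta) / (N_A * (r + d_p))
    return np.arcsin(sin_psi_a)

# A fish mounted at the center of the dish (d_w = r) sees no bending at all
print(psi_a_curved(0.7, d_w=17.5))  # 0.0
```

The centered-fish limit provides a useful sanity check: every ray then crosses the interface along its normal, so refraction vanishes.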
With these formulae in hand, we now proceed to the main goal of deriving an expression for ${\theta}^{\prime}\left(\theta \right)$. Since we’ve already extracted everything from Snell’s law, all that remains is basic trigonometry, which we illustrate in Appendix 2—figure 1b. First note that applying the definition of the tangent function to the blue triangle implies that
It thus suffices to determine expressions for $s\left(\theta \right)$ and $s\text{'}\left(\theta \right)$. Consider first $s\text{'}\left(\theta \right)$. The large red triangle implies
Rewriting $\alpha$ in terms of the other two angles in the $\alpha\beta{\psi}_{p}^{\prime}$ triangle gives $\alpha=\pi-\beta-{\psi}_{p}^{\prime}$. Rewriting $\beta$ in terms of the other two angles in the $\beta\gamma{\psi}_{p}$ triangle gives $\beta=\pi-\left(\gamma+{\psi}_{p}\right)=\pi-\theta+{\psi}_{w}-{\psi}_{p}$. Putting these pieces together, we thus find
Next consider $s\left(\theta \right)$. Applying the law of sines to the green triangle, we find
Rewriting $\omega$ in terms of the other two angles in the green triangle, $\omega=\pi-\left({\psi}_{a}+\pi-\phi\right)=\phi-{\psi}_{a}$, and rewriting $\phi$ in terms of the other angles in the red triangle, $\phi=\pi-\left(\alpha+\frac{\pi}{2}\right)=\frac{\pi}{2}-\alpha$, we find
Finally, we find the dependence of $a$ on $\theta $ from the red triangle using the definition of the cosine function
Since we’ve written $\alpha $, $a$, $\omega $, and the refraction angles as functions of $\theta $, we’ve fully specified $s\left(\theta \right)$, $s\text{'}\left(\theta \right)$, and thus ${\theta}^{\text{'}}\left(\theta \right)$. As with the flat interface, we calculated $\theta \left(\theta \text{'}\right)$ using a numerical lookup table.
Data availability
No data were collected for this theoretical manuscript.
References
Antialiasing through stochastic sampling. ACM SIGGRAPH Computer Graphics 19:69–78. https://doi.org/10.1145/325165.325182
Collision detection as a model for sensory-motor integration. Annual Review of Neuroscience 34:1–19. https://doi.org/10.1146/annurev-neuro-061010-113632
Computation of object approach by a wide-field, motion-sensitive neuron. The Journal of Neuroscience 19:1122–1141. https://doi.org/10.1523/JNEUROSCI.19-03-01122.1999
The response of single optic nerve fibers. The American Journal of Physiology 121:400–415.
Mapping receptive fields in primary visual cortex. The Journal of Physiology 558:717–728. https://doi.org/10.1113/jphysiol.2004.065771
Virtual reality for freely moving animals. Nature Methods 14:995–1002. https://doi.org/10.1038/nmeth.4399
A visual pathway for looming-evoked escape in larval zebrafish. Current Biology 25:1823–1834. https://doi.org/10.1016/j.cub.2015.06.002
Light-sheet functional imaging in fictively behaving zebrafish. Nature Methods 11:883–884. https://doi.org/10.1038/nmeth.3040
Article and author information
Author details
Funding
Duke Forge
 Timothy W Dunn
Duke AI Health
 Timothy W Dunn
Howard Hughes Medical Institute
 James E Fitzgerald
National Institutes of Health (U01 NS090449)
 Timothy W Dunn
 James E Fitzgerald
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Acknowledgements
We thank Damon Clark, Ruben Portugues, and Kristen Severi for helpful comments on the manuscript. We thank Eva Naumann for discussions regarding light diffusion in the laboratory and for sharing fish icons for the figures. We also thank Florian Engert and Haim Sompolinsky for early support and partial funding of the project (NIH grant U01 NS090449). TWD was supported by Duke Forge and Duke AI Health. JEF was supported by the Howard Hughes Medical Institute.
Copyright
© 2020, Dunn and Fitzgerald
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.