Fourier Light Field Microscopy (fLFM) provides a simple and cost-effective method for whole-brain imaging of larval zebrafish during behavior

A. Schematic of the fLFM system. The sample is illuminated with a 470 nm LED through a 20×/1.0-NA imaging objective (Obj). The fluorescence is sent to the imaging path via a dichroic mirror (DM). An Olympus-style f = 180 mm tube lens (TL) and an f = 180 mm Fourier lens (FL) are used to conjugate the back focal plane of the objective onto a microlens array (MLA). An sCMOS sensor is positioned at the focal plane of the MLA to capture the raw fLFM images.

B. An example raw sensor image. Each lenslet in the 8×8 array forms an image of the sample from a slightly different perspective, allowing for reconstruction of the 3D volume.

C. Experimental measurement of the point spread function (PSF). Left: the x-y (top) and x-z (bottom) profiles of a reconstructed image of a 1 μm fluorescent bead. Right: corresponding cross sections of the PSF (points). A Gaussian profile (lines) was fit to these data to measure the full width at half maximum (FWHM), which was 3.3 μm laterally and 5.4 μm axially.
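As a rough illustration of the FWHM measurement (not the authors' fitting code), the sketch below fits a 1D Gaussian to a synthetic bead cross section with scipy and converts the fitted width to a FWHM; the pixel size and profile values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, sigma, offset):
    """1D Gaussian with a constant background offset."""
    return amplitude * np.exp(-(x - center) ** 2 / (2 * sigma ** 2)) + offset

# Synthetic lateral cross section of a bead image (0.5 um sampling, illustrative only)
x_um = np.arange(-10, 10, 0.5)
profile = gaussian(x_um, 1.0, 0.0, 1.4, 0.05) + 0.02 * np.random.randn(x_um.size)

# Fit the profile and convert the Gaussian width to a FWHM
popt, _ = curve_fit(gaussian, x_um, profile, p0=[1.0, 0.0, 2.0, 0.0])
fwhm_um = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(popt[2])  # FWHM = 2*sqrt(2 ln 2) * sigma
print(f"Lateral FWHM: {fwhm_um:.2f} um")
```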

D. Schematic of our fLFM data processing pipeline (see Methods for detailed description).

E. A maximum intensity projection (MIP) of a conventional LFM (cLFM) recording of a larval zebrafish, which exhibits strong grid-like artifacts and low resolution near the native image plane (black arrows). A reconstructed volume of 750 × 750 × 200 μm³ is shown.

F. An MIP of an fLFM recording. fLFM exhibits higher axial resolution and does not contain artifacts at the native image plane. A reconstructed volume of 750 × 375 × 200 μm³ is shown.

G. A heatmap of extracted neuronal activity from an example one-hour recording of 15,286 neurons. Neurons are sorted using the rastermap algorithm (Stringer et al., 2019), such that nearby neurons exhibit similar temporal activity patterns. Multiple distinct activity patterns are seen, including strong activity during two sets of drifting grating-induced optomotor response trials (black arrows).
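A minimal sketch of the rastermap sorting step is shown below, assuming the rastermap package with default parameters (constructor arguments differ between package versions) and a synthetic activity matrix in place of the real recording.

```python
import numpy as np
from rastermap import Rastermap  # Stringer et al. rastermap package

# activity: neurons x timepoints matrix of dF/F traces.
# A small synthetic placeholder is used here; the real data are ~15,286 neurons.
activity = np.random.randn(2000, 1000).astype("float32")

# Fit a one-dimensional embedding of the neurons; defaults are used here
# because constructor arguments vary between package versions.
model = Rastermap().fit(activity)

# model.isort gives the neuron ordering used to plot the sorted heatmap
sorted_activity = activity[model.isort]
```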

H. Example neurons tuned to tail movements and visual stimuli. In black are six example neuron traces from the designated region in panel G, which exhibited correlations with the GCaMP-kernel-convolved tail vigor (top trace, see Methods for details). In red are six example neuron traces from the designated region in panel G, which exhibited correlations with the visual stimuli presented in the left visual field (denoted by the red lines at the bottom of the plot).
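The sketch below illustrates one way the tail vigor could be convolved with an exponential GCaMP kernel and correlated with a neuron trace; the sampling rate, decay time constant, and data arrays are illustrative assumptions rather than the values given in the Methods.

```python
import numpy as np

def gcamp_convolve(trace, fs=2.0, tau_decay=3.5):
    """Convolve a behavioral trace with an exponential GCaMP decay kernel.

    fs (Hz) and tau_decay (s) are assumed values; the actual kernel
    parameters are described in the Methods.
    """
    t = np.arange(0, 5 * tau_decay, 1.0 / fs)
    kernel = np.exp(-t / tau_decay)
    kernel /= kernel.sum()
    return np.convolve(trace, kernel, mode="full")[: trace.size]

# Synthetic tail vigor and neuron trace (placeholders for real data)
fs = 2.0  # assumed imaging volume rate in Hz
tail_vigor = np.abs(np.random.randn(7200))
neuron = np.random.randn(7200)

vigor_conv = gcamp_convolve(tail_vigor, fs=fs)
r = np.corrcoef(vigor_conv, neuron)[0, 1]
print(f"Correlation with convolved tail vigor: r = {r:.2f}")
```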

Zebrafish exhibit highly variable motor responses to visual stimuli

A. Schematic of the freely-behaving experimental setup. A single larva is placed in a 90 mm petri dish with a shallow layer (approximately 3 mm) of water. The dish is placed on a screen which displays visual stimuli consisting of dots of various sizes drifting with a constant direction and speed. The behavior is monitored with a camera from above.

B. An example target-avoidance bout. A composite image of four frames from the start (white arrow) to near the finish (black arrow) of a bout in which the larva avoided a large visual stimulus. See also Video 1.

C. An example target-directed bout. A composite image of three frames from the start (white arrow) to the finish (black arrow) of a bout in which the larva made a directed movement toward a small visual stimulus. See also Video 1.

D. The behavioral response of larvae to visual stimuli of various sizes. For each stimulus size, all bouts of movement in which the larva was within 10 body lengths of a visual stimulus were considered. For each bout, the change in distance between the larva and the stimulus during the movement was measured. Thus, target-directed bouts exhibited negative values, corresponding to a decrease in the distance to the stimulus, whereas target-avoidance bouts exhibited positive values. Stimuli less than 7° in size evoked target-directed responses on average, whereas stimuli greater than 10° evoked target-avoidance responses. Shown is the mean ± standard deviation of n=9 larvae.
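A minimal sketch of the per-bout distance metric is given below; the array names and synthetic trajectories are hypothetical, standing in for the tracked fish and stimulus positions.

```python
import numpy as np

def bout_distance_change(fish_xy, stim_xy, bout_start, bout_end):
    """Change in fish-to-stimulus distance over one bout (positive = avoidance).

    fish_xy and stim_xy are (timepoints, 2) arrays of positions in mm;
    bout_start/bout_end are frame indices. Names are illustrative.
    """
    d_start = np.linalg.norm(fish_xy[bout_start] - stim_xy[bout_start])
    d_end = np.linalg.norm(fish_xy[bout_end] - stim_xy[bout_end])
    return d_end - d_start

# Example with synthetic trajectories
fish_xy = np.cumsum(np.random.randn(1000, 2), axis=0)
stim_xy = np.tile(np.array([20.0, 0.0]), (1000, 1))
print(bout_distance_change(fish_xy, stim_xy, bout_start=100, bout_end=140))
```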

E. Schematic of the head-fixed experimental setup. A larva is embedded in agarose to keep its head fixed during whole-brain imaging. The agarose around the tail and eyes is removed, such that the tail can be tracked (indicated by black dots along the tail) and visual stimuli can be presented. Visual stimuli of various sizes are presented moving from the center of the visual field to either the left or right.

F. The behavioral response of larvae to visual stimuli of various sizes during head fixation. Visual stimuli are presented in either the left (pink) or right (green) visual field. The directedness of tail movements is monitored by computing the mean tail curvature during a bout of movement, with positive values indicating leftward motions. Similar to the freely-behaving experiments, visual stimuli of 7° or less evoked target-directed responses, whereas stimuli larger than 10° evoked target-avoidance. Shown is the mean ± standard deviation of n=10 larvae. Asterisks indicate stimuli with a significant difference between presentations in the left and right visual fields (p<0.05, paired t-test).

G. Example behavioral responses to various stimuli. For each of four stimulus sizes in the left and right visual fields, a histogram of the mean tail curvature during bouts is shown for an example larva. While stimulus-evoked behavioral responses are variable in all cases, they appear least variable for the predominantly target-avoidance bouts evoked by the large 44° stimuli (rightmost column).

Trial-to-trial variability in visually-evoked neurons is largely orthogonal to visual decoding dimensions

A. Individual visually tuned neurons exhibit trial-to-trial variability in their responses to stimuli. For each of the six visual stimuli (three object sizes on both the left and right visual fields), the two neurons exhibiting the highest correlation with the stimulus kernel (see Methods for details) are shown for an example larva. Each column represents a neuron, and each line represents its response to a given stimulus during a single trial.

B. Visual stimuli are reliably decodable from whole-brain dynamics on the single trial level. The visual stimuli were decoded using a logistic regression classifier with lasso regularization (see Methods for details). For each larva, a confusion matrix is computed for the test trials during 6-fold cross-validation. Shown is the average confusion matrix across n=8 larvae which were shown six visual stimuli.
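The sketch below shows one way such a decoder could be set up with scikit-learn, using L1-regularized (lasso) logistic regression and a 6-fold cross-validated confusion matrix; the feature matrix, labels, and regularization strength are placeholders rather than the exact Methods settings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

# X: trials x neurons matrix of trial-averaged responses; y: stimulus labels (0-5).
# Synthetic placeholders; the real features are described in the Methods.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 500))
y = np.repeat(np.arange(6), 20)

# L1-regularized logistic regression, evaluated with 6-fold cross-validation
clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)

# Row-normalized confusion matrix over held-out trials
cm = confusion_matrix(y, y_pred, normalize="true")
print(np.round(cm, 2))
```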

C. Schematic of potential geometric relationships between sensory decoding and neural variability dimensions. In each plot, each dot represents the neural response during a single presentation of stimulus A or B. The decision boundary for an optimal classifier is denoted with a dashed line, and the optimal stimulus decoding direction is indicated by a vector. The direction of maximal trial-to-trial variance (the largest noise mode) is also indicated, and can be calculated by finding the first eigenvector of the noise covariance matrix (see Methods for details). These vectors can: (i) be orthogonal, such that neuronal variability does not limit stimulus decoding; (ii) show little relationship, for example in the case of uniform variability; or (iii) be aligned, such that variability likely limits the information encoding capacity along the decoding direction.

D. The trial-to-trial noise correlation matrix appears multi-dimensional. Shown is the average noise correlation matrix across all stimulus types presented. The neurons are sorted using rastermap, which produces a one-dimensional embedding of the neurons such that neurons with similar correlation profiles are placed near one another. A number of neuronal populations exhibiting correlated trial-to-trial activity are apparent from the clusters of high correlations near the diagonal.

E. Trial-to-trial variability in the visually-evoked neurons is largely orthogonal to visual decoding dimensions. The fraction of trial-to-trial variance explained by each noise mode is plotted against the angle between that noise mode and the optimal stimulus decoding direction. Shown are these values for n=8 larvae, with each noise mode colored by its rank order α based on the fraction of variance explained. The largest noise modes were approximately orthogonal (∼90°) to the stimulus decoding direction, whereas only a few of the smallest noise modes exhibited angles less than 90°.
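A minimal sketch of the noise-mode computation is given below: the trial-to-trial residuals for one stimulus define a noise covariance matrix, its eigenvectors give the noise modes ranked by explained variance, and the angle of each mode to the decoder weight vector is computed from their unsigned cosine. The response matrix and decoder weights are synthetic placeholders.

```python
import numpy as np

# responses: trials x neurons, from repeated presentations of one stimulus.
# decoder_w: neuron weight vector of the trained stimulus classifier.
# Both are synthetic placeholders here.
rng = np.random.default_rng(1)
responses = rng.standard_normal((40, 300))
decoder_w = rng.standard_normal(300)

# Noise covariance: covariance of trial-to-trial fluctuations about the mean response
residuals = responses - responses.mean(axis=0)
noise_cov = np.cov(residuals, rowvar=False)

# Noise modes = eigenvectors sorted by explained trial-to-trial variance
eigvals, eigvecs = np.linalg.eigh(noise_cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
frac_var = eigvals / eigvals.sum()

# Angle between each noise mode and the stimulus decoding direction (0-90 degrees)
unit_w = decoder_w / np.linalg.norm(decoder_w)
cosines = np.abs(eigvecs.T @ unit_w)
angles_deg = np.degrees(np.arccos(np.clip(cosines, 0.0, 1.0)))
print(frac_var[:3], angles_deg[:3])
```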

F. Example projections of single-trial neural activity along the stimulus decoding and noise dimensions. Each dot represents the average neural activity within a single trial, projected along the stimulus decoding direction and the largest noise mode for an example larva. Each of the six visual stimuli (three object sizes presented on either the right (green) or left (pink) visual field) is robustly encoded along the decoding direction across trials; however, for all stimuli there is strong orthogonal variability along the largest noise mode, which represents 39.5% of the trial-to-trial variance.

G. Example neuron coefficients for the stimulus decoding direction. Shown are the 366 neurons with the largest weights, overlaid on a maximum intensity projection of the recorded volume. The visually-evoked neurons which encode stimulus information are concentrated within the optic tectum.

H. Example neuron coefficients for the largest noise mode. Shown are the 828 neurons with the largest weights, overlaid on a maximum intensity projection of the recorded volume. The visually-evoked neurons contributing to the largest noise mode overlap strongly with the neurons contributing to the decoding direction in panel G.

I. The neural noise modes are highly correlated with tail movements. Shown are both the GCaMP kernel-convolved tail vigor and the neuronal projection onto the first noise mode for a representative larva. Over the full two-hour recording, the tail vigor and noise mode projection exhibit a significant correlation of r=0.58, p<10⁻⁶.

J. The largest neural noise modes reflect brain-wide motor encoding. Shown are the correlations between the first noise mode and the tail vigor (blue) or the first principal component (PC 1, orange) of whole-brain data (see Figure S3C). Dots show data from individual larvae, whereas the violin plots below show the null distribution for temporally shuffled data, in which the tail vigor or PC 1 is circularly permuted.
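The circular-permutation null can be sketched as below, with synthetic traces standing in for the noise-mode projection and tail vigor; circular shifts preserve the autocorrelation of each trace while breaking their temporal alignment.

```python
import numpy as np

def circular_shuffle_null(x, y, n_shuffles=1000, min_shift=100, rng=None):
    """Null distribution of correlations from circularly permuted copies of y.

    min_shift avoids near-zero shifts; the shuffle count and shift bounds
    are illustrative choices.
    """
    rng = np.random.default_rng() if rng is None else rng
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        shift = rng.integers(min_shift, x.size - min_shift)
        null[i] = np.corrcoef(x, np.roll(y, shift))[0, 1]
    return null

# Synthetic noise-mode projection and tail vigor traces (placeholders)
rng = np.random.default_rng(2)
proj = rng.standard_normal(7200)
vigor = 0.5 * proj + rng.standard_normal(7200)

r_obs = np.corrcoef(proj, vigor)[0, 1]
null = circular_shuffle_null(proj, vigor, rng=rng)
p = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (null.size + 1)
print(f"r = {r_obs:.2f}, permutation p = {p:.3f}")
```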

Pre-motor neuronal populations are predictive of single-trial behavior

A. Trials are time-warped to align stimulus and decision onsets before classifying the turn direction. Top: Time-warped tail curvatures for trials in which the fish performed leftward or rightward turns, on the left and right, respectively. Bottom: Trial-averaged and time-warped neuronal timeseries for 15,286 neurons during left and right turns. The neurons are sorted using the rastermap algorithm. A time window is swept across the stimulus and decision timepoints to train binary classification models to predict the turn direction from the associated neuronal dynamics.
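A minimal sketch of the time-warping step is given below, using piecewise linear resampling between the stimulus and decision timepoints; the segment lengths and example trace are illustrative choices rather than the Methods values.

```python
import numpy as np

def time_warp_trial(trace, t_stim, t_dec, n_pre=20, n_mid=40, n_post=20):
    """Piecewise-linearly resample one trial onto a common warped time base.

    The segments before stimulus onset (t_stim), between stimulus and
    decision (t_dec), and after the decision are each resampled to a fixed
    number of samples. Segment lengths here are illustrative.
    """
    frames = np.arange(trace.size)
    pre = np.interp(np.linspace(0, t_stim, n_pre), frames, trace)
    mid = np.interp(np.linspace(t_stim, t_dec, n_mid), frames, trace)
    post = np.interp(np.linspace(t_dec, trace.size - 1, n_post), frames, trace)
    return np.concatenate([pre, mid, post])

# Example: warp a synthetic trace with stimulus at frame 50 and decision at frame 83
trace = np.random.randn(160)
warped = time_warp_trial(trace, t_stim=50, t_dec=83)
print(warped.shape)  # (80,) samples on the common warped time axis
```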

B. Stimulus classification accuracy peaks after the onset of visual stimulation. The mean F score across n=7 larvae is used to assess the performance of 6-way multiclass classification of the presented visual stimulus as a function of warped time surrounding the stimulus onset (Stim.) and decision timepoint (Dec.). Shown is the mean ± 95% confidence interval of the F score for the best time window ending at the given timepoint (Data), compared to shuffled data in which the class labels are randomized. The black bar at the bottom indicates timepoints where the data have a significantly higher F score than the shuffled data (p<0.05, paired t-test).
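The sliding-window classification and shuffle comparison could be sketched as below, reusing an L1-regularized logistic regression and a macro-averaged F score; the window length, classifier settings, and synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

def window_f_score(warped_activity, labels, t_end, win=10, rng=None):
    """Macro-averaged F score for decoding from a window ending at t_end.

    warped_activity: trials x neurons x warped timepoints. Window length and
    classifier settings are illustrative, not the exact Methods values.
    """
    X = warped_activity[:, :, max(0, t_end - win):t_end].mean(axis=2)
    clf = LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000)
    y_pred = cross_val_predict(clf, X, labels, cv=6)
    score = f1_score(labels, y_pred, average="macro")

    # Shuffle control: randomize the class labels and repeat
    rng = np.random.default_rng() if rng is None else rng
    y_shuf = rng.permutation(labels)
    y_pred_shuf = cross_val_predict(clf, X, y_shuf, cv=6)
    return score, f1_score(y_shuf, y_pred_shuf, average="macro")

# Synthetic example: 120 trials, 200 neurons, 80 warped timepoints, 6 stimuli
acts = np.random.default_rng(3).standard_normal((120, 200, 80))
labs = np.repeat(np.arange(6), 20)
print(window_f_score(acts, labs, t_end=40))
```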

C. Binary classification of responsiveness, whether or not the fish responds in a given trial, is significant throughout all time periods but accuracy peaks near movement initiation. As in panel B, except for binary classification of responsiveness. Nonresponsive trials are time-warped by randomly selecting a reaction time from the response trials and applying the same transformation.

D. (i) Turn direction classification accuracy is significantly higher than shuffled data across the entire time-warped interval, but peaks near movement initiation. As in panel B, except for binary classification of turn direction. (ii) Single trial classification of turn direction across larvae. Shown is the mean confusion matrix across n=7 larvae, corresponding to an accuracy of 77 ± 4% (mean ± 95% confidence interval).

E. Single trial trajectories are separated based on responsiveness and turn direction. Shown are neural activity trajectories during single trials in an example larva projected onto the brain-wide neural dimensions that optimally separated turn direction and responsiveness.

F. Consistent trial-averaged trajectories across larvae. As in panel E, except for the trial-averaged responses for n=6 example larvae. For the one larva shown in bold, timepoints across the trial are indicated by a circle for the trial start, a diamond for the decision timepoint, and an X for the trial end.

G. Real-time single-trial dynamics in an example larva. Along the turn direction neural projection, left and right trials are separated for many seconds before the decision timepoint, which is longer than the three-second duration of the visual presentation. Activity along this dimension shows consistent ramping across trials approximately one second before movement.

H. Example turn direction neuronal ensemble. Shown are the coefficients for all neurons which showed significantly higher (one-tailed t-test, p<0.05) absolute coefficients in the real models compared to models trained on shuffled data in which the turn direction labels were randomly permuted.
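One way to flag significant neurons is sketched below, using a one-tailed two-sample t-test between real and shuffled absolute coefficients as an illustrative stand-in for the exact procedure described in the Methods; the coefficient arrays are synthetic.

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_neurons(real_coefs, shuffled_coefs, alpha=0.05):
    """Neurons whose real |coefficients| are significantly higher than shuffles.

    real_coefs: (n_fits, n_neurons) from repeated fits on the real labels;
    shuffled_coefs: (n_shuffles, n_neurons) from fits on permuted labels.
    """
    t, p = ttest_ind(np.abs(real_coefs), np.abs(shuffled_coefs),
                     axis=0, alternative="greater")
    return p < alpha

# Synthetic example: 1000 neurons, 5 real refits, 20 label shuffles;
# the first 50 neurons are given genuinely larger weights.
rng = np.random.default_rng(4)
real = rng.standard_normal((5, 1000)) * np.where(np.arange(1000) < 50, 3.0, 0.3)
shuf = rng.standard_normal((20, 1000)) * 0.3
print(significant_neurons(real, shuf).sum(), "significant neurons")
```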

I. Highly distributed encoding of turn direction across larvae. The percentage of significant turn direction neurons located within the four major brain regions (Tel – telencephalon, Tec – optic tectum, Cer – cerebellum, and Hind – hindbrain) is shown for n=10 larvae. There is no significant difference between the percentage of neurons across brain regions (p>0.05, paired t-test).

J. Example responsiveness neuronal ensemble. As in panel H, except for responsiveness. Shown are the coefficients for all neurons which showed significantly higher (one-tailed t-test, p<0.05) absolute coefficients in the real models compared to models trained on shuffled data in which the responsiveness labels were randomly permuted.

K. Highly distributed encoding of responsiveness across larvae. As in panel I, except for responsiveness. The percentage of significant responsiveness neurons located within the four major brain regions (Tel – telencephalon, Tec – optic tectum, Cer – cerebellum, and Hind – hindbrain) is shown for n=10 larvae. There is no significant difference between the percentage of neurons across brain regions (p>0.05, paired t-test).

Spontaneous turns are predictable from the same pre-motor neuronal population

A. Schematic of the approach to predict spontaneous turns. Models are fit to predict left or right turn direction during visual trials (top), as in Figure 4. They are then tested on the pre-motor period one second before each spontaneous turn.
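A minimal sketch of this transfer test is given below: a classifier is fit on pre-motor windows from visual trials and then applied, without refitting, to the one-second windows preceding spontaneous turns. All arrays and classifier settings are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Training data: neural activity in pre-motor windows of visual trials
# (trials x neurons) with left/right turn labels. Synthetic placeholders.
X_visual = rng.standard_normal((80, 400))
y_visual = rng.integers(0, 2, 80)  # 0 = left, 1 = right

# Test data: activity in the one-second window before each spontaneous turn
X_spont = rng.standard_normal((30, 400))

# Fit on visually evoked trials, then apply without refitting to spontaneous turns
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_visual, y_visual)
pred_direction = clf.predict(X_spont)           # predicted left/right label
pred_prob_right = clf.predict_proba(X_spont)[:, 1]
print(pred_direction[:10], pred_prob_right[:3])
```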

B. Spontaneous turns are predicted by the visually-evoked model. Shown is the relationship between the spontaneous tail curvature and the pre-motor turn direction predicted by the visually-evoked model. Each dot is a single spontaneous turn, and each color represents a different larva. These exhibit a significant correlation of r=0.41, p<0.05.

C. Spontaneous turn classification accuracy. The mean cross-validated confusion matrix for spontaneous turn classification over n=5 larvae. Spontaneous turns are predicted with an accuracy of 70.2 ± 6.0% (mean ± 95% CI across n=5 larvae).