Abstract
We present the implementation and efficacy of an open-source closed-loop neurofeedback (CLNF) and closed-loop movement feedback (CLMF) system. In CLNF, we measure mm-scale cortical mesoscale activity with GCaMP6s and provide graded auditory feedback (within ∼50 ms) based on changes in dorsal-cortical activation within regions of interest (ROI) and with a specified rule. Single or dual ROIs (ROI1, ROI2) on the dorsal cortical map were selected as targets. Both motor and sensory regions supported closed-loop training in male and female mice. Mice modulated activity in rule-specific target cortical ROIs to obtain increasing numbers of rewards over days (RM ANOVA p=2.83e-5) and adapted to changes in ROI rules (RM ANOVA p=8.3e-10; Table 4 for different rule changes). In CLMF, feedback was based on tracking a specified body movement, and rewards were generated when the behavior reached a threshold. For movement training, the group that received graded auditory feedback performed significantly better (RM-ANOVA p=9.6e-7) than a control group (RM-ANOVA p=0.49) within four training days. Additionally, mice learned a change in task rule from left forelimb to right forelimb within a day, after a brief performance drop on day 5. Offline analysis of neural data and behavioral tracking revealed changes in the overall distribution of ΔF/F0 values in CLNF and body-part speed values in CLMF experiments. Increased CLMF performance was accompanied by a decrease in task latency and cortical ΔF/F0 amplitude during the task, indicating lower cortical activation as the task became more familiar.
Introduction
Most investigations study brain activity and behavior as separate channels that do not interact in real time. Assessments are typically made post hoc, and experimental contingencies are not dependent on regional brain activity fluctuations. In contrast, closed-loop brain stimulation/manipulation requires a continuous dialogue between the brain, subject, and goal-directed outcome (1). The concept of closed-loop feedback has been around for nearly 50 years, beginning with the pioneering work described in (2). Surprisingly, the application of closed-loop methods has been relatively limited in the rodent literature (1,3–11). Some of the reasons for this are technical (high-dimensional data, real-time processing, low latency) and logistical (compactness, portability, device interface), while others relate to the inherent complexity of rodent behavior and neurophysiology (undesired movements, choice of neural and behavioral metrics for evaluation). An even smaller subset of papers employs modern genetically encoded sensor mesoscopic imaging and closed-loop manipulation as we do (1,3,4). Animal models offer the ability to optimize how and where brain activity is monitored and by what mechanism to best produce the closed-loop feedback.
We present the implementation and efficacy of a closed-loop feedback system in head-fixed mice (Figure 1), employing two types of graded feedback: 1) Closed-loop neurofeedback (CLNF), where feedback is derived from neuronal activity, and 2) Closed-loop movement feedback (CLMF), where feedback is based on observed body movement. These approaches provide a foundational understanding of the potential of closed-loop feedback systems. We term this new Python-based platform for Closed-Loop Feedback Training System CLoPy and provide all software, hardware schematics, and protocols to adapt it to various experimental scenarios.
Results
Current work and previous studies have shown that mice can achieve volitional control of brain activity aided by the representation of brain activity as an external variable through feedback (1, 3, 12, 13). Mice can learn these tasks robustly, and interestingly, can adapt to changes in the task rules. While brain activity can be controlled through feedback, other variables such as movements have been less studied, in part because their analysis in real time is more challenging. Our goal has been to deliver a robust, cross-platform, and cost-effective system for closed-loop feedback experiments. We have designed and tested a behavioral paradigm where head-fixed mice learn an association between their cortical or behavioral activity, external feedback, and rewards. We tested our CLNF system on Raspberry Pi for its compactness, general-purpose input/output (GPIO) programmability, and wide community support, while the CLMF system was tested on an Nvidia Jetson GPU device. While our investigations center around mesoscale cortical imaging of genetically encoded calcium sensors or behavior imaging, the approach could be adapted to any video or microscopy-dependent signal where relative changes in brightness or keypoint behavior are observed and feedback is given based on a specific rule. This system benefits from advances in pose estimation (14, 15) and employs strategies to improve real-time processing of pre-defined keypoints on compact computers such as the Nvidia Jetson. We have constructed a software and hardware-based platform built around open-source components.
To examine the performance of the closed-loop system, we used water-restricted adult transgenic mice that expressed GCaMP6s widely in the cortex (details in the methods section and in Table 4 and Table 5). Using the transcranial window imaging technique, we assessed the ability of these animals to control brain regions of interest and obtain water rewards. After an initial period of habituation to manual head fixation, mice were switched to closed-loop task training. In training, we had two primary groups: 1) a single cortical ROI linked to water reward and the subject of auditory feedback (Animation 1A, 1B, 1C), or 2) a pairing between two cortical ROIs where auditory feedback and water rewards were given based on the difference in activity between sites (Animation 1D, 1E, 1F). Our results indicate that both training paradigms led mice to obtain significantly more rewards over time.
For neurofeedback-CLNF, we calculate a ΔF/F0 value (GCaMP signal), which represents a relative change of intensity in the region(s) of interest. These calculations are made in near real-time on the Raspberry Pi and are used to control the GPIO output pins that provide digital signals for closed-loop feedback and water rewards to water-restricted mice. In 1-ROI experiments, we mapped the range of average ΔF/F0 values in the ROI (Figure 2) to a range of audio frequencies (1 kHz - 22 kHz), which acted as feedback to the animal. For 2-ROI experiments, the magnitude of the ΔF/F0 activity difference between the ROIs, based on the specified rule (e.g., ROI1-ROI2), was mapped to the range of audio frequencies. We confirmed that these frequencies were accurately generated and mapped by audio recordings (Supplementary Figure 2 and see Methods) obtained at 200 kHz using an ultrasonic microphone (Dodotronic, Ultramic UM200K) positioned within the recording chamber ∼10 cm from the audio speaker. A previous version of the CLNF system was found to have non-linear audio generation above 10 kHz, partly due to problems in the audio generation library and partly due to the consumer-grade speaker hardware we were employing. This was fixed by switching to the Audiostream (https://github.com/kivy/audiostream) library for audio generation and testing the speakers to make sure they could output the commanded frequencies (Supplementary Figure 2). To confirm the timing of feedback latency, LED lights were triggered instead of water rewards (Supplementary Figure 1), and the delay was calculated between the detected event (green LED ON for CLNF and paw movement for CLMF) and the red LED flash. In the case of CLNF, the camera recording brain activity was used to record both the flashing green and red LEDs. Temporal traces of green and red LED pixels were extracted from the recorded video, and the average delay between the green and red LEDs becoming bright was calculated as the delay in closed-loop feedback for CLNF experiments. Performing this analysis indicated that the Raspberry Pi system could provide reliable graded feedback within ∼63 ± 15 ms for CLNF experiments.
For the CLMF experiments, an Omron Sentech STC-MCCM401U3V USB3 Vision camera, connected to the Nvidia Jetson via its Python software developer kit (SDK), was used to calculate the feedback delay. The incoming stream of frames was processed in real-time using a custom deep-neural-network model that was trained using DeepLabCut (16) and DeepLabCut-Live (15), designed to track previously defined points on the mouse. The model was incrementally improved by fine-tuning and re-training 26 times, using 2080 manually labeled frames spanning 52 videos of 10 mice. The pre-trained model is available for anyone to use and fine-tune to adapt for similar platforms. The model was integrated into our CLMF program and deployed on an Nvidia Jetson device for real-time inference of tracked points. A Python-based virtual environment using Conda was created to install all software dependencies. The coordinates of the tracked points for each frame were appended to a list, forming a temporal sequence referred to as “tracks.” Upon offline analysis, these real-time tracks were found to be both accurate and stable throughout the duration of the videos.
To calculate the feedback delay for movements, a red LED was placed within the behavior camera’s field of view. Whenever a threshold-crossing movement was detected in the real-time tracking, the system triggered the LED to turn on. Temporal traces of the tracked left forelimb and the pixel brightness of the red LED were then extracted from the recorded video. By comparing these traces, the average delay between the detected movements and the LED illumination was calculated to be 67±20 ms, which represents the delay in the closed-loop feedback for the CLMF experiments.
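For reference, the delay between two such temporal traces can be estimated offline with a short script. The sketch below assumes the event trace and the red-LED brightness trace have already been extracted as NumPy arrays sampled at the camera frame rate; the array names, the normalized 0.5 threshold, and the frame rate in the usage comment are illustrative rather than part of CLoPy.

```python
import numpy as np

def onset_indices(trace, threshold):
    """Return frame indices where the trace rises above threshold."""
    above = trace > threshold
    # An onset is a frame above threshold whose previous frame was below it.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def mean_feedback_delay(event_trace, led_trace, fps, threshold=0.5):
    """Average delay (in ms) between detected events and red-LED onsets."""
    event_on = onset_indices(event_trace, threshold)
    led_on = onset_indices(led_trace, threshold)
    delays = []
    for t in event_on:
        later = led_on[led_on >= t]
        if later.size:                       # nearest LED onset after the event
            delays.append((later[0] - t) / fps * 1000.0)
    return np.mean(delays), np.std(delays)

# Example usage with hypothetical traces sampled at 15 Hz (CLNF brain camera):
# mean_ms, std_ms = mean_feedback_delay(green_led_trace, red_led_trace, fps=15)
```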
For CLMF, mice were head-fixed in a specially designed head-fixing chamber (Figure 2Biii, design guide and 3D model in methods) to achieve multiview behavioral recording from a single camera using mirrors (see methods section). In brief, mice are head-fixed in a transparent rectangular tunnel (top view) with a mirror at the front (front view) and at the bottom (bottom view) that allows multiple views of the mouse body. The body parts that we track are: snout-top (snout in the top view), tail-top (base of the tail in the top view), snout-front (snout in the front view), FLL-front (left forelimb in the front view), FLR-front (right forelimb in the front view), snout-bottom (snout in the bottom view), FLL-bottom (left forelimb in the bottom view), FLR-bottom (right forelimb in the bottom view), HLL-bottom (left hindlimb in the bottom view), HLR-bottom (right hindlimb in the bottom view), and tail-bottom (base of the tail in the bottom view) (Figure 2B). The rationale for selecting these body parts in a particular view was that they needed to be visible at all times to avoid misclassification in real-time tracking. By combining the tracks of a body part in different views, we can form a 3D track of the body part. In a 3D coordinate space having X, Y, and Z axes, tracked points (xb, yb) in the bottom view were treated as being in the 3D X-Y plane, and tracked points (xf, yf) in the front view were treated as being in the 3D Y-Z plane. Thus, X = xb, Y = yb, Z = yf formed the tracked points in 3D for a given body part that was tracked in multiple views.
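This view-combination step amounts to stacking coordinates from the two views; a minimal sketch, assuming per-frame coordinates are stored as NumPy arrays keyed by body-part name, is:

```python
import numpy as np

def track_3d(bottom_xy, front_xy):
    """Combine bottom-view (xb, yb) and front-view (xf, yf) tracks of the same
    body part into a 3D track, following X = xb, Y = yb, Z = yf.
    bottom_xy, front_xy: arrays of shape (n_frames, 2)."""
    X = bottom_xy[:, 0]
    Y = bottom_xy[:, 1]
    Z = front_xy[:, 1]                     # vertical coordinate from the front view
    return np.stack([X, Y, Z], axis=1)     # shape (n_frames, 3)

# e.g. fll_3d = track_3d(tracks["FLL-bottom"], tracks["FLL-front"])
```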
For example, FLL-front and FLL-bottom were tracking the left forelimb in front and bottom views, and by combining the tracks of these two points, we obtained a 3D track of the left forelimb (Figure 3E). Although these 3D tracks are available in real-time, for our CLMF experiments, we used 2D tracks (xb, yb) for behavioral feedback. Audio output channels and GPIO pins on the Nvidia Jetson were used for audio feedback and reward delivery, respectively. Tracks of each body part were used to calculate the speed of those points in real-time, and the speed of a selected body part (also referred to as the control point) was mapped to a function generating a proportional audio frequency (same as in CLNF, details in the methods section). In the software we have developed, one can also choose to calculate acceleration, radial distance, angular velocity, etc., from these tracks and map it to the function generating varying audio frequency feedback. For our work, a range of spontaneous speeds was calculated from a baseline recording before the start of the training. The threshold speed to receive a reward was also calculated from the baseline recording and set at a value that would have yielded a reward rate of one reward per two-minute period (translating to a basal performance of 25% in our trial task structure). This thresholding step was done to allow the mice to discover task rules and keep them motivated.
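As an illustration of the real-time speed computation and reward rule, the following is a minimal sketch. The frame rate, ppmm conversion, and threshold values in the usage comment are placeholders (the framerate, ppmm, and speed_threshold parameters are set in config.ini; see Key configuration parameters), and the actual CLoPy implementation may differ in detail.

```python
import numpy as np

def control_point_speed(track_xy, fps, ppmm):
    """Per-frame speed (mm/s) of a tracked control point from its 2D track.
    track_xy: array of shape (n_frames, 2) in pixels."""
    dxy = np.diff(track_xy, axis=0)                  # pixel displacement per frame
    speed_px = np.hypot(dxy[:, 0], dxy[:, 1]) * fps  # pixels per second
    return speed_px / ppmm                           # convert to mm/s

def trial_outcome(speed, speed_threshold):
    """A trial is a success if the control-point speed crosses the threshold."""
    return bool(np.any(speed >= speed_threshold))

# e.g. speed = control_point_speed(tracks["FLL-bottom"], fps=30, ppmm=5.0)
#      success = trial_outcome(speed, speed_threshold=40.0)   # illustrative values
```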
Both CLNF and CLMF experiments shared a similar experimental protocol (surgery, habituation, then several days of training, Figure 3A). A daily session starts with a 30-sec rest period (no rewards or task feedback) followed by 60 trials (maximum 30 sec each), with a 10-sec rest between trials in CLNF, and a minimum 10-sec rest (or until tracked points are stable for 5 sec) in CLMF (Figure 3B, 3D). After the habituation period, a spontaneous session of 30 minutes was recorded where mice were not given any feedback or rewards.
The spontaneous session was used to establish baseline levels of GCaMP activity (in target ROI(s) for CLNF experiments) or the speed of a target body part (for CLMF experiments). This was done to calculate the animal-specific threshold for future training sessions. A success in the trial resulted in a water drop reward delivered 1 sec after the end of the trial, and a failed trial ended with a buzzer vibration 1 sec after the end of the trial (Figure 3B, 3D).
Mice can explore and learn arbitrary task rules and target conditions
CLNF training (Figure 2A) required real-time image processing, feedback, and reward generation. Feedback was a graded auditory tone mapped to the relative changes in selected cortical ROI(s) or the movement speed of a tracked body part. Training was conducted using multiple sets of cortical ROI(s) on both male and female mice (see Table 4), wherein the task was to increase the activity in the selected ROI(s) according to a specified rule (hereafter "the rule") to above a predetermined threshold (Figure 5A, 5B). Fluorescence activity changes (ΔF/F0) were calculated using a running baseline of 5 seconds, with single or dual ROIs on the dorsal cortical map selected as targets. In general, all ROIs assessed that encompassed sensory, pre-motor, and motor areas were capable of supporting increased reward rates over time (Figure 4A, Animation 1). A ΔF/F0 threshold value was calculated from a baseline session on day 0 that would have allowed 25% performance. Starting from this basal performance of around 25% on day 1, mice (CLNF No-rule-change, n=28 and CLNF Rule-change, n=13) were able to discover the task rule and perform above 80% over ten days of training (Figure 4A, RM ANOVA p=2.83e-5), and Rule-change mice even learned a change in ROIs or rule reversal (Figure 4A, RM ANOVA p=8.3e-10, Table 4 for different rule changes). There were no persistent significant differences between male and female mice (Supplementary Figure 3A). To investigate whether certain ROI(s) were better than others in terms of performance, we performed linear regression of the success rate over days and, based on the slope of the fitted line, identified ROI rules whose success-rate progression differed statistically (faster or slower) from the mean slope across all ROIs (Supplementary Figure 3C). We visualized these significantly different ROI-rule-based success rates and segregated them into fast and slow based on the slope of the progression (slopes >=0.095 were designated fast, the rest slow) (Supplementary Figure 3D). Our analysis revealed that certain ROI rules (see description in methods) led to a greater increase in success rate over time than others (Supplementary Figure 3D).
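The slope-based classification of ROI rules can be reproduced offline along the following lines; this is a sketch assuming daily success rates per rule are available as arrays, with the 0.095 fast/slow cutoff taken from the analysis above.

```python
import numpy as np
from scipy.stats import linregress

def classify_roi_rules(success_rates, fast_slope=0.095):
    """Fit success rate vs. training day for each ROI rule and label its
    progression as 'fast' or 'slow' based on the fitted slope.
    success_rates: dict mapping rule name -> array of daily success rates (0-1)."""
    labels = {}
    for rule, rates in success_rates.items():
        days = np.arange(1, len(rates) + 1)
        fit = linregress(days, rates)
        labels[rule] = ("fast" if fit.slope >= fast_slope else "slow", fit.slope)
    return labels
```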
In CLMF training, the real-time speed of a selected tracked point (also referred to as the control point, CP) was mapped to the graded audio tone generator function. We trained the mice with FLL-bottom as the CP for auditory feedback and reward generation (example trial in Animation 2). Mice reached 80% performance (Figure 4B, CLMF Rule-change, No-rule-change) in the task within four days of training (RM ANOVA, p = 8.03e-7). They elicited the target behavior (i.e., moving the CP at high speed) more frequently on later days compared to the first day of training (Figure 4F), and the correlations of CP speed profiles became more pronounced during the trial periods as compared to the rest periods (Figure 6C).
Mice can rapidly adapt to changes in the task rule
We have assessed how mice respond to changes in closed-loop feedback rules. CLNF Rule-change mice went through a change in regulated cortical ROI(s) (Figure 5A, 5B) after being trained on an initial set of ROI(s) (Table 4, Rule-change). The new ROI(s) were chosen from a list of candidate ROI(s) for which the cortical activations were still low (see “Determining ROI(s) for change in CLNF task rule” under Methods). Reward threshold values were then recalculated for the ROI(s). The full set of ROI(s) tested in separate mice is listed in Table 4. Mouse performance degraded on the first day of the rule switch to the new ROIs (ANOVA, p=8.7e-9), compared to the previous day (Figure 4A), and quickly recovered within five days (RM ANOVA p=8.3e-10) of further training. All new cortical ROI(s) appeared to support recovery to the pre-perturbation success rate (Table 4), and data were pooled.
Similarly, in the CLMF experiments, the target body part was changed from FLL to FLR for Rule-change mice on day 5 and was associated with a significant drop in their success rate from 75-80% to ∼40% (ANOVA p=0.008, Figure 4B). Surprisingly, mice quickly adapted to the rule change and started performing above 70% within a day on day 6 (example trial in Animation 3). Examining the reward rate on day 5 (the day of the rule change), mice had a higher reward rate in the second half of the session than in the first half, indicating they were adapting to the rule change within one session. As the mice were learning the task (increasing performance), task latency decreased (Figure 4C). Task latency in this context is the time taken by mice to perform the task within a trial period of 30 s. We included all trials, both successful and unsuccessful, in our calculation. Given that the maximum trial duration is 30 s, the longest possible task latency was capped at 30 s. Following the trend in task performance, the task latency started decreasing during the first four days but increased on day 5 (rule change) and started to drop again afterward.
We also examined the average paw speeds and their distributions during trial periods. It is worth noting that for the rule-change group, left forelimb speeds were higher than right forelimb speeds from day 1 to day 4 (when the task rule was to move the left forelimb). When the rule was switched from left forelimb to right forelimb on day 5, left forelimb speeds dropped below the right forelimb speeds (Figure 4D).
Graded feedback helps in task exploration during learning, but not after learning
To investigate the role of audio feedback in our task, we also included a group (CLMF No-feedback) that received no task-related graded audio feedback and instead heard audio at a constant frequency (1 kHz) throughout the trials. CLMF No-rule-change mice, who received continuous graded auditory feedback, significantly improved their task performance (CLMF No-rule-change RM-ANOVA p = 9.6e-7) and outperformed the CLMF No-feedback mice (Figure 4B) very early in training (No-feedback RM-ANOVA p = 0.49), indicating the positive role of graded feedback for task exploration and learning the association. When graded audio feedback was removed for CLMF Rule-change mice on day 10, it did not affect their task performance, indicating the feedback was not essential for continuing to perform a task that had already been learned.
Cortical responses became focal and more correlated as mice learned the rewarded behavior in CLMF
There are reports of cortical plasticity during motor learning tasks, both at cellular and mesoscopic scales (17–19), supporting the idea that neural efficiency could improve with learning. As mice become proficient in a task, their brain activity becomes more focused and less widespread, reflecting more efficient neural processing (Figure 7). We noticed that peak ΔF/F0 activity around the rewarding movement decreased in amplitude over days in different regions of the cortex (Figure 7A, 7B, 7C, 7E). To quantify this, we measured the peak ΔF/F0 (ΔF/F0peak) value in the time window from -1 s to +1 s relative to the body-part speed threshold-crossing event in each cortical region. Consistent with our visual observations, ΔF/F0peak in several cortical regions gradually decreased over sessions, including the olfactory bulb, sensory forelimb, and primary visual cortex (Figure 7A, 7B, 7C). Notably, when the task rule was changed from FLL to FLR in the CLMF Rule-change mice, we observed a significant increase in ΔF/F0peak in regions including the olfactory bulb (OB), forelimb cortex (FL), hindlimb cortex (HL), and primary visual cortex (V1), as shown in Figure 7. We believe the decrease in ΔF/F0peak is unlikely to be driven by changes in movement, as movement amplitudes did not decrease significantly during these periods (Figure 7D, CLMF Rule-change). However, the decrease in ΔF/F0peak followed the same trend as task latency (Figure 4C), suggesting that the decrease in ΔF/F0peak is especially prominent in trials where mice were prepared to make the fast movement.
These results suggest that motor learning led to less cortical activation across multiple regions, which may reflect more efficient processing of movement-related activity. In addition to ΔF/F0peak reflecting signs of potentially more efficient cortical signaling, intracortical GCaMP transients measured over days became more stereotyped in kinetics and were more correlated (to each other) as the task performance increased over the sessions (Figure 7E).
Analysis of pairwise correlations between cortical regions (referred to as seed pixel correlation maps) revealed distinct network activations during rest and trial periods (Figure 8). While the general structure of the correlation maps remained consistent between trial and rest periods, certain correlations, such as between V1 and RS, were heightened during task trials, whereas others, such as between M1, M2, FL, and HL, consistently increased over the training sessions (Figure 8).
To statistically examine the differences in seed pixel correlations during trials and rest periods, as well as how these correlations changed over training sessions (Day 1-10), we conducted a two-way repeated measures ANOVA (RM-ANOVA) on the seed pixel correlation maps for each day. The two variables for the RM-ANOVA were experimental condition (trial vs. rest) and session number (Day 1-10). This analysis generated two distinct matrices of Bonferroni-corrected p-values (Figure 8), one corresponding to each variable, which segregated the seed pixel correlations that were different between trial and rest periods and those that changed over the training sessions.
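A sketch of this per-pair analysis, assuming the seed pixel correlations have been arranged in a long-format table with columns 'mouse', 'condition', 'day', 'pair', and 'correlation' (column names are illustrative, not the CLoPy data format), could look as follows using statsmodels:

```python
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.multitest import multipletests

def rm_anova_per_pair(df, pair_names):
    """Two-way repeated-measures ANOVA (condition: trial vs. rest; day: 1-10)
    for each seed-pixel pair, with Bonferroni correction across pairs."""
    p_condition, p_day = [], []
    for pair in pair_names:
        sub = df[df["pair"] == pair]
        res = AnovaRM(sub, depvar="correlation", subject="mouse",
                      within=["condition", "day"]).fit()
        table = res.anova_table
        p_condition.append(table.loc["condition", "Pr > F"])
        p_day.append(table.loc["day", "Pr > F"])
    # Bonferroni-corrected p-values, one set per within-subject factor
    p_condition = multipletests(p_condition, method="bonferroni")[1]
    p_day = multipletests(p_day, method="bonferroni")[1]
    return dict(zip(pair_names, p_condition)), dict(zip(pair_names, p_day))
```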
Distinct task- and reward-related cortical dynamics
In our closed-loop experiment, mice were trained to associate a specific body movement with a reward. Over the course of 10 training sessions, the mice progressively learned the task, as evidenced by increased task performance across sessions (Figure 4A). The acquisition of this learned behavior was consistent with previous findings on motor learning, where reinforcement and task repetition lead to the refinement of motor skills (20,21). Importantly, the reward was provided 1 s after a successful trial to study the timing and nature of cortical activation in response to both the task and the reward. Concurrent with behavioral training, widefield cortical activity was recorded, revealing distinct patterns of neural activation that evolved as the mice learned the task.
During the early sessions (days 1 to 3), cortical activity was observed to be spatially widespread and engaged multiple cortical regions. Temporally, the activity spanned both task-related and reward-related events, with no clear distinction between the two phases (Figure 7B, left). This broad activation pattern is consistent with the initial stages of learning, where the brain recruits extensive cortical networks to process novel tasks and integrate sensory feedback (22).
As the mouse performance improved in the later sessions (Days 8 to 10), the cortical activity became more segregated both spatially and temporally (Figure 7B, middle). This segregation was particularly notable in mice that received closed-loop feedback (Rule-change), where the spatiotemporal patterns of cortical activation were more closely aligned with the specific task and reward events. This transition from widespread to segregated activation is indicative of the brain’s optimization of neural resources as the task becomes more familiar, a phenomenon that has been reported in studies of skill acquisition and motor learning (17,18). Previous studies have shown that feedback, especially when provided in a temporally precise manner, can accelerate the cortical plasticity associated with learning (23). Overall, these findings highlight the importance of closed-loop feedback in motor learning and suggest that real-time neurofeedback can enhance the specificity of cortical representations, potentially leading to more efficient and robust learning outcomes.
Discussion
Flexible, cost-effective, and open-source system for a range of closed-loop experiments
We developed a highly adaptable, cost-effective, and open-source system tailored for a wide range of closed-loop neuroscience experiments (CLNF, CLMF). Our system is built on readily available hardware components, such as Raspberry Pi and Nvidia Jetson platforms, and leverages Python-based software for real-time data processing, analysis, and feedback implementation. The modular approach ensures that the system can be easily customized to meet the specific requirements of different experimental paradigms. Our study demonstrates the effectiveness of a versatile and cost-effective closed-loop feedback system for modulating brain activity and behavior in head-fixed mice. By integrating real-time feedback based on cortical GCaMP imaging and behavior tracking, we provide strong evidence that such closed-loop systems can be instrumental in exploring the dynamic interplay between brain activity and behavior. The system’s ability to provide graded auditory feedback and rewards in response to specific neural and behavioral events showcases its potential for advancing research in neurofeedback and behavior modulation.
The hardware backbone of our system is designed around the Raspberry Pi 4B+ and Nvidia Jetson platforms, chosen for their compactness, low cost, high computational power, and wide community support. These platforms have been demonstrated to be effective in neuroscience research, for several applications (24–27). By utilizing open-source software frameworks such as Python, the system can be modified and extended to incorporate additional functionalities or adapt to new experimental protocols.
Our Python-based software stack includes libraries for real-time data acquisition, signal processing, and feedback control. For instance, we utilize OpenCV for video processing and DeepLabCut-Live (15,16) for behavioral tracking, which are essential components in experiments requiring precise monitoring and feedback based on animal behavior. Additionally, we have integrated libraries for handling neural data streams, such as NumPy and SciPy, which facilitate the implementation of complex experimental designs (28) involving multiple data sources and feedback modalities. By integrating the system with LED drivers and opsin-expressing transgenic mouse lines, it is straightforward to achieve precise temporal control over neural activation, enabling the study of causal relationships between neural circuit activity and behavior (29).
The cost-effectiveness of our system is a significant advantage, making it accessible to a broader range of research labs, including those with limited funding. The use of off-the-shelf components and open-source software drastically reduces the overall cost compared to commercially available systems, which often require expensive proprietary hardware and software licenses (30,31). Furthermore, the open-source nature of our system promotes collaboration and knowledge sharing within the research community, as other labs can freely modify, improve, and distribute the system without any licensing restrictions.
The system’s flexibility is demonstrated by its successful application across various closed-loop experimental paradigms (CLNF, CLMF). For example, in our study, we utilized the system to implement real-time feedback based on calcium-indicator (GCaMP) fluorescence imaging in awake, behaving mice. The system provided auditory feedback in response to changes in cortical activation, allowing us to explore the role of real-time feedback in modulating both neural activity and behavior.
Importance of Closed-Loop Feedback Systems
Closed-loop feedback systems have gained recognition for their ability to modulate neural circuits and behavior in real time, an approach that aligns with the principles of motor learning and neuroplasticity. The ability of mice to learn and adapt to tasks based on cortical or behavioral feedback underscores the parallels between closed-loop systems and natural proprioceptive mechanisms, where continuous sensory feedback is crucial for motor coordination and spatial awareness. This study builds upon the foundational work of (32), who first explored the potential of closed-loop neurofeedback, and expands its application to modern neuroscience by incorporating optical brain-computer interfaces.
Our findings are consistent with previous research showing that rodents can achieve volitional control over externally represented variables linked to their behavior (1,3,4,33). Moreover, the rapid adaptation observed in mice when task rules were altered demonstrates the system’s capacity to facilitate learning and neuroplasticity, even when the conditions for achieving rewards are modified. The quick recovery of task performance after a rule change, as evidenced by the improved performance within days of training, highlights the robustness of the closed-loop feedback mechanism.
Neuroplasticity and Cortical Dynamics
The observation that cortical responses became more focused as mice learned the rewarded behavior aligns with established theories of motor learning, where neural efficiency improves with practice (18,34,35). As mice became more proficient in the task, the widespread cortical activity observed during the initial training sessions became more regionally localized, indicating more efficient neural processing. The reduction in ΔF/F0peak values across sessions suggests that the brain becomes more efficient at processing task-relevant information, a phenomenon consistent with the optimization of neural circuits observed in skilled motor learning (20,21).
The distinct spatiotemporal patterns of cortical activation observed in mice receiving closed-loop feedback further support the role of real-time feedback in enhancing cortical plasticity. The pronounced segregation of task-related and reward-related cortical dynamics in the later training sessions indicates that closed-loop feedback facilitates the refinement of neural circuits involved in motor learning. These findings are in line with previous studies that have demonstrated the importance of temporally precise feedback in accelerating cortical reorganization and enhancing learning outcomes (23).
Future Directions and Implications
Looking ahead, there are several promising directions for expanding the capabilities of our closed-loop feedback system. These include the integration of more advanced imaging techniques, such as multi-photon microscopy, to provide higher-resolution data on neural activity, as well as the incorporation of optogenetic stimulation to achieve more precise control over neural circuits. Additionally, the system’s scalability could be tested in larger animal models or even human subjects, potentially paving the way for translational applications in neurorehabilitation and brain-computer interfaces. The open-source nature of our system also encourages collaboration and knowledge sharing within the research community, fostering innovation and accelerating the development of new experimental tools. As more labs adopt and refine this system, we anticipate that it will contribute to a deeper understanding of the mechanisms underlying brain-behavior dynamics and the development of novel therapeutic interventions for neurological disorders.
In conclusion, our study highlights the significant potential of closed-loop feedback systems for advancing neuroscience research. By providing a flexible, cost-effective, and open-source platform, we offer a valuable tool for exploring the complex interactions between brain activity and behavior, with implications for both basic research and clinical applications.
Materials and methods
Animals
Mouse protocols were approved by the University of British Columbia Animal Care Committee (ACC) and followed the Canadian Council on Animal Care and Use guidelines (protocol A22-0054). A total of 56 mice (postnatal days 104-140) were used in this study: 26 female and 30 male transgenic C57BL/6 mice expressing GCaMP6s. CLNF experiments (n=40, 17 females, 23 males) were done with tetO-GCaMP6s x CAMK tTA mice (Wekselblatt et al., 2016), and CLMF experiments (n=16, 9 females, 7 males) were done with Ai94 mice (Allen Institute for Brain Science) crossed to the Emx1–cre and CaMK2-tTA lines (Jackson Labs) (Madisen et al., 2015). Mice were housed in a conventional facility in plastic cages with micro-isolator tops and kept on a normal 12 hr. light cycle with lights on at 7 AM. Most experiments were performed toward the end of the mouse light cycle. Mice that were unable to achieve a success rate of 70% after 7 days of training in CLNF experiments were excluded from the study (6 mice in total).
Animal surgery, chronic transcranial window preparation
Animals were anesthetized with isoflurane, and a transcranial window was installed as previously described (36, 37) and in an amended and more extensive protocol described here. A sterile field was created by placing a surgical drape over the previously cleaned surgical table, and surgical instruments were sterilized with a hot bead sterilizer for 20 s (Fine Science Tools; Model 18000–45). Mice were anesthetized with isoflurane (2% induction, 1.5% maintenance in air) and then mounted in a stereotactic frame with the skull level between lambda and bregma. The eyes were treated with eye lubricant (Lacrilube; www.well.ca) to keep the cornea moist, and body temperature was maintained at 37°C using a feedback-regulated heating pad monitored by a rectal probe. Lidocaine (0.1 ml, 0.2%) was injected under the scalp, and mice also received a 0.5 ml subcutaneous injection of a saline solution containing buprenorphine (2 mg/ml), atropine (3 μg/ml), and glucose (20 mM). The fur on the head of the mouse (from the cerebellar plate to near the eyes) was removed using a fine battery-powered beard trimmer, and the skin was prepared with a triple scrub of 0.1% Betadine in water followed by 70% ethanol. Respiration rate and response to toe pinch were checked every 10–15 min to maintain the surgical anesthetic plane.
Before starting the surgery, a cover glass was cut with a diamond pen (Thorlabs, Newton, NJ, USA; Cat#: S90W) to the size of the final cranial window (∼9 mm diameter). A skin flap extending over both hemispheres, from approximately 3 mm anterior to bregma to the posterior end of the skull and down the lateral sides, was cut and removed. A #10 scalpel (curved) and sterile cotton tips were used to gently wipe off any fascia or connective tissue on the skull surface, making sure it was completely clear of debris and dry before proceeding. The clear version of C and B-Metabond (Parkell, Edgewood, NY, USA; Product: C and B Metabond) dental cement was prepared by mixing 1 scoop of C and B Metabond powder (Product: S399), 7 drops of C and B Metabond Quick Base (Product: S398), and one drop of C and B Universal catalyst (Product: S371) in a ceramic or glass dish (do not use plastic). Once the mixture reached a consistency that made it stick to the end of a wooden stir stick, a titanium fixation bar (22.2 × 2.7 × 3.2 mm) was placed so that there was a 4 mm posterior space between the bar edge and bregma, by applying a small amount of dental cement and holding it pressed against the skull until the cement partially dried (1–2 min). With the bar in place, a layer of dental adhesive was applied directly on the intact skull. The precut cover glass was gently placed on top of the mixture before it solidified (within 1 min), taking care to avoid bubble formation. If necessary, extra dental cement was applied around the edge of the cover slip to ensure that all the exposed bone was covered and that the incision site was sealed at the edges. The skin naturally tightens itself around the craniotomy, and sutures are not necessary. The mixture remains transparent after it solidifies, and one should be able to clearly see large surface veins and arteries at the end of the procedure. Once the dental cement around the coverslip was completely solidified (up to 20 min), the animal received a second subcutaneous injection of saline (0.5 ml) with 20 mM glucose and was allowed to recover in the home cage with an overhead heat lamp and intermittent monitoring (hourly for the first 4 hr. and every 4–8 hr. thereafter for activity level). The mouse was then allowed to recover for 7 days before task training.
Water deprivation and habituation to experiments
Around 10–21 days after the surgery, the animals were placed on a schedule of water deprivation. Given the variation in weight due to initial ad libitum water consumption, the baseline mouse weight was defined as the weight 24 hr. after the start of water restriction. If mice did not progress well through training, they were still given up to 1 ml of water daily; a 15% maximal weight-loss criterion was also used to trigger supplementation (see detailed protocol). All mice were habituated for 5 days before data collection. Awake mice were head-fixed and placed in a dark imaging chamber for training and data collection for each session. High-performing animals were able to maintain body weight and gain weight toward pre-surgery and pre-water-restriction values.
CLNF setup and behavior experiments
An imaging rig was developed using two Raspberry Pi 4B+ single-board computers, designated as “brain-pi” (master device for initiating all recordings) for widefield GCaMP imaging and “behavior-pi” (slave device waiting for a trigger to start recording) for simultaneous behavior recording. Both devices were connected to the internet via Ethernet cables and communicated with each other through 3.3 V transistor-transistor logic (TTL) via general-purpose input outputs (GPIOs). To synchronize session initiation, GPIO pin #17 on the brain-pi (configured as an output) was connected to GPIO pin #17 on the behavior-pi (configured as an input), allowing the brain-pi to send a TTL signal to the behavior-pi at the start of each session.
Additional hardware components were integrated into the brain-pi setup. GPIO pin#27 (output) was connected to a solenoid (Gems Sensor, 45M6131) circuit to deliver water rewards, while GPIO pin#12 (output) was linked to a buzzer (Adafruit product #1739) positioned under the head-fixing apparatus to signal trial failures. GPIO pin#21 (output) was used to trigger a LED driver controlling both short blue (447.5 nm) and long blue (470 nm) LEDs, which were essential for cortical imaging. A speaker was connected to the brain-pi’s 3.5 mm audio jack to provide auditory output (PulseAudio driver) during the experiments. Both the brain-pi and behavior-pi devices were equipped with compatible external hard drives, connected via USB 3.0 ports, to store imaging and behavioral data, ensuring reliable data capture throughout the experimental sessions.
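As an illustration of this pin configuration, the following is a minimal sketch using the standard RPi.GPIO library. The pin numbers follow the assignments described above; the pulse duration in the usage comment is illustrative, not the value used in CLoPy.

```python
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)   # use Broadcom (BCM) pin numbering

# brain-pi pin assignments described above
SYNC_OUT = 17    # TTL to behavior-pi to start a session
SOLENOID = 27    # water-reward solenoid circuit
BUZZER   = 12    # failure buzzer under the head-fixing apparatus
LED_DRV  = 21    # driver for the 447 nm and 470 nm imaging LEDs

for pin in (SYNC_OUT, SOLENOID, BUZZER, LED_DRV):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def pulse(pin, duration_s):
    """Drive a pin high for a fixed duration (e.g. to open the solenoid)."""
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(pin, GPIO.LOW)

# e.g. pulse(SOLENOID, 0.05) to deliver a water drop (duration is illustrative)
```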
Mice with implanted transcranial windows on the dorsal cortex were head-fixed in a transparent acrylic tube (1.5-inch outer diameter, 1 ⅛ inch inner diameter) and placed such that the imaging camera (RGB Raspberry Pi Camera, OmniVision OV5647 CMOS sensor) was above the window, optimally focused on the cortical surface for GCaMP imaging. The GCaMP imaging cameras had lenses with a focal length of 3.6 mm and a field of view of ∼10.2 × 10.2 mm, leading to a pixel size of ∼40 μm, and were equipped with triple-bandpass filters (Chroma 69013m), which allowed for the separation of GCaMP epifluorescence signals and 447 nm reflectance signals into the green and blue channels, respectively. The depth of field (∼3 mm) was similar to previous reports (38), which provided both a large focal volume over which to collect fluorescence and made the system less sensitive to potential changes in the z-axis position. To reduce image file size, we binned data at 256 × 256 pixels on the camera for brain imaging data. Brain images were saved as 8-bit RGB stacks in HDF5 file format. We manually fixed the camera frame rate to 15 Hz, turned off automatic exposure and auto white balance, and set white balance gains to unity. For both green epifluorescence and blue reflection channels, we adjusted the intensity of illumination so that all values were below 180 out of 256 grey levels (higher levels increase the chance of cross-talk and saturation).
Mesoscale GCaMP imaging can often be performed with single-wavelength illumination (39,40). However, in this experiment, we utilized dual-LED illumination of the cortex (24). One LED (short-wavelength blue, 447 nm Royal Blue Luxeon Rebel LED SP-01-V4 paired with a Thorlabs FB 440-10 nm bandpass filter) monitored light reflectance to account for hemodynamic changes (41), while the second LED (long-wavelength blue, 470 nm Luxeon Rebel LED SP-01-B6 combined with a Chroma 480/30 nm filter) was used to excite GCaMP for green epifluorescence. Both signals were captured simultaneously using an RGB camera. For each mouse, light from both the excitation and reflectance LEDs was channeled into a single liquid light guide, positioned to illuminate the cortex (Figure 2v) (24). Further specifics are outlined in the accompanying Parts List and assembly instructions document. A custom-built LED driver, controlled by a Raspberry Pi, activated each LED at the beginning of the session and deactivated them at the session’s end. This on-off illumination shift was later used in post hoc analysis to synchronize frames from the brain and behavior cameras.
Before initiating the experimental session, key session parameters were configured in the config.ini file, as detailed in the “Key Configuration Parameters” section below. Additionally, we ensured that the waterspout was positioned appropriately for the mouse to access and consume the reward. To launch the experiment, open a terminal on the brain-pi (master) device, ensuring that the current directory is set to clopy/brain/. The experiment can be started by entering the command python3 <script_name>.py, where <script_name> corresponds to the appropriate Python script for the session (Supplementary Figure 6). We provide two pre-defined scripts in the codebase: one for 1ROI and another for 2ROI experiments. Both scripts are functionally similar. Upon execution, the script prompts for the “mouse_id,” which can be entered as an array of characters, followed by pressing ‘Enter.’ Upon initialization, two preview windows appear. The first window displays live captured images with intensity values overlaid in the green and blue channels, which allow for adjustments to LED brightness levels. The second window displays real-time ΔF/F0 values overlaid on cortical locations, enabling brain alignment checks. These preview windows are used to assess imaging quality and ensure appropriate settings before starting the experiment (Supplementary Figure 6). Once all settings are confirmed, pressing the ‘Esc’ key starts the experiment. At this point, only one window displaying incoming images is shown. The experiment begins with a rest period of duration specified in the config.ini file, followed by alternating trial and rest periods. Data acquired during the session are saved in real-time on the device. For more detailed and the latest instructions on system setup, please refer to the project’s GitHub page.
CLMF setup and behavior experiments
An additional imaging rig was developed utilizing an Nvidia-Jetson Orin device (8-core ARM Cortex CPU, 2048 CUDA cores, 64 GB memory), which served as the master device responsible for triggering all recordings. The Nvidia-Jetson was connected to an Omron Sentech STC-MCCM401U3V USB3 Vision camera for behavior imaging and real-time pose tracking. A personal computer (serving as the slave device) running EPIX software and frame grabber (PIXCI® E4 Camera Link Frame Grabber) was connected to a Pantera TF 1M60 CCD camera (Dalsa) for widefield GCaMP imaging. Both the Nvidia-Jetson and the EPIX PC were linked via transistor-transistor logic (TTL) for communication. For session synchronization, the GPIO pin #17 on the Nvidia-Jetson, configured as an output, was connected to the trigger input pin of the EPIX PC. This configuration enabled the Nvidia-Jetson to send a TTL signal to initiate frame capture on the EPIX system at the start of each session. Additionally, this same pin was connected to an LED driver to trigger the activation of a long blue LED (470 nm) for GCaMP imaging simultaneously.
Several hardware components were integrated into the Nvidia-Jetson setup to support experimental protocols. GPIO pin #13 (output) was linked to a solenoid circuit to deliver water rewards, while GPIO pin #7 (output) was connected to a buzzer placed under the head-fixation apparatus to signal trial failures. Auditory feedback during the experiments was provided through a speaker connected to the Nvidia-Jetson’s audio output. Both the Nvidia-Jetson and EPIX PC were equipped with sufficient storage space to ensure the reliable capture and storage of behavioral video recordings and widefield image stacks, respectively, throughout the experimental sessions.
Mice, implanted with a transcranial window over the dorsal cortex, were head-fixed in a custom-made transparent, rectangular chamber (dimensions provided). The chamber was designed using CAD software and fabricated from a 3 mm-thick acrylic sheet via laser cutting. Complete model files, acrylic sheet specifications, and assembly instructions are available on the GitHub repository. Mirrors were positioned at the bottom and front of the chamber (Figure 2B) to allow multiple views of the mouse for improved tracking accuracy. The Omron Sentech camera was equipped with an infrared (IR)-only filter to exclusively capture IR illumination, effectively blocking other light sources and ensuring consistent behavioral imaging for accurate pose tracking. For widefield GCaMP imaging, the Dalsa camera was equipped with two front-to-front lenses (50 mm f/1.4 and 35 mm f/2; Nikon Nikkor) and a bandpass emission filter (525/36 nm, Chroma). The 12-bit images were captured at a frame rate of 30 Hz (exposure time of 33.3 ms) with 8×8 on-chip spatial binning (resulting in 128×128 pixels) using EPIX XCAP v3.8 imaging software. The cortex was illuminated using a blue LED (473 nm, Thorlabs) with a bandpass filter (467–499 nm) to excite calcium indicators, and the blue LED was synchronized with frame acquisition via TTL signaling. GCaMP fluorescence image stacks were automatically saved to disk upon session completion.
Prior to starting a session, key experimental parameters were configured within the config.ini file (see “Key Configuration Parameters” below for details). The waterspout was positioned close to the mouse to allow easy access for licking and consuming rewards. To initiate the experiment on the Nvidia-Jetson (master device), the terminal’s current directory was set to clopy/behavior/. The experiment was launched by executing the command python3 <script_name>.py, where <script_name> corresponds to the Python script responsible for running the experiment (Supplementary Figure 6). The provided script maps the speed of a specific control-point (FLL_bottom) to the audio feedback and executes 60 trials. Once launched, the program prompts for the “mouse_id,” which could be entered as an array of characters, followed by pressing Enter. A preview window displaying a live behavioral view was then presented, allowing for adjustments of IR brightness levels and the waterspout (Supplementary Figure 6). Once imaging quality and settings were confirmed to be optimal, pressing the “esc” key started the experiment. During the experiment, the tracked points were overlaid on the real-time video, and the session alternated between rest and trial periods. All acquired data were saved locally on the Nvidia-Jetson during the experiment and automatically transferred to the EPIX PC after completion. Additional setup instructions and system details can be found on the associated GitHub page.
Key configuration parameters
Prior to running the experiment script, session parameters were set in the config.ini file under a designated configuration section. This file contains multiple config sections, each tailored to a different experiment type, such as brain-pi or behavior-pi. The appropriate config section is specified in the experiment script. Key parameters in this section include:
vid_source: Specifies the class responsible for the image stream, which may come from a programmable camera or another video source.
data_root: Directory path for saving the recorded sessions and current configuration.
raw_image_file: File name for saving the image stream.
resolution: Image stream resolution in (x, y) pixels.
framerate: Number of frames per second from the image stream.
awb_mode: Auto-white-balance mode (only used in CLNF, True for Behavior-Pi, False for Brain-Pi).
shutter_speed: Sets the camera sensor exposure.
dff_history: The duration (in seconds) used to calculate ΔF/F0, only used in CLNF.
ppmm: Pixels-per-mm value at the focal plane of the camera.
bregma: Y, X pixel coordinates of the bregma on the dorsal cortex in the image frame, only used in CLNF.
seeds_mm: List of cortical locations for inclusion in the closed-loop training rule, each defined by a name and coordinates relative to bregma (in mm), only used in CLNF.
roi_operation: Specifies the ROI(s) and operation (+ or -) for closed-loop training.
roi_size: Size of the ROI(s) in mm.
n_tones: Number of distinct audio frequencies for graded auditory feedback.
reward_delay: Delay (in seconds) between crossing the threshold and reward delivery.
reward_threshold: Threshold value in terms of ΔF/F0, determined from baseline sessions.
adaptive_threshold: A setting for adjusting the reward threshold. If set to 0, the threshold remains constant throughout the session; if set to 1, the threshold increases or decreases by 0.02 steps based on the reward rate.
total_trials: Total number of trials in the session.
max_trial_dur: Maximum trial duration (in seconds).
success_rest_dur: Rest duration after a successful trial (in seconds).
fail_rest_dur: Rest duration after a failed trial (in seconds).
initial_rest_dur: Rest period at the beginning of the session before the first trial (in seconds).
summary_file: File name to save the experiment summary as comma-separated values (CSV).
summary_header: List of variable names to be saved in the summary file.
dlc_model_path: Path to the DeepLabCut-Live model for real-time pose tracking (used only in CLMF).
control_point: Name of the tracked point used for closed-loop feedback (used only in CLMF).
speed_threshold: Speed threshold that must be crossed for a trial to count as a success and trigger a reward (used only in CLMF).
Further details on these parameters can be accessed via the GitHub repository.
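For illustration, these parameters can be read in Python with the standard configparser module; the section name and the subset of keys shown below are examples rather than the full CLoPy configuration.

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("config.ini")

# Section name is illustrative; the script selects the section matching the rig.
params = config["brain-pi"]

framerate   = params.getint("framerate")
dff_history = params.getfloat("dff_history")   # seconds of running baseline
n_baseline  = int(dff_history * framerate)     # number of frames used for F0(t)
reward_threshold = params.getfloat("reward_threshold")
total_trials     = params.getint("total_trials")
roi_size_mm      = params.getfloat("roi_size")
```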
CLoPy platform
Closed-Loop Feedback Training System (CLoPy) is an open-source software and hardware system implemented in the Python (>=3.8) programming language. This work is accompanied by a package to replicate the system and reproduce the figures in this publication, as well as an extensive supplemental guide with full construction illustrations and parts lists to build the platform used. See https://github.com/pankajkgupta/clopy for details and accompanying acquisition and analysis code. We also provide model files for machined and 3D-printed parts in the repository; links to neural and behavioral data can be found at the Federated Research Data Repository (https://doi.org/10.20383/102.0400).
While the presented CLNF experiments were conducted on a Raspberry Pi 4B+ device, the system can be used on any other platform where the Python runtime environment is supported. Similarly, CLMF experiments were conducted on an Nvidia Jetson Orin device, but the system can be deployed on any other device with a GPU for real-time inference. To use any programmable camera with the system, one can implement a wrapper Python class that implements the CameraFactory interface functions.
Graded feedback (Online ΔF/F0-Audio mapping)
The auditory feedback was proportional to the magnitude of the neural activity. We translated fluorescence levels into the appropriate feedback frequency and played the frequency on speakers mounted on two sides of the imaging platform. Frequencies used for auditory feedback ranged from 1 to 22 kHz in quarter-octave increments (1,42). When a target was hit, a circuit-driven solenoid delivered a reward to mice.
To map the fluorescence activity F to a quarter-octave index n, we used linear scaling. Let Fmin and Fmax be the minimum and maximum fluorescence values, and nmax the maximum number of quarter-octave steps between 1 kHz and 22 kHz.

The number of quarter-octave steps between 1 kHz and 22 kHz can be calculated as:

n_{\max} = \left\lfloor 4 \log_2 \frac{f_{\max}}{f_{\min}} \right\rfloor = \left\lfloor 4 \log_2 \frac{22\,\mathrm{kHz}}{1\,\mathrm{kHz}} \right\rfloor = 17

The quarter-octave index n can be mapped linearly from fluorescence activity F as:

n = \mathrm{round}\!\left( n_{\max} \cdot \frac{F - F_{\min}}{F_{\max} - F_{\min}} \right)

Thus, the final equation mapping fluorescence activity to audio frequency is:

f = f_{\min} \cdot 2^{\,n/4} = 1\,\mathrm{kHz} \cdot 2^{\,n/4}

This equation allows us to map any fluorescence activity F within the range [Fmin, Fmax] to a corresponding frequency within the 1–22 kHz range, in quarter-octave increments.
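A minimal sketch of this mapping in Python is shown below; the clipping of F to [F_min, F_max] and the exact flooring/rounding details are assumptions for illustration.

```python
import numpy as np

F_LOW, F_HIGH = 1000.0, 22000.0                     # feedback range in Hz
N_MAX = int(np.floor(4 * np.log2(F_HIGH / F_LOW)))  # quarter-octave steps

def activity_to_frequency(F, F_min, F_max):
    """Map fluorescence activity F in [F_min, F_max] to a quarter-octave
    frequency between 1 and 22 kHz."""
    F = np.clip(F, F_min, F_max)                      # assumed handling of out-of-range values
    n = round(N_MAX * (F - F_min) / (F_max - F_min))  # quarter-octave index
    return F_LOW * 2 ** (n / 4)                       # frequency in Hz
```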
Checking the dynamic range of graded auditory feedback
To assess the dynamic range of audio signals from the speakers, we used an Ultramic microphone (Dodotronic) to record the feedback signals generated during a session, detect the frequencies, and compare the frequencies detected through our speakers to those commanded by the program. We verified a linear relationship between 1 and 22 kHz (Supplementary Figure 2). Analysis of GCaMP imaging experiments indicated that baseline activity values were associated with mapped sounds in the 6±4 kHz range. At reward points, which are threshold-dependent (ΔF/F0 values), the commanded auditory feedback values were significantly higher (18±2 kHz range).
Determining reward threshold based on a baseline session
Before starting the experiments, and after the habituation period, a baseline session (day 0) of the same duration as the experiments is recorded. This session is similar to the experimental sessions on the following days, except that mice do not receive any rewards. Offline analysis of this session is used to establish a threshold value for the target activity (see the target activity section for details). In brief, for 1 ROI experiments, target activity is the average ΔF/F0 activity in that ROI. For 2 ROI experiments, target activity is based on the specified rule in the config.ini file. For example, target activity for the rule “ROI1-ROI2” would be “average ΔF/F0 activity in ROI1 - average ΔF/F0 activity in ROI2.”
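One way to derive such a threshold from the baseline recording is sketched below. The target rate of one reward per two minutes follows the CLMF description in the Results, and the search over candidate thresholds is an illustrative strategy rather than the exact CLoPy procedure.

```python
import numpy as np

def baseline_threshold(target_activity, fps, target_rate_per_min=0.5):
    """Choose a reward threshold from a baseline (day 0) recording such that
    threshold crossings would have occurred at roughly the target rate
    (default: one reward per two minutes).
    target_activity: 1-D array of the target signal (ΔF/F0 or speed)."""
    duration_min = len(target_activity) / fps / 60.0
    target_crossings = max(1, int(round(target_rate_per_min * duration_min)))
    # Scan candidate thresholds from high to low until the baseline trace
    # would have crossed the threshold often enough.
    for thr in np.sort(np.unique(target_activity))[::-1]:
        above = target_activity >= thr
        onsets = np.count_nonzero(above[1:] & ~above[:-1])
        if onsets >= target_crossings:
            return thr
    return float(np.max(target_activity))
```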
Determining ROI(s) for changes in CLNF task rules
Dorsal cortical widefield activity is dynamic, and ongoing spatiotemporal motifs involve multiple regions changing activity. Choosing ROI(s) for CLNF involves some caveats and requires careful consideration. It was relatively straightforward for the initial training experiments, where baseline (day 0) sessions were used to establish a threshold for the selected ROI on day 1. Mice learn to modulate the selected ROI over the training sessions. Interestingly, other cortical ROIs were also changing along with the target ROI as mice were learning the task. We needed to be careful when changing the ROI on day 11 because, if we changed to an ROI that also changes along with the previous ROI, mice could keep getting rewards without noticing any change. To address this issue, we analyzed the neural data from day 1 to day 10 and found potential ROIs for which the threshold crossings did not increase significantly or were not on par with the previous ROI, and one of these ROIs was chosen for the rule change.
Online ΔF/F0 calculation
In calcium imaging, ΔF/F0 is often used to represent the change in fluorescence relative to a baseline fluorescence (F0), which helps in normalizing the data. For CLNF real-time feedback, we computed ΔF/F0 online, frame by frame.
Let $F(t)$ represent the fluorescence signal at time $t$. A running baseline $F_0(t)$ is estimated by applying a sliding window of size $N$ to the fluorescence signal and computing a moving average:
$$F_0(t) = \frac{1}{N}\sum_{k=t-N+1}^{t} F(k)$$
Once the running baseline $F_0(t)$ is computed, the ΔF/F0 at time $t$ is calculated as:
$$\frac{\Delta F}{F_0}(t) = \frac{F(t) - F_0(t)}{F_0(t)}$$
Each incoming frame contained both a green channel (capturing GCaMP6s fluorescence) and a short-blue channel (representing blood volume reflectance) (43–45). These frames were appended to a double-ended queue (deque), a Python data structure, used to compute the running baseline F0(t). The length of the deque, N, was defined by the product of two parameters from the configuration file: “dff_history” (in seconds) and “framerate” (in frames per second), which together determined the number of frames used for the running baseline. For the CLNF experiments, we used a dff_history of 6 s and a framerate of 15 fps, resulting in a deque length of 90 frames. When the deque reached full capacity, appending a new frame automatically removed the oldest frame, ensuring an updated running ΔF/F0 throughout the experiment, regardless of session duration. This method effectively avoided memory limitations over time. For each channel (green and blue), the running F0 was subtracted from each incoming frame to obtain ΔF, followed by the ΔF/F0 calculation, applied per pixel as a vectorized operation in Python. To correct for hemodynamic artifacts (43), we subtracted the short-blue (blue-excitation) channel ΔF/F0 from the green-channel ΔF/F0.
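A minimal sketch of this deque-based computation is shown below, assuming frames arrive as 2-D NumPy arrays; the class and parameter names mirror the configuration options (dff_history, framerate) but are otherwise illustrative rather than the CLoPy implementation.

```python
from collections import deque
import numpy as np

class RunningDFF:
    """Per-pixel running dF/F0 with a sliding-window baseline (one per channel)."""
    def __init__(self, dff_history_s=6.0, framerate=15.0):
        # 6 s x 15 fps = 90-frame deque; appending past capacity drops the oldest frame
        self.window = deque(maxlen=int(dff_history_s * framerate))

    def update(self, frame):
        """Append the new frame and return its dF/F0 against the running baseline."""
        self.window.append(frame.astype(np.float32))
        f0 = np.mean(self.window, axis=0)      # running baseline F0(t)
        return (frame - f0) / (f0 + 1e-9)      # vectorized per-pixel dF/F0

# one instance per channel; hemodynamic correction subtracts the two dF/F0 maps
green, blue = RunningDFF(), RunningDFF()
frame_g, frame_b = np.random.rand(256, 256), np.random.rand(256, 256)
corrected = green.update(frame_g) - blue.update(frame_b)
```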
It is important to note that while blood volume reflectance is typically captured using green light (43), we used short blue light due to technical constraints associated with the Raspberry Pi camera’s rolling shutter, which made strobing infeasible. The short blue light (447 nm) with a 440 ± 5 nm filter is close to the hemoglobin isosbestic point and has been shown to correlate well with the 530 nm green-light signal as a proxy for hemodynamic activity (24,25). Additionally, the 447 nm LED would be expected to produce minimal green epifluorescence at the low power settings used in our experiments (46). Previous studies have evaluated and compared the performance of corrected versus uncorrected signals using this method (25,41).
Offline ΔF/F0 calculation
For offline analysis of GCaMP6s fluorescence in CLNF experiments, the green and blue channels (43–45) were converted to ΔF/F0 values. For each channel, a baseline image (F0) was computed by averaging across all frames of the recording session. The F0 was then subtracted from each individual frame, producing a difference image (ΔF). This difference was divided by F0, yielding the fractional change in intensity (ΔF/F0) for each pixel as a function of time. To further correct for hemodynamic artifacts (43), the blue-channel reflected-light ΔF/F0 signal, which reflects blood volume changes, was subtracted from the green-channel ΔF/F0 signal, isolating the corrected GCaMP6s fluorescence response from potential confounding vascular contributions.
Offline analysis of GCaMP6s fluorescence in CLMF experiments involved processing the green epifluorescence channel. We did not collect the hemodynamic signal in this experiment because we intended to employ the second channel for optogenetic stimulation (not included in this article). A baseline image (F0) was computed by averaging across all frames of the recording session. The F0 was then subtracted from each individual frame, producing a difference image (ΔF). This difference was divided by F0, resulting in the fractional change in intensity (ΔF/F0) for each pixel as a function of time.
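In NumPy, this offline computation reduces to a few vectorized operations over the (time, height, width) stack; the snippet below is a simplified sketch rather than the exact analysis code.

```python
import numpy as np

def offline_dff(stack):
    """Per-pixel dF/F0 for a (time, height, width) stack; F0 is the session mean."""
    f0 = stack.mean(axis=0)
    return (stack - f0) / (f0 + 1e-9)

# CLNF: hemodynamic correction by channel subtraction (CLMF uses the green channel only)
# corrected = offline_dff(green_stack) - offline_dff(blue_stack)
```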
Seed-pixel correlation matrices
Widefield cortical image stacks were registered to the Allen Mouse Brain Atlas (47,48) and segmented into distinct regions, including the olfactory bulb (OB), anterolateral motor cortex (ALM), primary motor cortex (M1), secondary motor cortex (M2), sensory forelimb (FL), sensory hind limb (HL), barrel cortex (BC), primary visual cortex (V1), and retrosplenial cortex (RS) in both the left and right cortical hemispheres. The average activity in a 0.4 × 0.4 mm area (equivalent to a 10 × 10 pixel region) centered on each of these regions (the seed pixels) was calculated, representing the activity within that region. These signals were then used to generate temporal plots, which were epoched into trial and rest conditions, and correlations between the regions were computed, resulting in correlation matrices for each condition on each day. Each element of the matrix represented the pairwise correlation between two cortical regions. For each pair of cortical regions, we thus obtained correlation values during both trial and rest periods for every session, creating a time series (over sessions) with two conditions (trial and rest).
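The construction of one correlation matrix can be sketched as follows, assuming an atlas-registered ΔF/F0 stack and a dictionary mapping region names to seed-pixel coordinates; the names and exact box size are illustrative.

```python
import numpy as np

def region_correlation_matrix(stack, seeds, half_width=5):
    """Pairwise correlations between seed-region time courses.

    stack: (time, height, width) dF/F0 array.
    seeds: dict mapping region names (e.g. 'M1-L') to (row, col) coordinates.
    Each time course is the mean over a 10 x 10 pixel (~0.4 x 0.4 mm) box.
    """
    traces = [stack[:, r - half_width:r + half_width,
                       c - half_width:c + half_width].mean(axis=(1, 2))
              for (r, c) in seeds.values()]
    return np.corrcoef(np.vstack(traces))
```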
Data and code availability
The source data used in this paper are available at https://doi.org/10.20383/102.0400. Code to replicate the system, recreate the figures, and associated pre-processed data are publicly available on GitHub at https://github.com/pankajkgupta/clopy. Any additional information required to reanalyze the data reported in this work is available from the lead contact upon request.
Statistics
Various statistical tests were performed to support the analyses presented in the accompanying figures. For the p-value matrices in Figures 8B and 8C, a two-way repeated-measures ANOVA with Bonferroni post-hoc correction was used to test the significance of correlation changes across two factors: session (days 1–10) and condition (trial vs. rest). Significant changes in the correlation matrices along these two variables showed complementary patterns (Figure 8). Changes across sessions involved the bilateral M1, M2, FL, HL, and BC regions, while changes between the trial and rest conditions involved the bilateral OB, ALM, V1, and RS regions.
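For reference, the two-way repeated-measures ANOVA for a single region pair can be run with statsmodels' AnovaRM, as sketched below on synthetic data; the column names and the post-hoc step are illustrative assumptions rather than the exact analysis code.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data: one correlation value per mouse, session, and condition
rng = np.random.default_rng(0)
rows = [{"mouse": m, "session": s, "condition": c, "corr": rng.normal(0.5, 0.1)}
        for m in range(1, 6) for s in range(1, 11) for c in ("trial", "rest")]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: factors session (days 1-10) and condition
res = AnovaRM(data=df, depvar="corr", subject="mouse",
              within=["session", "condition"]).fit()
print(res)
# Post-hoc pairwise contrasts would then be Bonferroni-corrected, e.g. by
# multiplying each pairwise p-value by the number of comparisons.
```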
The statistical tests used in each figure, their purpose, and the data they were applied to are summarized in the table below.
CLNF rules
The task rules for CLNF experiments involved cortical activity at selected locations. These locations were standard Allen Mouse Brain Atlas (47,48) CCF coordinates. The table below lists all the task rules we tested and what they meant in the context of CLNF experiments.
CLMF rules
The task rules for CLMF experiments involved points on tracked body parts (listed in the Results section). The table below lists the task rules (i.e., tracked points) we tested and what they meant in the context of CLMF experiments.
Acknowledgements
This work was supported by Canadian Institutes of Health Research (CIHR) foundation grant FDN-143209 and project grant PJT-180631 (to T.H.M.). T.H.M. was also supported by the Brain Canada Neurophotonics Platform, a Heart and Stroke Foundation of Canada grant-in-aid, the Natural Sciences and Engineering Research Council of Canada (NSERC; GPIN-2022-03723), and a Leducq Foundation grant. This work was also supported by resources made available through the Dynamic Brain Circuits cluster and the NeuroImaging and NeuroComputation Centre at the UBC Djavad Mowafaghian Centre for Brain Health (RRID SCR_019086), made use of the DataBinge forum, and used computational resources and services provided by Advanced Research Computing (ARC) at the University of British Columbia. We thank Pumin Wang and Cindy Jiang for surgical assistance, and Jamie Boyd and Jeffrey M. LeDue for technical assistance.
References
- 1. Volitional modulation of optically recorded calcium signals during neuroprosthetic learning. Nat Neurosci 17:807–9. https://doi.org/10.1038/nn.3712
- 2. Operant Conditioning of Cortical Unit Activity. Science 163:955–8. https://doi.org/10.1126/science.163.3870.955
- 3. The sensory representation of causally controlled objects. Neuron 109:677–89. https://doi.org/10.1016/j.neuron.2020.12.001
- 4. Rapid Integration of Artificial Sensory Feedback during Operant Conditioning of Motor Cortex Neurons. Neuron 93:929–39. https://doi.org/10.1016/j.neuron.2017.01.023
- 5. Closed-loop functional optogenetic stimulation. Nat Commun 9. https://doi.org/10.1038/s41467-018-07721-w
- 6. Closed-loop theta stimulation in the orbitofrontal cortex prevents reward-based learning. Neuron 106:537–47. https://doi.org/10.1016/j.neuron.2020.02.003
- 7. Closed-loop optogenetic control of thalamus as a tool for interrupting seizures after cortical injury. Nat Neurosci 16:64–70. https://doi.org/10.1038/nn.3269
- 8. Real-time closed-loop control in a rodent model of medically induced coma using burst suppression. Anesthesiology 119:848–60. https://doi.org/10.1097/ALN.0b013e31829d4ab4
- 9. Closed-loop deep brain stimulation is superior in ameliorating parkinsonism. Neuron 72:370–84. https://doi.org/10.1016/j.neuron.2011.08.023
- 10. Pre-frontal control of closed-loop limbic neurostimulation by rodents using a brain-computer interface. J Neural Eng 11. https://doi.org/10.1088/1741-2560/11/2/024001
- 11. Closed-loop stimulation using a multiregion brain-machine interface has analgesic effects in rodents. Sci Transl Med 14. https://doi.org/10.1126/scitranslmed.abm5868
- 12. Volitional modulation of neuronal activity in the external globus pallidus by engagement of the cortical-basal ganglia circuit. J Physiol
- 13. Volitional Modulation of Primary Visual Cortex Activity Requires the Basal Ganglia. Neuron 97:1356–68. https://doi.org/10.1016/j.neuron.2018.01.051
- 14. Real-time selective markerless tracking of forepaws of head fixed mice using deep neural networks. eNeuro 7. https://doi.org/10.1523/ENEURO.0096-20.2020
- 15. Real-time, low-latency closed-loop feedback using markerless posture tracking. Elife 9. https://doi.org/10.7554/eLife.61909
- 16. DeepLabCut: markerless pose estimation of user-defined body parts with deep learning. Nat Neurosci 21:1281–9. https://doi.org/10.1038/s41593-018-0209-y
- 17. Transformation of Cortex-wide Emergent Properties during Motor Learning. Neuron 94:880–90. https://doi.org/10.1016/j.neuron.2017.04.015
- 18. Multiple dynamic representations in the motor cortex during sensorimotor learning. Nature 484:473–8. https://doi.org/10.1038/nature11039
- 19. Global Representations of Goal-Directed Behavior in Distinct Cell Types of Mouse Neocortex. Neuron 94:891–907. https://doi.org/10.1016/j.neuron.2017.04.017
- 20. Principles of sensorimotor learning. Nat Rev Neurosci 12:739–51. https://doi.org/10.1038/nrn3112
- 21. Motor Learning. Compr Physiol 9:613–63. https://doi.org/10.1002/cphy.c170043
- 22. Emergence of reproducible spatiotemporal activity during motor learning. Nature 510:263–7. https://doi.org/10.1038/nature13235
- 23. Emergence of a stable cortical map for neuroprosthetic control. PLoS Biol 7. https://doi.org/10.1371/journal.pbio.1000153
- 24. Meso-Py: Dual Brain Cortical Calcium Imaging in Mice during Head-Fixed Social Stimulus Presentation. eNeuro 10. https://doi.org/10.1523/ENEURO.0096-23.2023
- 25. Automated task training and longitudinal monitoring of mouse mesoscale cortical circuits using home cages. Elife 9. https://doi.org/10.7554/eLife.55964
- 26. Individualized tracking of self-directed motor learning in group-housed mice performing a skilled lever positioning task in the home cage. J Neurophysiol 119:337–46. https://doi.org/10.1152/jn.00115.2017
- 27. A Raspberry Pi-Based Traumatic Brain Injury Detection System for Single-Channel Electroencephalogram. Sensors 21. https://doi.org/10.3390/s21082779
- 28. Open-source, Python-based, hardware and software for controlling behavioural neuroscience experiments. Elife 11. https://doi.org/10.7554/eLife.67846
- 29. Light Up the Brain: The Application of Optogenetics in Cell-Type Specific Dissection of Mouse Brain Circuits. Front Neural Circuits 14. https://doi.org/10.3389/fncir.2020.00018
- 30. Open science challenges, benefits and tips in early career and beyond. PLoS Biol 17. https://doi.org/10.1371/journal.pbio.3000246
- 31. The Future Is Open: Open-Source Tools for Behavioral Neuroscience Research. eNeuro 6. https://doi.org/10.1523/ENEURO.0223-19.2019
- 32. Operant conditioning of cortical unit activity. Science 163:955–8. https://doi.org/10.1126/science.163.3870.955
- 33. Recent advances in neural dust: towards a neural interface platform. Curr Opin Neurobiol 50:64–71. https://doi.org/10.1016/j.conb.2017.12.010
- 34. Circuit Mechanisms of Sensorimotor Learning. Neuron 92:705–21. https://doi.org/10.1016/j.neuron.2016.10.029
- 35. Long-term depression induced by sensory deprivation during cortical map plasticity in vivo. Nat Neurosci 6:291–9. https://doi.org/10.1038/nn1012
- 36. Intact skull chronic windows for mesoscopic wide-field imaging in awake mice. J Neurosci Methods 267:141–9. https://doi.org/10.1016/j.jneumeth.2016.04.012
- 37. Mesoscale transcranial spontaneous activity mapping in GCaMP3 transgenic mice reveals extensive reciprocal connections between areas of somatomotor cortex. J Neurosci 34:15931–46. https://doi.org/10.1523/JNEUROSCI.1818-14.2014
- 38. In vivo large-scale cortical mapping using channelrhodopsin-2 stimulation in transgenic mice reveals asymmetric and reciprocal relationships between cortical areas. Front Neural Circuits 6. https://doi.org/10.3389/fncir.2012.00011
- 39. Spatiotemporal refinement of signal flow through association cortex during learning. Nat Commun 11. https://doi.org/10.1038/s41467-020-15534-z
- 40. Virtual reality-based real-time imaging reveals abnormal cortical dynamics during behavioral transitions in a mouse model of autism. Cell Rep 42. https://doi.org/10.1016/j.celrep.2023.112258
- 41. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons. Elife 6. https://doi.org/10.7554/eLife.19976
- 42. Early experience impairs perceptual discrimination. Nat Neurosci 10:1191–7. https://doi.org/10.1038/nn1941
- 43. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches. Philos Trans R Soc Lond B Biol Sci 371:20150360. https://doi.org/10.1098/rstb.2015.0360
- 44. Large-scale imaging of cortical dynamics during sensory perception and behavior. J Neurophysiol 115:2852–66. https://doi.org/10.1152/jn.01056.2015
- 45. Separation of hemodynamic signals from GCaMP fluorescence measured with wide-field imaging. J Neurophysiol 123:356–66. https://doi.org/10.1152/jn.00304.2019
- 46. Thy1-GCaMP6 transgenic mice for neuronal population imaging in vivo. PLoS One 9. https://doi.org/10.1371/journal.pone.0108697
- 47. The Allen Mouse Brain Common Coordinate Framework: A 3D reference atlas. Cell 181:936–53. https://doi.org/10.1016/j.cell.2020.04.007
- 48. Enhanced and unified anatomical labeling for a common mouse brain atlas. bioRxiv. https://doi.org/10.1101/636175
Article and author information
Copyright
© 2025, Pankaj K Gupta & Timothy H Murphy
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.