Peer review process
Not revised: This Reviewed Preprint includes the authors’ original preprint (without revision), an eLife assessment, public reviews, and a provisional response from the authors.
Read more about eLife's peer review process.

Editors
- Reviewing Editor: Philip Boonstra, University of Michigan, Ann Arbor, United States of America
- Senior Editor: Michael Frank, Brown University, Providence, United States of America
Reviewer #1 (Public review):
Summary:
The authors aimed to extend a prior fiber photometry analysis process they developed by adding the ability to determine instantaneous, within-trial relationships between the photometry signal and continuously changing variables. They present solid evidence, via simulations and example use cases from published datasets, that their approach can capture instantaneous relationships. Overall, while they make a compelling case that this approach is less biased and more insightful, the implementation remains challenging enough for many experimentalists that it may limit widespread adoption by the community.
Strengths:
This work builds on prior efforts to analyze photometry signals in a less biased and more statistically sound way. It incorporates a very important aspect by avoiding the need to summarize individual trials with single behavioral values, instead allowing interactions with continuously changing variables to be investigated. The authors' knowledge and expertise, and the quality of the presentation, lend strong validity to the work. Examples from prior studies in the field are a necessary and important component of the work.
Weaknesses:
While use cases are provided from prior data, a clearer presentation of how common implementations in the field are performed (e.g., GLMs) and how one could alternatively use the cFLMM approach would help. Otherwise, most researchers may continue using common approaches such as Pearson's correlations and GLMs.
Reviewer #2 (Public review):
The paper presents a regression-based approach for analysing fiber photometry data termed Concurrent Functional Mixed Models (cFLMMs). The approach works by fitting linear mixed-effects models separately to each timepoint in trial aligned data, then applying smoothing to the model coefficients (betas) and computing confidence intervals. The method extends the authors' previous work on using FLMMs for photometry data analysis by allowing for the inclusion of predictors whose value changes across timepoints within a trial, rather than just from trial to trial. As fiber photometry is a rapidly expanding field, developing principled methods to analyse photometry data is valuable, particularly as the authors have released an R package that implements their method to facilitate its use by other groups. The basic FLMM approach for using mixed-effects models to analyse trial aligned photometry data, detailed by the authors in their previous manuscript (Loewinger et al. 2025, doi: 10.7554/eLife.95802), appears valuable. The aim of incorporating variables that change within trial into this framework is interesting, and the technical implementation appears to be rigorous. However, I have some reservations as to whether the way in which variables that change within trial have been integrated into the analysis framework is likely to be widely useful, and hence how impactful the additional functionality of cFLMM relative to the previously published FLMM will be.
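The fit-per-timepoint-then-smooth pipeline described above can be sketched on simulated data. This is only a simplified illustration: ordinary least squares stands in for the linear mixed models the method actually fits, a moving average stands in for its smoothing step, and all data are synthetic (the released package is in R; NumPy is used here purely for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_time = 100, 50

# Simulated trial-aligned photometry: a per-trial predictor x drives
# the signal only in the middle of the trial (hypothetical data).
x = rng.normal(size=n_trials)
true_beta = np.zeros(n_time)
true_beta[20:30] = 1.0
signal = np.outer(x, true_beta) + rng.normal(scale=0.5, size=(n_trials, n_time))

# Step 1: fit a separate regression at each timepoint (OLS here as a
# simplification; the reviewed method uses linear *mixed* models).
X = np.column_stack([np.ones(n_trials), x])
betas = np.array([np.linalg.lstsq(X, signal[:, t], rcond=None)[0][1]
                  for t in range(n_time)])

# Step 2: smooth the coefficient timeseries (a moving average stands
# in for the spline smoothing used in FLMM).
kernel = np.ones(5) / 5
betas_smooth = np.convolve(betas, kernel, mode="same")
# betas_smooth is near 1 inside the 20-30 window and near 0 elsewhere
```

The result is one coefficient timeseries per predictor, which is what the confidence intervals are then computed over.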
In the original FLMM approach, where predictors change only from trial to trial, fitting separate regressions at each timepoint generates a timeseries of betas for each predictor, indicating when and how the predictor explained variance across the trial. This makes a lot of sense and is widely used in neuroscience data analysis. In extending this approach to incorporate variables that change within trial, the authors have used the same method of fitting separate regression models at each timepoint to obtain a timeseries of betas for each predictor. It is less clear that this approach makes sense for variables that change within trial. This is because the resulting betas only capture how variation in the predictor across trials at a given timepoint explains variance in the signal, but do not capture effects of variation in the predictor across timepoints within trials. This partitioning of variance in the predictor into a between-trial component whose effect on the signal is modelled and a within-trial component whose effect on the signal is not is artificial in many experimental designs, and may yield hard-to-interpret results.
Consider, for example, the experimental condition in Figure 3, taken from Machen et al. 2025 (doi: 10.1101/2025.03.10.642469), in which mice ran down a linear track to collect rewards. In analysing such data, one might want to know how neural activity covaried with the animal's position, but as this variable changes strongly within trial and has a similar time-course across trials, the cFLMM analysis approach will not work to quantify these effects. This is because variance attributed to position would not capture how neural activity covaried with changes in the animal's position within trial, but rather how neural activity covaried with changes in the animal's position from trial to trial at a given timepoint, which could arise from, e.g., trial-to-trial differences in latency to start moving or in running speed. As such, although significant effects of 'position' might be observed, they would not capture covariation between position and neural activity in a straightforwardly interpretable way.
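The variance-partitioning concern can be made concrete with a toy example. Assuming a hypothetical position ramp that is nearly identical across trials except for small trial-to-trial speed differences, almost all of the predictor's variance sits within trials, leaving the per-timepoint regressions only the small across-trial component to explain the signal with (an illustrative NumPy sketch, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_time = 80, 60

# Hypothetical position predictor: a similar ramp on every trial,
# with small trial-to-trial differences in running speed.
speed = 1.0 + 0.05 * rng.normal(size=n_trials)
t = np.arange(n_time)
position = speed[:, None] * t[None, :]

# Per-timepoint (cFLMM-style) regressions only see the across-trial
# spread of position at each timepoint...
across_trial_sd = position.std(axis=0).mean()

# ...while most of the variation in position is within-trial.
within_trial_sd = position.std(axis=1).mean()
# within_trial_sd is roughly an order of magnitude larger here
```

Under these assumptions, a significant 'position' beta would reflect the small across-trial spread (i.e., speed differences), not the large within-trial position sweep.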
It is therefore not obvious to me that incorporating variables that change within trial into an analysis framework that runs separate regressions at each timepoint in trial aligned data is likely to be widely useful. If scientific questions require understanding how neural activity covaries as a function of variables that change both within and across trials, an alternative approach would be to run a single regression analysis across all timepoints, and capture the extended temporal responses to discrete behavioural events by using temporal basis functions convolved with the event timeseries. This provides a very flexible framework for capturing covariation of neural activity both with variables that change continuously such as position, and discrete behavioural events such as choices or outcomes, while also handling variable event timing from trial-to-trial.
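The basis-function alternative outlined above can be sketched as a single regression over the whole session: a discrete event train is convolved with a small set of temporal basis functions, and the event-evoked kernel is recovered as a weighted sum of those bases. A minimal NumPy illustration on simulated data (event times, basis shapes, and weights are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000  # total timepoints across the whole session

# Hypothetical discrete events (e.g., reward deliveries)
events = np.zeros(T)
events[rng.choice(T - 50, size=30, replace=False)] = 1.0

# Temporal basis: a few shifted Gaussian bumps spanning 0-40 samples
lags = np.arange(40)
centers = [5, 15, 25, 35]
basis = np.stack([np.exp(-0.5 * ((lags - c) / 5.0) ** 2) for c in centers])

# Convolve the event train with each basis function to build regressors
X = np.stack([np.convolve(events, b)[:T] for b in basis], axis=1)

# Simulate a signal with a known event-evoked response and fit by OLS
true_w = np.array([0.0, 1.0, 0.5, 0.0])
signal = X @ true_w + rng.normal(scale=0.1, size=T)
w_hat, *_ = np.linalg.lstsq(X, signal, rcond=None)

# Reconstructed event-evoked kernel = weighted sum of basis functions
kernel_hat = w_hat @ basis
```

Continuous regressors such as position can simply be appended as extra columns of X, which is what makes this formulation flexible for variables that change both within and across trials.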
One way that cFLMM is used in the manuscript is to handle variable timing of trial events in trial aligned data. In the Machen et al. data, the time when the animal reaches the reward varies from trial to trial, and this is represented in the cFLMM analysis by a binary variable which changes value at this timepoint. From the resulting beta coefficient timeseries (Figure 3C), it is not straightforward to understand how neural activity changed as the subject approached and then received the reward. A simpler approach to quantify this, which I think would have yielded more interpretable coefficient timeseries, would have been to align activity across trials on when the subject obtained the reward, rather than on the start of the trial, allowing, e.g., the effect of reward type to be visualised as a function of time relative to reward delivery, and hence the differential effects during approach vs consumption to be seen. More broadly, handling variable trial timing in analyses like FLMM, which use trial aligned data, can be achieved either by separately aligning the data to different trial events of interest or by time-warping the signal to align multiple important timepoints across trials. It is not obvious that using cFLMM with binary indicator variables that mark when task states changed will yield a clearer picture of neural activity than these methods.
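The event-aligned alternative suggested above is straightforward to implement: slice each trial around its own event time, so that lag 0 corresponds to reward delivery on every trial. A minimal sketch on simulated data (the response shape, latencies, and window are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_time = 40, 200

# Hypothetical trial-aligned signal whose response latency (the
# reward time) varies from trial to trial
reward_time = rng.integers(60, 140, size=n_trials)
t = np.arange(n_time)
signal = np.exp(-0.5 * ((t[None, :] - reward_time[:, None]) / 10.0) ** 2)
signal += rng.normal(scale=0.05, size=(n_trials, n_time))

# Re-align each trial on the reward event instead of trial start
window = np.arange(-50, 51)  # samples relative to reward delivery
aligned = np.stack([trial[rt + window]
                    for trial, rt in zip(signal, reward_time)])

# After alignment, the event-locked response is sharp at lag 0
mean_aligned = aligned.mean(axis=0)
```

The re-aligned matrix can then be fed into the same per-timepoint analysis, with time now measured relative to reward rather than trial start.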
It may be that I am missing some key strengths of cFLMM relative to the other approaches I have outlined, or that there are applications where this approach to implementing within-trial variable changes is a natural formalism. However, my impression is that while cFLMMs represent a technical advance, it is not clear how widely useful the model formalism will be.
Reviewer #3 (Public review):
Summary:
This work is an extension of the authors' previous study (Loewinger et al., 2025), which described a statistical framework for the analysis of photometry data using functional linear mixed models with joint confidence intervals, together with an open-source tool implemented in R. The present study extends it by adding the possibility of using 'concurrent' variables (variables that change within a trial) as regressors, for example, capturing the change of speed at each timepoint in the trial. The main claim is that using 'concurrent' regressors can identify associations between signal and behavior that could not be captured by 'non-concurrent' regressors (for which the value on a specific trial is the same at each timepoint), and that the latter could lead to misleading conclusions. While the motivation for using time-varying covariates is useful and supported by previous literature (using fixed-effects models, although not cited in this manuscript), the reanalysis of previous studies does not clearly demonstrate the benefit of using concurrent regressors as opposed to non-concurrent ones, and some of the results are difficult to interpret.
Strengths:
• The motivation for using time-varying covariates is well supported by previous literature using them on fixed-effects models, and here the authors are extending it to mixed-effects models.
• The authors have included this new functionality in their previous open-source R package.
Weaknesses:
• The main weakness of this study is that it is not clear what the conceptual or methodological advance of this work is. As written, the manuscript focuses on showing how concurrent regressors offer interpretation advantages over non-concurrent regressors. While the benefit of such time-varying regressors is supported by previous literature (e.g., Engelhard et al., 2020), it is not clear that the examples provided in the current study clearly support the advantage of one over the other, especially in the reanalysis of Machen et al. (2025), where the choice of regressors is confusing. In this specific example, if the question is about speed and reward type, why are variables such as latency to reward or a binary 'reward zone vs corridor' (RZ) regressor used instead of concurrent velocity (or peak velocity, in the case of the non-concurrent model)? Furthermore, if the timing from trial start to reward collection is variable, why not align to reward collection, which would aid interpretation of the signal and comparison between methods? Finally, while the regressors' coefficients are shown for the non-concurrent method, what seems to be plotted for the concurrent one are contrasts rather than coefficients. The authors themselves acknowledge the interpretational difficulties of their analysis.
• Because the relation between behavioral variables and the neural signal is not instantaneous, previous literature using fixed-effects models employs, for example, different temporal lags, splines, and convolutional kernels; however, these are not discussed in the manuscript.
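To illustrate the lagged-regressor idea raised above: if the signal follows a behavioral variable with a fixed delay, a design matrix of time-shifted copies of that variable recovers the delay as the lag with the largest coefficient. A deliberately simplified NumPy sketch (a white-noise stand-in for behavior is used so the lagged columns are uncorrelated; real behavioral variables are autocorrelated, which is precisely why splines or convolution kernels are preferred in practice):

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000

# White-noise stand-in for a behavioral variable (hypothetical)
behavior = rng.normal(size=T)

# Signal tracks the behavioral variable with a 5-sample delay
delay = 5
signal = np.roll(behavior, delay) + rng.normal(scale=0.05, size=T)

# Design matrix of lagged copies of the behavioral variable
max_lag = 10
X = np.stack([np.roll(behavior, k) for k in range(max_lag)], axis=1)

# Drop the first max_lag samples to discard np.roll wrap-around
coef, *_ = np.linalg.lstsq(X[max_lag:], signal[max_lag:], rcond=None)
best_lag = int(np.argmax(np.abs(coef)))  # recovers the true delay
```

The coefficient profile across lags plays the same role as the convolution kernel in a basis-function model, just parameterised one sample at a time.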
• From the methods, it seems that in the concurrent version of fastFMM, both concurrent and non-concurrent regressors can be included, but this is not discussed in the manuscript.
• The methodological advance is not clearly stated, apart from inputting into fastFMM a 3D matrix of regressors x trial x timepoint, instead of a 2D matrix of regressors x trial.
• This manuscript is neither a clear demonstration of the need for concurrent variables nor a 'tutorial' on how to use fastFMM with the added extension.