Peer review process
Revised: This Reviewed Preprint has been revised by the authors in response to the previous round of peer review; the eLife assessment and the public reviews have been updated where necessary by the editors and peer reviewers.
Read more about eLife’s peer review process.

Editors
- Reviewing Editor: Redmond O'Connell, Trinity College Dublin, Dublin, Ireland
- Senior Editor: Floris de Lange, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, Netherlands
Reviewer #1 (Public Review):
It is known that aberrant habit formation is a characteristic of obsessive-compulsive disorder (OCD). Habits can be defined according to the following features (Balleine and Dezfouli, 2019): rapid execution, invariant response topography, action 'chunking' and resistance to devaluation.
The extent to which OCD behavior is derived from enhanced habit formation relative to deficits in goal-directed behavior is a topic of debate in the current literature. This study examined habit learning specifically (cf. deficits in goal-directed behavior) by regularly presenting, via smartphone, sequential learning tasks to patients with OCD and healthy controls. Participants engaged in the tasks every day over the course of a month. Automaticity, including the extent to which individual actions in the sequence become part of a unified 'chunk', was an important outcome variable. Following the 30 days of training, in-laboratory tasks were then administered to examine 1) whether performing the learned sequences had itself become rewarding and 2) differences in goal-directed vs. habitual behavior.
Several hypotheses were tested, including:
- Patients would have impaired procedural learning vs. healthy volunteers (this was not supported, possibly because there were fewer demands on memory in the task used here)
- Once the task had been learned, patients would display automaticity faster (unexpectedly, patients were slower to display automaticity)
- Habits would form faster under a continuous (vs. variable) reinforcement schedule
Exploratory analyses were also conducted: an interesting finding was that OCD patients with higher self-reported symptoms voluntarily completed more sessions with the habit-training app and reported a reduction in symptoms.
Strengths
This paper is well situated theoretically within the habit learning/OCD literature.
Daily training in a motor-learning task, delivered via smartphone, was innovative, ecologically valid and more likely to assay habitual behaviors specifically. Daily training is also more similar to studies with non-humans, making a better link with that literature. The use of a sequential-learning task (cf. tasks that require a single response) is also more ecologically valid.
The in-laboratory tests (after the 1 month of training) allowed the researchers to test if the OCD group preferred familiar, but more difficult, sequences over newer, simpler sequences.
Weaknesses
The authors were not able to test one criterion of habits, namely resistance to devaluation, due to the nature of the task.
The sample size was relatively small. Some potentially interesting individual differences within the OCD group could have been examined more thoroughly with a bigger sample (e.g., preference for familiar sequences). A larger sample may have allowed the statistical testing of any effects due to medication status.
The authors achieved their aims in that two groups of participants (patients with OCD and controls) engaged with the task over the course of 30 days. The repeated nature of the task meant that 'overtraining' was almost certainly established, and automaticity was demonstrated. This allowed the authors to test their hypotheses about habit learning. The results are supportive of the authors' conclusions.
This article is likely to be impactful -- the delivery of a task across 30 days to a patient group is innovative and represents a new approach for the study of habit learning that is superior to an in-laboratory approach.
An interesting aspect of this manuscript is that it prompts a comparison with previous studies of goal-directed/habitual responding in OCD that used devaluation protocols, and which may have had their effects due to deficits in goal-directed behavior and not enhanced habit learning per se.
Reviewer #2 (Public Review):
I would like to express my appreciation for the authors' dedication to revising the manuscript. It is evident that they have thoughtfully addressed numerous concerns I previously raised, significantly contributing to the overall improvement of the manuscript.
My primary concern regarding the authors' framing of their findings within the realm of habitual and goal-directed action control persists. I will try to explain my point of view and perhaps clarify my concerns.
While acknowledging the historical tendency to equate procedural learning with habits, I believe a consensus has gradually emerged among scientists, recognizing a meaningful distinction between habits and skills or procedural learning. I think this distinction is crucial for a comprehensive understanding of human action control. While these constructs share similarities, they should not be used interchangeably. Procedural learning and motor skills can manifest either through intentional and planned actions (i.e., goal-directed) or autonomously and involuntarily (habitual responses).
Watson et al. (2022) aptly detailed my concerns in the following statements: "Defining habits as fluid and quickly deployed movement sequences overlaps with definitions of skills and procedural learning, which are seen by associative learning theorists as different behaviours and fields of research, distinct from habits."
"...the risk of calling any fluid behavioural repertoire 'habit' is that clarity on what exactly is under investigation and what associative structure underpins the behaviour may be lost."
I strongly encourage the authors, at the very least, to consider Watson et al.'s (2022) suggestion: "Clearer terminology as to the type of habit under investigation may be required by researchers to ensure that others can assess at a glance what exactly is under investigation (e.g., devaluation-insensitive habits vs. procedural habits)", and to refine their terminology accordingly (to make this distinction clear). I believe adopting clearer terminology in these respects would enhance the positioning of this work within the relevant knowledge landscape and facilitate future investigations in the field.
Regarding the authors' use of Balleine and Dezfouli's (2018) criteria to frame recorded behavior as habitual, as well as their acknowledgment of the study's limitations, it is important to highlight that while the authors labeled the fourth criterion (which they were not fulfilling) as "resistance to devaluation," Balleine and Dezfouli define it as "insensitive to changes in their relationship to their individual consequences and the value of those consequences." In my understanding, this definition is potentially aligned with the authors' re-evaluation test; namely, it is conceptually adequate for evaluating the fourth criterion (which is the most accepted in the field and probably the one that differentiates habits from skills). Notably, during this test, participants exhibited goal-directed behavior.
The authors characterized this test as possibly assessing arbitration between goal-directed and habitual behavior, stating that participants in both groups "demonstrated the ability to arbitrate between prior automatic actions and new goal-directed ones." From my perspective, there is no justification for calling it a test of arbitration. Notably, the authors inferred that participants were habitual before the test based on some criteria, but then transitioned to goal-directed behavior based on a different criterion. While I agree with the authors' comment that "Whether the initiation of the trained motor sequences in experiment 3 (arbitration) is underpinned by an action-outcome association (or not) has no bearing on whether those sequences were under stimulus-response control after training (experiment 1)," they implicitly assert a shift from habit to goal-directed behavior without providing evidence that relies on the same probed mechanism.
Therefore, I think it would be more cautious to refer to this test as solely an outcome revaluation test. Again, the results of this test, if anything, provide evidence that the fourth criterion was tested but not met, suggesting participants have not become habitual (or at least undermines this option).
Author Response
The following is the authors’ response to the original reviews.
Public Reviews:
Reviewer #1 (Public Review):
Strengths
This paper is well situated theoretically within the habit learning/OCD literature.
Daily training in a motor-learning task, delivered via smartphone, was innovative, ecologically valid and more likely to assay habitual behaviors specifically. Daily training is also more similar to studies with non-humans, making a better link with that literature. The use of a sequential-learning task (cf. tasks that require a single response) is also more ecologically valid.
The in-laboratory tests (after the 1 month of training) allowed the researchers to test if the OCD group preferred familiar, but more difficult, sequences over newer, simpler sequences.
The authors achieved their aims in that two groups of participants (patients with OCD and controls) engaged with the task over the course of 30 days. The repeated nature of the task meant that 'overtraining' was almost certainly established, and automaticity was demonstrated. This allowed the authors to test their hypotheses about habit learning. The results are supportive of the authors' conclusions.
Response: We truly appreciate the positive assessment of referee 1, particularly the consideration that our study is theoretically strong and that ‘the results are supportive of the authors' conclusions’. This is an important external endorsement of our conclusions, contrasting somewhat with the views of referee 2.
Weaknesses
The sample size was relatively small. Some potentially interesting individual differences within the OCD group could have been examined more thoroughly with a bigger sample (e.g., preference for familiar sequences). A larger sample may have allowed the statistical testing of any effects due to medication status. The authors were not able to test one criterion of habits, namely resistance to devaluation, due to the nature of the task.
Response: We agree with the reviewer that the proof of principle established in our study opens new avenues for research into the psychological and behavioral determinants of the heterogeneity of this clinical population. However, considering the study timeline and the pandemic constraints, a bigger sample was not possible. Our sample can indeed be considered small if one compares it with current online studies, which do not require in-person/laboratory testing and are thus much easier to recruit for and conduct. However, given the nature of our protocol (with 2 demanding test phases, 1-month engagement per participant, and the inclusion of OCD patients without comorbidities only) and the fact that this study also involved laboratory testing, we consider our sample size reasonable and comparable to other laboratory studies (typically between 30 and 50 participants per group).
This article is likely to be impactful -- the delivery of a task across 30 days to a patient group is innovative and represents a new approach for the study of habit learning that is superior to an in-laboratory approach.
An interesting aspect of this manuscript is that it prompts a comparison with previous studies of goal-directed/habitual responding in OCD that used devaluation protocols, and which may have had their effects due to deficits in goal-directed behavior and not enhanced habit learning per se.
Response: Thank you for acknowledging the impact of our study, in particular the unique ability of our task to interrogate the habit system.
Reviewer #2 (Public Review):
In this study, the researchers employed a recently developed smartphone application to provide 30 days of training on action sequences to both OCD patients and healthy volunteers. The study tested learning and automaticity-related measures and investigated the effects of several factors on these measures. Upon training completion, the researchers conducted two preference tests comparing a learned and an unlearned action sequence under different conditions. While the study provides some interesting findings, I have a few substantial concerns:
- Throughout the entire paper, the authors' interpretations and claims revolve around the domain of habits and goal-directed behavior, despite the methods and evidence clearly focusing on motor sequence learning/procedural learning/skill learning. There is no evidence to support this framing and interpretation and thus I find them overreaching and hyperbolic, and I think they should be avoided. Although skills and habits share many characteristics, they are meaningfully distinguishable and should not be conflated or mixed up. Furthermore, if anything, the evidence in this study suggests that participants attained procedural learning, but these actions did not become habitual, as they remained deliberate actions that were not chosen to be performed when they were not in line with participants' current goals.
Response: We acknowledge that the research on habit learning is a topic of current controversy, especially when it comes to how to induce and measure habits in humans. Therefore, within this context referee 2's criticism could be expected. Across distinct fields of research, different methodologies have been used to measure habits, which represent relatively stereotyped and autonomous behavioral sequences enacted in response to a specific stimulus without consideration, at the time of initiation of the sequence, of the value of the outcome or any representation of the relationship that exists between the response and the outcome. Hence these are stimulus-bound responses which may or may not require the implementation of a skill during subsequent performance. Behavioral neuroscientists define habits similarly, as stimulus-response associations which are independent of reward or outcome, and use devaluation or contingency degradation strategies to probe habits (Dickinson and Weiskrantz, 1985; Tricomi et al., 2009). Others conceptualize habits as a form of procedural memory, along with skills, and use motor sequence learning paradigms to investigate and dissect different components of habit learning such as action selection, execution and consolidation (Abrahamse et al., 2013; Doyon et al., 2003; Squire et al., 1993). It is also generally agreed that the autonomous nature of habits and the fluid proficiency of skills are both usually achieved with many hours of training or practice, respectively (Haith and Krakauer, 2018).
We consider that Balleine and Dezfouli (2019) made an excellent attempt to bring all these different criteria within a single framework, which we have followed. We also consider that our discussion in fact followed a rather cautious approach to interpretation solely in terms of goal-directed versus habitual control.
Referee 2 does not actually specify criteria by which they define habits and skills, except for asserting that skilled behavior is goal-directed, without mentioning what the actual goal of the implementation of such skill is in the present study: the fulfillment of a habit? We assume that their definition of habit hinges on the effects of devaluation as a single criterion of habit, which according to Balleine and Dezfouli (2019) is only 1 of their 4 listed criteria. We carefully addressed this specific criterion in our manuscript: “We were not, however, able to test the fourth criterion, of resistance to devaluation. Therefore, we are unable to firmly conclude that the action sequences are habits rather than, for example, goal-directed skills. Regardless of whether the trained action sequences can be defined as habits or goal-directed motor skills, it has to be considered…”. Therefore, we took due care in our conclusions concerning habits and thus found the referee’s comment misleading and unfair.
We note that our trained motor sequences did in fact fulfil the other 3 criteria listed by Balleine and Dezfouli (2019), unlike many studies employing only devaluation (e.g. Tricomi et al 2009; Gillan et al 2011). Moreover, we cited a recent study using very similar methodology where the devaluation test was applied and shown to support the habit hypothesis (Gera et al., 2022).
Whether the initiation of the trained motor sequences in experiment 3 (arbitration) is underpinned by an action-outcome association (or not) has no bearing on whether those sequences were under stimulus-response control after training (experiment 1). Transitions between habitual and goal-directed control over behavior are quite well established in the experimental literature, especially when choice opportunities become available (Bouton, 2021; Frölich et al., 2023) or when a new goal-directed schema is recruited to fulfill a habit (Fouyssac et al., 2022). This switching between habits and goal-directed responding may reflect the coordination of these systems in producing effective behavior in the real world.
Fouyssac M, Peña-Oliver Y, Puaud M, Lim NTY, Giuliano C, Everitt BJ, Belin D (2021). Negative urgency exacerbates relapse to cocaine seeking after abstinence. Biological Psychiatry. doi:10.1016/j.biopsych.2021.10.009
Frölich S, Esmeyer M, Endrass T, Smolka MN, Kiebel SJ (2023). Interaction between habits as action sequences and goal-directed behavior under time pressure. Front Neurosci 16:996957. doi:10.3389/fnins.2022.996957
Bouton ME (2021). Context, attention, and the switch between habit and goal-direction in behavior. Learn Behav 49:349–362. doi:10.3758/s13420-021-00488-z
- Some methodological aspects need more detail and clarification.
- There are concerns regarding some of the analyses, which require addressing.
Response: We thank referee 2 for their detailed review of the methods and analyses of our study and for the helpful feedback, which clearly helps improve our manuscript. We will clarify the methodological aspects in detail and conduct the suggested analysis. Please see below our answers to the specific points raised.
Introduction:
- It is stated that "extensive training of sequential actions would more rapidly engage the 'habit system' as compared to single-action instrumental learning". In an attempt to describe the rationale for this statement the authors describe the concept of action chunking, its benefits and relevance to habits but there is no explanation for why sequential actions would engage the habit system more rapidly than a single-action. Clarifying this would be helpful.
Response: We agree that there is no evidence that action sequences become habitual more readily than single actions, although action sequences clearly allow ‘chunking’ and thus likely engage neural networks, including the putamen, which are implicated in habit learning as well as skill. In our revised manuscript we will instead state: “we have recently postulated that extensive training of sequential actions could be a means for rapidly engaging the ‘habit system’ (Robbins et al., 2019)”
DONE on page 2
- In the Hypothesis section the authors state: “we expected that OCD patients... show enhanced habit attainment through a greater preference for performing familiar app sequences when given the choice to select any other, easier sequence”. I find it particularly difficult to interpret preference for familiar sequences as enhanced habit attainment.
Response: We agree that choice of the familiar response sequence should not be a necessary criterion for habitual control although choice for a familiar sequence is, in fact, not inconsistent with this hypothesis. In a recent study, Zmigrod et al (2022) found that 'aversion to novelty' was a relevant factor in the subjective measurement of habitual tendencies. It should also be noted that this preference was present in patients with OCD. If one assumes instead, like the referee, that the familiar sequence is goal-directed, then it contravenes the well-known 'egodystonia' of OCD which suggests that such tendencies are not goal-directed.
To clarify our hypothesis, we will amend the sentence to the following: “Finally, we expected that OCD patients would generally report greater habits, as well as attribute higher intrinsic value to the familiar app sequences manifested by a greater preference for performing them when given the choice to select any other, easier sequence”.
DONE in page 5. We have now rephrased it: “Additionally, we hypothesized that OCD patients would generally display stronger habits and assign greater intrinsic value to the familiar app sequences, evidenced by a marked preference for executing them even when presented with a simpler alternative sequence.”
A few notes on the task description and other task components:
- It would be useful to give more details on the task. This includes more details on the time/condition of the gradual removal of visual and auditory stimuli and also on the within practice dynamic structure (i.e., different levels appear in the video).
Response: These details will be included in the revised manuscript. Thank you for pointing out the need for further clarification of the task design.
Done on page 7
- Some more information on engagement-related exclusion criteria would be useful (what happened if participants did not use the app for more than one day, how many times were allowed to skip a day etc.).
Response: This additional information will be added to the revised manuscript. If participants missed more than 2 days of training, the researcher sent a reminder asking them to catch up. If the participant did not respond and a third day was skipped, the researcher called to understand the reasons for the lack of engagement and to gauge motivation. A participant would be excluded if more than 5 consecutive days of training were missed. Only 2 participants were excluded for lack of engagement.
Done on page 8
- According to the (very useful) video demonstrating the task and the paper describing the task in detail (Banca et al., 2020), the task seems to include other relevant components that were not mentioned in this paper. I refer to the daily speed test, the daily random switch test, and daily ratings of each sequence's enjoyment and confidence of knowledge.
If these components were not included in this procedure, then the deviations from the procedure described in the video and Banca al. (2020) should be explicitly mentioned. If these components were included, at least some of them may be relevant, at least in part, to automaticity, habitual action control, formulation of participants' enjoyment from the app etc. I think these components should be mentioned and analyzed (or at least provide an explanation for why it has been decided not to analyze them).
This is also true for the reward removal (extinction) from the 21st day onwards which is potentially of particular relevance for the research questions.
Response: The task procedure was indeed the same as detailed in Banca et al., 2020. We did not include these extra components in this current manuscript for reasons of succinctness and because the manuscript was already rather longer than a common research article, given that we present three different, though highly inter-dependent, experiments in order to answer key interrelated questions in an optimal manner. However, since referee 2 considers this additional analysis to be important, we will be happy to include it in the supplementary material of the revised manuscript.
These additional components of the task as well as the respective analysis are now described in the Supplementary Materials.
Training engagement analysis:
- I find referring to the number of trials, including both successful and unsuccessful trials, as representing participants' "commitment to training" (e.g. in Figure legend 2b) potentially inadequate. Given that participants need at least 20 successful trials to complete each practice, more errors would lead to more trials. Therefore, I think this measure may mostly represent weaker performance (of the OCD patients, as shown in Figure 2b). Accordingly, I find the number of performed practice runs, as used in Figure 2a (which should be perfectly aligned with the number of successful trials), a "clean" and proper measure of engagement/commitment to training.
Response: We acknowledge the referee’s concern on this matter and agree to replace the y-axis variable of Figure 2b with the number of performed practices (thus aligning with Figure 2a). This amendment will remove any potential effect of weaker performance on the engagement measurement and will provide clearer results.
We have now decided to remove this figure as it does not add much to Figure 2a. Instead, we replaced Figures 2b and 2c with new plots, following the new analysis linked to the next reviewer request (point 10).
- Also, to provide stronger support for the claim about different diurnal training patterns (as presented in Figure 2c and the text) between patients and healthy individuals, it would be beneficial to conduct a statistical test comparing the two distributions. If the results of this test are not significant, I suggest emphasizing that this is a descriptive finding.
Response: Done, see revised Figures 2b and 2c. We have assessed the diurnal training patterns within each group using circular statistics, followed by independent-sample testing of those circular distributions with Watson’s U2 test (Landler et al., 2021). While OCD participants show a significant group-level peak in practice at ~18:00 and HV participants an earlier significant peak at ~15:00, Watson’s U2 test did not find significant between-group differences.
- Landler L, Ruxton GD, Malkemper EP. Advice on comparing two independent samples of circular data in biology. Scientific reports. 2021 Oct 13;11(1):20337.
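As a rough illustration of the circular approach (not the authors' analysis code; the hour values below are hypothetical), the peak practice time can be estimated by mapping clock times onto the unit circle and taking the circular mean. The Watson U2 comparison itself would come from a dedicated circular-statistics package.

```python
import numpy as np

def circular_peak_hour(hours):
    # Map hour-of-day (0-24) onto the unit circle, take the circular
    # mean, and map the mean angle back to hours. This handles the
    # midnight wrap-around that an arithmetic mean would get wrong.
    angles = np.asarray(hours, dtype=float) / 24.0 * 2.0 * np.pi
    mean_angle = np.angle(np.exp(1j * angles).mean())
    return (mean_angle % (2.0 * np.pi)) / (2.0 * np.pi) * 24.0

# Hypothetical practice times clustered around 18:00
print(round(circular_peak_hour([17.5, 18.0, 18.5, 19.0, 17.0, 18.2]), 1))  # -> 18.0
```

Note that for times straddling midnight (e.g. 23:30 and 00:30) the circular mean correctly returns ~0:00, whereas an arithmetic mean would return 12:00.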
Learning results:
- When describing the Learning results (p10) I think it would be useful to provide the descriptive stats for the MT0 parameter (as done above for the other two parameters).
Response: Thank you for pointing this out. The descriptive stats for MT0 will be added to the revised version of the manuscript.
Done on page 11
- Sensitivity of sequence duration and IKI consistency (C) to reward:
I think it is important to add details on how incorrect trials were handled when calculating ∆MT (or C) and ∆R, specifically in cases where the trial preceding a successful trial was unsuccessful. If incorrect trials were simply ignored, this may not adequately represent trial-by-trial changes, particularly when testing the effect of a trial's outcome on performance change in the next trial.
Response: This is an important question. Our analysis protocol was designed to ensure that incorrect trials do not contaminate or confound the results. To estimate the trial-to-trial difference in ∆MT (or C) and ∆R, we exclusively included pairs of contiguous trials where participants achieved correct performance and received feedback scores for both trials. For example, if a participant made a performance error on trial 23, we did not include ∆R or ∆MT estimates for the pairs of trials 23-22 and 24-23. Instead of excluding incorrect trials from our analyses, we retained them in our time series but assigned them a NaN (not a number) value in Matlab. As a result, ∆R and ∆MT were not defined for those two pairs of trials. Similarly for C. This approach ensured that our analyses are not confounded by incremental or decremental feedback scores between non-contiguous trials. In the past, when assessing the timing of correct actions during skilled sequence performance, we also considered events that were preceded and followed by correct actions. This excluded effects such as post-error slowing from contaminating our results (Herrojo Ruiz et al., 2009, 2019). Therefore, we do not believe that any further reanalysis is required.
Ruiz MH, Jabusch HC, Altenmüller E. Detecting wrong notes in advance: neuronal correlates of error monitoring in pianists. Cerebral cortex. 2009 Nov 1;19(11):2625-39.
Bury G, García-Huéscar M, Bhattacharya J, Ruiz MH. Cardiac afferent activity modulates early neural signature of error detection during skilled performance. NeuroImage. 2019 Oct 1;199:704-17.
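The NaN-masking scheme can be sketched as follows (a Python re-expression of the Matlab logic described above; variable names and data are illustrative, not the authors' actual code):

```python
import numpy as np

def trialwise_deltas(mt, correct):
    # Delta MT(n+1) = MT(n+1) - MT(n), defined only when BOTH trials of
    # the contiguous pair were performed correctly; NaN otherwise, so
    # error trials never contribute to the trial-to-trial differences.
    mt = np.asarray(mt, dtype=float)
    ok = np.asarray(correct, dtype=bool)
    delta = np.full(mt.size, np.nan)
    valid = ok[1:] & ok[:-1]              # both trials of the pair correct
    delta[1:][valid] = mt[1:][valid] - mt[:-1][valid]
    return delta

# An error on trial 2 (0-indexed) invalidates the pairs (1,2) and (2,3)
mt      = [900, 850, 870, 820, 800]
correct = [True, True, False, True, True]
print(trialwise_deltas(mt, correct).tolist())  # [nan, -50.0, nan, nan, -20.0]
```

The same masking would apply to ∆R and to the IKI consistency measure C.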
- I have a serious concern with respect to how the sensitivity of sequence duration to reward is framed and analyzed. Since reward is proportional to performance, a reduction in reward essentially indicates a trial with poor performance, and thus even regression to the mean (along with a floor effect in performance [asymptote]) could explain the observed effects. It is possible that even occasional poor performance could lead to a participant demonstrating this effect, potentially regardless of the reward. Accordingly, the reduced improvement in performance following a reward decrease as a function of training length described in the Figure 5b legend may reflect training-induced increased performance that leaves less room for improvement after poor trials, which are no longer as poor as before. To address this concern, controlling for performance (e.g., by taking into consideration the baseline MT for the previous trial) may be helpful. If the authors can conduct such an analysis and still show the observed effect, it would establish the validity of their findings.
Response: Thank you for raising this point. This has been done, see updated Figures 5 and 6. After normalizing the ∆MT(n+1) := MT(n+1) – MT(n) difference values by dividing them by the baseline MT(n) at trial n, we obtain the same results. Similar results are also obtained for IKI consistency (C).
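For concreteness, our reading of the normalisation described here is simply (a minimal sketch, not the authors' code):

```python
def norm_delta_mt(mt_next, mt_prev):
    # Normalised trial-to-trial change: (MT(n+1) - MT(n)) / MT(n),
    # i.e. the change expressed relative to the baseline on trial n,
    # which controls for the performance level of the previous trial.
    return (mt_next - mt_prev) / mt_prev

print(norm_delta_mt(720.0, 800.0))  # -> -0.1, a 10% speed-up
```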
See below our initial response from June 2023.
Thank you for raising this point. Figure 5b illustrates two distinct effects of reward changes on behavioral adaptation, which are expected based on previous research.
I. Practice effects: Firstly, we observe that as participants progress across bins of practice, the degree of improvement in behavior (reflected by faster movement time, MT) following a decrease in reward (∆R−) diminishes, consistent with our expectations based on previous work. Conversely, we found that ∆MT does not change across bins of practices following an increase in reward (∆R+).
We appreciate the reviewer’s suggestion regarding controlling for the reference movement time (MT) in the previous trial when examining the practice effect in the p(∆T|∆R−) and p(∆T|∆R+) distributions. In the revised manuscript, we will conduct the proposed control analysis to better understand whether the sensitivity of MT to score decrements changes across practice when normalising MT to the reference level on each trial. But see below for a preliminary control analysis.
II. Asymmetry of the effect of ∆R− and ∆R+ on performance: Figure 5b also depicts the distinct impact of score increments and decrements on behavioural changes. When aggregating data across practice bins, we consistently observed that the centre of the p(∆T|∆R−) distribution was smaller (more negative) than that of p(∆T|∆R+). This suggests that participants exhibited a greater acceleration following a drop in scores compared to a relative score increase, and this effect persisted throughout the practice sessions. Importantly, this enhanced sensitivity to losses or negative feedback (or relative drops in scores) aligns with previous research findings (Galea et al., 2015; Pekny et al., 2014; van Mastrigt et al., 2020).
We have conducted a preliminary control analysis to exclude the potential impact that reference movement time (MT) values could have on our analysis. We have assessed the asymmetry between behavioural responses to ∆R− and ∆R+ using the following analysis: We estimated the proportion of trials in which participants exhibited speed-up (∆T < 0) or slow-down (∆T > 0) behaviour following ∆R− and ∆R+ across different practice bins (bins 1 to 4). By discretising the series of behavioural changes (∆T) into binary values (+1 for slowing down, -1 for speeding up), we can assess the type of changes (speed-up, slow-down) without the absolute ∆T or T values contributing to our results. We obtained several key findings:
• Consistent with expectations (sanity check), participants exhibited more instances of speeding up than slowing down across all reward conditions.
• Participants demonstrated a higher frequency of speeding up following ∆R− compared to ∆R+, and this asymmetry persisted throughout the practice sessions (a greater proportion of −1 events than +1 events). In the p(∆T|∆R+) distribution, 53% of events were speed-up events in the first bin of practice and 55% in the last bin. Regarding p(∆T|∆R−), 63% of events were speed-up events in each bin of practice, with this proportion exhibiting no change over time.
• Accordingly, the asymmetry of reward changes on behavioural adaptations, as revealed by this analysis, remained consistent across the practice bins.
Thus, these preliminary findings provide an initial response to referee 2 and offer valuable insights into the asymmetrical effects of positive/negative reward changes on behavioural adaptations. We plan to include these results in the revised manuscript, as well as the full control analysis suggested by the referee. We will further expand upon their interpretation and implications.
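For concreteness, the discretisation procedure described above can be sketched as follows (an illustrative reconstruction, not the authors' actual analysis code; the function name and inputs are our own):

```python
import numpy as np

def speedup_proportions(delta_T, delta_R, n_bins=4):
    """Illustrative sketch: discretise trial-to-trial changes in sequence
    duration (delta_T) into -1 (speed-up) / +1 (slow-down) and compute the
    proportion of speed-up events following reward decrements (delta_R < 0)
    and increments (delta_R > 0) within each practice bin."""
    delta_T = np.asarray(delta_T, dtype=float)
    delta_R = np.asarray(delta_R, dtype=float)
    sign_T = np.where(delta_T < 0, -1, 1)       # -1 = speed-up, +1 = slow-down
    bins = np.array_split(np.arange(len(delta_T)), n_bins)
    out = []
    for idx in bins:
        neg = idx[delta_R[idx] < 0]             # trials following a score drop
        pos = idx[delta_R[idx] > 0]             # trials following a score rise
        p_neg = np.mean(sign_T[neg] == -1) if len(neg) else np.nan
        p_pos = np.mean(sign_T[pos] == -1) if len(pos) else np.nan
        out.append((p_neg, p_pos))
    return out
```

Here, `delta_T` would hold trial-to-trial changes in sequence duration and `delta_R` the corresponding reward changes; the function returns, for each practice bin, the proportion of speed-up events following ∆R− and ∆R+, without the absolute ∆T or T values contributing to the result.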
- Another way to support the claim of reward change directionality effects on performance (rather than performance on performance), at least to some extent, would be to analyze the data from the last 10 days of the training, during which no rewards were given (pretending for analysis purposes that the reward was calculated and presented to participants). If the effect persists, it is less likely that the effect in question can be attributed to the reward dynamics.
Response: The reviewer’s concern is addressed in the previous question. Also, this analysis would not be possible because our Gaussian fit analyses use the time series of continuous reward scores, in which ∆R− or ∆R+ are embedded. These events cannot be analyzed once reward feedback is removed because we do not have behavioral events following ∆R− or ∆R+ anymore.
Done
- This concern is also relevant and should be considered with respect to the sensitivity of IKI consistency (C) to reward. While the relationship between previous reward/performance and future performance in terms of C is of a different structure, similar potential confounding effects could still be present.
Response: We will conduct this analysis for the revised manuscript, similarly to the control analysis suggested by referee 2 on MT. Our preliminary control analysis, as explained above, suggests that the fundamental asymmetry in the effect of ∆R+ and ∆R− on behavioral changes persists when excluding the impact of reference performance values in our Gaussian fit analysis.
Done. See updated Figure 6. The results are very similar once we normalize the IKI consistency index C with the IKI of the baseline performance at trial n.
- Another related question (which is also of general interest) is whether the preferred app sequence (as indicated by the participants for Phase B) was consistently the one that yielded more reward? Was the continuous sequence the preferred one? This might tell something about the effectiveness of the reward in the task.
Response: We have now conducted this analysis. There is in fact no evidence to conclude that the continuously rewarded sequence was the preferred one. The result shows that 54.5% of HV and 29% of the OCD sample considered the continuous sequence to be their preferred one, a statistically nonsignificant difference. Note that this preference may not necessarily be linked simply to programmed reward. The overall preference may be influenced by many other factors, such as the aesthetic appeal of particular combinations of finger movements.
Regarding both experiments 2 and 3:
- The change in context in experiments 2 and 3 is substantial and includes many different components. These changes should be mentioned in more detail in the Results section before describing the results of experiments 2 and 3.
Response: Following the referee’s advice, we will move these details (currently written in the Methods section) to the Results section, where we introduce Phase B and before describing the results of experiments 2 and 3.
Done in page 21
Experiment 2:
- In Experiment 2, the authors sometimes refer to the "explicit preference task" as testing for habitual and goal-seeking sequences. However, I do not think there is any justification for interpreting it as such. The other framings used by the authors - testing whether trained action sequences gain intrinsic/rewarding properties or value, and preference for familiar versus novel action sequences - are more suitable and justified. In support of the point I raised here, assigning intrinsic rewarding properties to the learned sequences and thereby preferring these sequences can be conceptually aligned with goal-directed behavior just as much as it could be with habit.
Response: We clearly defined the theoretical framing of experiment 2 as a test of whether trained action sequences gain intrinsic value and we are pleased to hear that the referee agrees with this framing. If the referee is referring to the paragraph below (in the Discussion), we actually do acknowledge within this paragraph that a preference for the trained sequences can either be conceptually aligned with a habit OR a goal-directed behavior.
“On the other hand, we are describing here two potential sources of evidence in favor of enhanced habit formation in OCD. First, OCD patients show a bias towards the previously trained, apparently disadvantageous, action sequences. In terms of the discussion above, this could possibly be reinterpreted as a narrowing of goals in OCD (Robbins et al., 2019) underlying compulsive behavior, in favor of its intrinsic outcomes”
This narrowing of goals model of OCD refers to a hypothetically transitional stage of compulsion development driven by behavior having an abnormally strong, goal-directed nature, typically linked to specific values and concerns.
If the referee is referring to the penultimate sentence of the hypothesis section, this has been amended in response to Q5. We cannot find any other instances in this manuscript stating that experiment 2 is a test of habitual or goal-directed behavior.
Experiment 3:
- Similar to Experiment 2, I find the framing of arbitration between goal-directed/habitual behavior in Experiment 3 inadequate and unjustified. The results of the experiment suggest that participants were primarily goal-directed and there is no evidence to support the idea that this reevaluation led participants to switch from habitual to goal-directed behavior.
Also, given the explicit choice of the sequence to perform participants had to make prior to performing it, it is reasonable to assume that this experiment mainly tested bias towards familiar sequence/stimulus and/or towards intrinsic reward associated with the sequence in value-based decision making.
Response: This comment is aligned with (and follows) the referee’s criticism of experiment 1 not achieving automatic and habitual actions. We have addressed this matter above, in response 1 to Referee 2.
Mobile-app performance effect on symptomatology: exploratory analyses:
- Maybe it would be worth testing if the patients with improved symptomatology (that contribute some of their symptom improvement to the app) also chose to play more during the training stage.
Response: We have conducted an analysis to address this relevant question. There is no correlation between the YBOCS score change and the number of total practices, meaning that the patients whose symptomatology improved post training did not necessarily choose to play the app more during the training stage (rs = 0.25, p = 0.15). Additionally, we have statistically compared the improvers (patients with reduced YBOCS scores post-training) and the non-improvers (patients with unchanged or increased YBOCS scores post-training) on their number of completed app practices during the training phase, and no differences were observed (U = 169, p = 0.19).
The result from the correlational analysis has been added to the revised manuscript (page 28).
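As a hedged illustration of the rank-correlation test reported above (Spearman's rs), a minimal implementation without tie correction might look like this (in practice one would use a library routine such as scipy.stats.spearmanr; the function name and inputs are our own):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation without tie correction -- an illustrative
    stand-in for scipy.stats.spearmanr. Inputs are paired observations,
    e.g. per-patient YBOCS change and number of completed app practices
    (variable names hypothetical)."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

Because the statistic depends only on ranks, it is robust to the skewed distributions typical of practice counts, which is presumably why a Spearman rather than Pearson correlation was reported.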
Discussion:
- Based on my earlier comments highlighting the inadequacy and mis-framing of the work in terms of habit and goal-directed behavior, I suggest that the discussion section be substantially revised to reflect these concerns.
Response: We do not agree that the work is either "inadequate or mis-framed" and will not therefore be substantially revising the Discussion. We will however clarify further the interpretation we have made and make explicit the alternative viewpoint of the referee. For example, we will retitle experiment 3 as “Re-evaluation of the learned action sequence: possible test of goal/habit arbitration” to acknowledge the referee’s viewpoint as well as our own interpretation.
Done
- In the sentence "Nevertheless, OCD patients disadvantageously preferred the previously trained/familiar action sequence under certain conditions" the term "disadvantageously" is not necessarily accurate. While there was potentially more effort required, considering the possible presence of intrinsic reward and chunking, this preference may not necessarily be disadvantageous. Therefore, a more cautious and accurate phrasing that better reflects the associated results would be useful.
Response: We recognize that the term "disadvantageously" may be semantically ambiguous for some readers and therefore we will remove it.
Done
Materials and Methods:
- The authors mention: "The novel sequence (in condition 3) was a 6-move sequence of similar complexity and difficulty as the app sequences, but only learned on the day, before starting this task (therefore, not overtrained)." - for the sake of completeness, more details on the pre-training done on that day would be useful.
Response: Details of the learning procedure of the novel sequence (in condition 3, experiment 3) will be provided in the methods of the revised version of the manuscript.
Done in page 40
Minor comments:
- In the section discussing the sensitivity of sequence duration to reward, the authors state that they only analyzed continuous reward trials because "a larger number of trials in each subsample were available to fit the Gaussian distributions, due to feedback being provided on all trials." However, feedback was also provided on all trials in the variable reward condition, even though the reward was not necessarily aligned with participants' performance. Therefore, it may be beneficial to rephrase this statement for clarity.
Response: We will follow this referee’s advice and will rephrase the sentence for clarity.
Done. See page 16.
- With regard to experiment 2 (Preference for familiar versus novel action sequences) in the following statement "A positive correlation between COHS and the app sequence choice (Pearson r = 0.36, p = 0.005) further showed that those participants with greater habitual tendencies had a greater propensity to prefer the trained app sequence under this condition." I find the use of the word "further" here potentially misleading.
Response: The word "further" will be removed.
Done
Reviewer #1 (Recommendations For The Authors):
This is a very interesting manuscript, which was a pleasure to review. I have some minor comments you may wish to consider.
- I believe that it is possible to include videos as elements in eLife articles - please consider if you can do this to demonstrate the action sequence on the smartphone. I followed the YouTube video, and it was very helpful to see exactly what participants did, but it would be better to attach the video directly, if possible.
Response: This is a great idea and we will definitely attach our video demonstrating the task to the revised manuscript (Version of Record) if the eLife editors allow.
We ask permission to the editor to add the video
- The abstract states that the study uses a "novel smartphone app" but is the same one as described in Banca et al. Suggest writing simply "smartphone app".
Response: We will remove the word novel.
Done
- Some of the hypotheses described in the second half of the Hypothesis section could be stated more explicitly. For example: "We also hypothesized that the acquisition of learning and automaticity would differ between the two action sequences based on their associated rewarded schedule (continuous versus variable) and reward valence (positive or negative)." The subsequent sentence explains the prediction for the schedule but what is the hypothesized direction for reward valence? More detail is subsequently given on p. 14, Results, but it would be better to bring these details up to the Introduction. "We additionally examined differential effects of positive and negative feedback changes on performance to build on previous work demonstrating enhanced sensitivity to negative feedback in patients with OCD (Apergis-Schoute et al 2023, Becker et al., 2014; Kanen et al., 2019)." In general, the second part of the Hypothesis section is a bit dense, sometimes with two predictions per sentence. It could be useful for the reader if hypotheses were enumerated and/or if a distinction was made among the hypotheses with respect to their importance.
Response: Thank you for pointing out the need for clarity in our hypothesis section. This is a very important point and we will carefully rewrite our hypotheses in the revised manuscript to make them as clear as possible.
We fully revised the hypothesis section, on page 5, following this reviewer’s suggestion. We think this section is much clearer now, in our revised manuscript.
- Did medication status correlate with symptom severity in the OCD group (e.g., higher symptoms for the 6 participants on SSRI+antipsychotics?). Could this, or SSRI-only status, have impacted results in any way? I appreciate that there is no way to test medication status statistically but readers may be interested in your thoughts on this aspect.
Response: We have now conducted exploratory analyses to assess the potential effect of medication on the following outcome measures: app engagement (as measured by completed practices), explicit preference, and YBOCS change post-training. The patients who were on combined therapy (SSRIs + antipsychotic) did not perform significantly differently on these measures compared to the remaining patients, and no other effects of interest were observed. Their symptomatology was indeed slightly more severe, but the difference was not statistically significant [Y-BOCS combined = 26.2 (6.5); Y-BOCS SSRI only = 23.8 (6.1); Y-BOCS No Med = 23.8 (2.2), mean (std)]. Only one patient on combined therapy showed symptom improvement after the app training, another became worse, and the remaining patients on combined therapy remained stable during the month.
Palminteri et al. (2011) found that unmedicated OCD patients exhibited instrumental learning deficits, which were fully alleviated with SSRI treatment. Therefore, it is possible that the SSRI medication (present in our sample) may have reduced habit formation and facilitated behavioral arbitration. However, since this effect goes against the habit hypothesis, it is unlikely to have confounded our measure of automaticity. If anything, medication rendered experiments 2 and 3 more goal-oriented. We agree that further studies are warranted to address the effect of SSRIs on these measures.
- You could explain earlier why devaluation could not be tested here (it is only explained in the Limitations section near the end)
Response: The revised manuscript will be amended to account for this note.
Done in page 25.
- Capitalize 'makey-makey', I didn't realize there was a product called Makey Makey until I Googled it.
Response: Sure. We will capitalize 'Makey-Makey'. Thank you for pointing this out!
Done
Reviewer #2 (Recommendations For The Authors):
Recommendations for the authors (ordered by the paper sections):
In the introduction
- regarding this part "We used a period of 1-month's training to enable effective consolidation, required for habitual action control or skill retention to occur. This acknowledged previous studies showing that practice alone is insufficient for habit development as it also requires off-line consolidation computations, through longer periods of time (de Wit et al., 2018) and sleep (Nusbaum et al., 2018; Walker et al., 2003)." I advise the authors to re-check whether what is attributed here to de Wit et al. (2018) is indeed justified (if I remember correctly they have not mentioned anything about off-line consolidation computations).
Response: When we revise the manuscript, we will remove the de Wit et al. (2018) citation from this sentence.
Done
In the Outline paragraph
- It states: "We continuously collected data online, in real time, thus enabling measurements of procedural learning as well as automaticity development." I think this wording implies that the fact that the data were collected online in real time was advantageous in that it enabled measurements of procedural learning and automaticity development, which in my understanding is not the case.
Response: To make this sentence clearer, we will change it to the following: ‘We continuously collected data online, to monitor engagement and performance in real time and to enable acquisition of sufficient data to analyze, a posteriori, procedural learning and automaticity development’.
Done in page 4: ‘We collected data online continuously to monitor engagement and performance in real-time. This approach ensured we acquired sufficient data for subsequent analysis of procedural learning and automaticity development’.
- In the final sentence of this paragraph "or and" should be changed to "or/and".
Response: This was a typo. The word ‘and’ will be removed.
Done
- In Figure 1c - Note that in the figure legend it says "Each sequence comprises 3 single press moves, 2 two-finger moves..." whereas in the example shown in the figure it's the other way around (2 single press moves and 3 two-finger moves).
Response: Thank you so much for spotting this! The example shown in the figure is incorrect. We apologize for the mistake. It should depict 3 single press moves, 2 two-finger moves and 1 three-finger move. The figure will be amended.
Done
In the results section:
- Regarding the "were followed by a positive ring tone and the unsuccessful ones by a negative ring tone", I suggest mentioning that there was also a positive visual (rewarding) effect.
Response: Thank you. A mention of the visual effect will be added for both the positive (successful) and negative (unsuccessful) trials. Done in page 7
- p 10. - Note a typo in the following sentence where the word "which" appears twice consecutively:
"Furthermore, both groups exhibited similar motor durations at asymptote which, which combined with the previous conclusion, indicates that OCD patients improved their motor learning more than controls, but to the same asymptote."
Response: Thank you for spotting this typo. The second word will be removed. Done
- I have a few suggestions with respect to Figure 3:
- keeping the y-axes scale similar in all subplots would be more visually informative.
Here we kept the y-axes scale similar in all subplots, except one, where a different scale was needed to capture all the data.
- For the subplots in 3b I would recommend for the transparent regions, instead of the IQR, to use the median +/- 1.57 * IQR/sqrt(n) which is equivalent to how the notches are calculated in a box-plot figure (It is referred to as an approximate 95% confidence interval for the median). This should make the transparent area narrower and thus better communicate the results.
Done
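For reference, the notch-style interval suggested by the reviewer, median ± 1.57 × IQR/√n, can be sketched as follows (illustrative only; `notch_interval` is our own name):

```python
import numpy as np

def notch_interval(x):
    """Approximate 95% confidence interval for the median, as used for
    box-plot notches: median +/- 1.57 * IQR / sqrt(n). This is the
    shading rule the reviewer recommends for the transparent regions."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    half = 1.57 * iqr / np.sqrt(len(x))
    return med - half, med + half
```

For roughly Gaussian data this interval approximates a 95% confidence interval for the median, which is how box-plot notches are conventionally computed, and it is typically much narrower than the raw IQR.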
- I think the significant levels mentioned in figure legend 3b (which are referring to the group effect measured for each reward schedule type separately) is not mentioned in the text. While not crucial, maybe consider adding it in the text.
We don’t think this is necessary, and it may actually lead to confusion, because in the text we report a Kruskal–Wallis H test (the most appropriate statistical test), including the H and p values for the group and reward effects. Since in the figure we separated the analysis and plots for variable and continuous reward schedules (for visual purposes), we reported a U test separately for each reward schedule. Therefore, we consider that the correct statistics are reported in the appropriate places of the manuscript.
Response: Thank you for this very helpful suggestion. We will amend figure 3 accordingly.
- In the Automaticity results (pp. 12 and 13), when describing the descriptive statistics, the wrong parameter indicators are used (DL instead of CL and nD instead of nC).
Response: Thank you for noticing it. We will amend.
Done
- In Sensitivity of IKI consistency (C) to reward results:
In Figure 6a legend: with respect to "... and for reward increments (∆R+, purple) and decrements (∆R-, green)" - note that there are also additional colors indicating these ∆Rs.
Response: Done. We had used a 2 x 2 color scheme: green hues for ∆R-, and purple hues for ∆R+. Then, OCD is denoted by dark colors, and HV by light colors. This represents all four colors used in the figure. For instance, OCD and ∆R- is dark green, whereas OCD and ∆R+ is denoted by dark purple.
- p.21 - the YBOCS abbreviation appears before the full form is spelled out in the text.
Response: In the revised version, we will make sure the YBOCS abbreviation will be spelled out the first time it is mentioned.
Done in page 24
Experiments 2 and 3:
- If there is a reason behind presenting the conditions sequentially rather than using intermixed trials in experiments 2 and 3, it would be useful to mention it in the text.
Response: Experiment 2 could have used intermixed trials. However, we were concerned that the use of intermixed trials in experiment 3 would excessively increase the memory load of the task, which could then be a confound.
Done in page 41
- I wonder whether the presentation order of the conditions in experiments 2 and 3 affected participants' results? Maybe it is worth adding this factor to the analysis.
Response: As mentioned in both the Methods and Results sections, we counterbalanced all the conditions across participants in both experiments 2 and 3. This procedure controls for order effects.
Experiment 2:
- Regarding this sentence (pp. 21-22): "However, some participants still preferred the app sequence, specifically those with greater habitual tendencies, including patients who considered the app training beneficial." I think the part that mentions that there are "patients who considered the app training beneficial" appears below and it may confuse the reader. I suggest either providing a brief explanation or indicating that further details will be provided later in the text ("see below in...").
Response: We will clarify this section.
We added “see below exploratory analyses of ‘Mobile-app performance effect on symptomatology’” at the end of the sentence so that the reader knows this is further explained below. Page 25
- Finally, in addition to subgrouping maybe it is worth testing whether there is a correlation between the YBOCS score change and the app-sequences preference (as to learn if the more they change their YBOCS the more they prefer the learned sequences and vice versa?)
Response: Thank you for suggesting this relevant correlational analysis, which we have now conducted. Indeed, there is a correlation between the YBOCS score change and the preference for the app sequences, meaning that the greater the symptom improvement after the month of training, the greater the preference for the familiar/learned sequence. This is particularly the case for experimental condition 2, in which subjects are required to choose between the trained app sequence and any 3-move sequence (rs = 0.35, p = 0.04). A trend was observed for the correlation between the YBOCS score change and the preference for the app sequences in experimental condition 1 (app preferred sequence versus any 6-move sequence): rs = 0.30, p = 0.09.
This finding represents an additional corroboration of our conclusion that the app seems to be more beneficial to patients more prone to routine habits, who are somewhat more averse to novelty.
This analysis was added in page 24, 25 and page 35.
Experiment 3:
- You mention "The task was conducted in a new context, which has been shown to promote reengagement of the goal system (Bouton, 2021)." In my understanding this observation is true also for experiment 2. In such case it should be stated earlier (probably under: "Phase B: Tests of actionsequence preference and goal/habit arbitration").
Response: As answered above in (Q17), we will follow this referee 2’s suggestion and describe the contextual details of experiments 2 and 3 in the Results section, when we introduce Phase B.
Done in page 21.
- w.r.t. this sentence - "...that sequence (Figure 8b, no group effects (p = 0.210 and BF = 0.742, anecdotal evidence)" I would add what the anecdotal evidence refers to (as done in other parts of the paper), to prevent potential confusion.
Response: OK, this will be added.
Added on page 27
Discussion:
- w.r.t. "Here we have trained a clinical population with moderately high baseline levels of stress and anxiety, with training sessions of a higher order of magnitude than in previous studies (de Wit et al., 2018, 2018; Gera et al., 2022) (30 days instead of 3 days)." The Gera et al. 2022 (was more than 3 days), you probably meant Gera et al. 2023 ("Characterizing habit learning in the human brain at the individual and group levels: a multi-modal MRI study", for which 3 days is true).
Response: Thank you for pointing this out. We will keep the citation to Gera et al 2022 given its relevance to the sentence but we will remove the information inside the parenthesis. This amendment will solve the issue raised here.
Done in page 32
- w.r.t "to a simple 2-element sequence with less training (Gera et al., 2022)" - it's a 3-element sequence in practice.
Response: Thank you for this correction. We will amend this sentence accordingly.
Done in page 32
- (p.30) w.r.t "and enhanced error-related negativity amplitudes in OCD" - a bit more context of what the negative amplitudes refer to would be useful (So the reader understands it refers to electrophysiology).
Response: We will add a sentence in our revised manuscript addressing this matter.
This sentence has instead been removed in the revised manuscript.
Supplementary materials:
- under "Sample size for the reward sensitivity analysis":
It is stated "One practice corresponded to 20 correctly performed sequences. We therefore split the total number of correct sequences into four bins." I was not able to follow this reasoning here (20 correct trials in practice => splitting the data the 4 bins). More clarity here would be useful.
Response: We will clarify this procedure of our analysis in the revised version of the manuscript. Thanks.
Done. See Supplementary materials.
- Also, maybe I am missing something, but I couldn't understand why the number of sequences available per bin is different for the calculation of ∆MT and C. Aren't any two consecutive sequences that are good for the calculation of one of these measures also good for the calculation of the other?
Response: Thank you for pointing this out. Indeed, the number of trials was the same for both analyses, ∆MT and C. We had saved an incorrect variable as number of trials. We will amend the text.
We have re-analyzed the trial number data. The average number of trials per bin both for the ∆MT and C analyses was 109 (9) in the HV and 127 (12) in OCD groups. Although the number was on average larger in the patient group, we did not find significant differences between groups (p = 0.47).
When assessing p(∆T|∆R+) and p(∆T|∆R−) separately, more trials were available for p(∆T|∆R+), 107 (10), than for p(∆T|∆R−), 98 (8). These trial numbers differed significantly (p = 0.0046), but were identical for the ∆MT and C analyses.
Done. Included in Supplementary materials.
Minor comments:
- Not crucial, but maybe for the sake of consistency consider merging the "Self-reported habit tendencies" section and the "Other self-reported symptoms" section, preferably where the latter is currently placed.
Response: We fully understand the referee’s rationale underlying this suggestion. We indeed considered initially presenting the self-reported questionnaires all together, in a last, single section of the results, as suggested by the referee. However, we decided to report the higher habitual tendencies of OCD as an initial set of results, not only because it is a novel and important finding (which justifies it to be highlighted) but also because it is essential to the understanding of some of the remaining results presented.
- In some figure legends the percentage of the interval of the mentioned confidence intervals (probably 95%) is missing. I suggest adding it.
Response: OK, this will be added.
Done
- The NHS abbreviation appears without spelling out the full form.
Response: This will be amended accordingly.
I removed NHS as it is not relevant.
- In p.38 the citation (Rouder et al., 2012) is duplicated (appears twice consecutively).
Response: Thank you for pointing this out. We will amend accordingly.
Done
In the results section:
- The authors mention: "To promote motivation, the total points achieved on each daily training sessions were also shown, so participants could see how well they improved across days". Yet, if the score is based on the number of practices, it may not represent participants improvement in case in some days more practices are performed. I suggest to clarify this point.
Response: The goal of providing the scoring feedback was, as explained in the sentence, to promote motivation and inform the subject about their performance. With this goal in mind, it does not really matter if on one day the score was higher simply because more practices were completed on that day. Participants could easily understand that the score reflected their performance on each practice, so they would realize that the more they practiced, the greater their improvement, and that the score would increase across days of practice. We will amend the sentence to the following: "To promote motivation, the total points achieved on each training session (i.e., practice) were also shown, so participants could see how well they improved across practices and across days".
Done in page 7 and 8.