The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory, and so on). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), that was modulated by their Braille reading speed, as well as strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.
https://doi.org/10.7554/eLife.10762.001
According to most textbooks, our brain is divided into separate areas that are dedicated to specific senses. We have a visual cortex for vision, a tactile cortex for touch, and so on. However, researchers suspect that this division might not be as fixed as the textbooks say. For example, blind people can switch their 'leftover' visual cortex to non-visual purposes, such as reading Braille – a tactile alphabet.
Can this switch in functional organization also happen in healthy people with normal vision? To investigate this, Siuda-Krzywicka, Bola et al. taught a group of healthy, sighted people to read Braille by touch, and monitored the changes in brain activity that this caused using a technique called functional magnetic resonance imaging. According to textbooks, tactile reading should engage the tactile cortex. Yet, the experiment revealed that the brain activity critical for reading Braille by touch did not occur in the volunteers’ tactile cortex, but in their visual cortex.
Further experiments used a technique called transcranial magnetic stimulation to suppress the activity of the visual cortex of the volunteers. This impaired their ability to read Braille by touch, providing clear-cut proof that sighted adults can re-program their visual cortex for non-visual, tactile purposes.
These results show that intensive training in a complex task can overcome the sensory division-of-labor of our brain. This indicates that our brain is much more flexible than previously thought, and that such flexibility might occur when we learn everyday, complex skills such as driving a car or playing a musical instrument.
The next question that follows from this work is: what enables the brain’s activity to change after learning to read Braille? To understand this, Siuda-Krzywicka, Bola et al. are currently exploring how the physical structure of the brain changes as a result of a person acquiring the ability to read Braille by touch.
https://doi.org/10.7554/eLife.10762.002
The current view of neural plasticity in the adult brain sees it as ubiquitous but generally constrained by functional boundaries (Hoffman and Logothetis, 2009). At the systems level, experience-driven plasticity is thought to operate within the limits of sensory divisions, where the visual cortex processes visual stimuli and responds to visual training, the tactile cortex processes tactile stimuli and responds to tactile training, and so on. A departure from this rule is usually reported only during large-scale reorganization induced by sensory loss or injury (Hirsch et al., 2015; Lomber et al., 2011; Merabet and Pascual-Leone, 2010; Pavani and Roder, 2012). The ventral visual cortex, in particular, is activated in blind subjects who read Braille (Büchel et al., 1998; Reich et al., 2011; Sadato et al., 2002, 1996), and lesions of this area impair Braille reading (Hamilton et al., 2000). The visual cortex thus has the innate connectivity required to carry out a complex perceptual task – reading – in a modality other than vision. It is unclear, however, to what extent this connectivity is preserved and functional in the normal, adult brain.
A growing amount of evidence suggests that some form of cross-modal recruitment of the visual cortex could be possible in normal, healthy adults (Amedi et al., 2007; Kim and Zatorre, 2011; Powers et al., 2012; Saito et al., 2006; Zangenehpour and Zatorre, 2010; Merabet et al., 2004; Zangaladze et al., 1999). Nonetheless, the behavioral relevance of these cortical mechanisms remains unclear, especially for complex stimuli. Notably, several experiments failed to find such cross-modal reorganization in sighted subjects, even after extensive training (Kupers et al., 2006; Ptito et al., 2005). One study found cross-modal plastic changes in subjects who were blindfolded for several days (Merabet et al., 2008), but these plastic changes quickly vanished once the subjects removed their blindfolds.
Here, for the first time, we show that large-scale, cross-modal cortical reorganization is a viable, adaptive mechanism in the sighted, adult brain. In our experiment, sighted adults followed a 9-month Braille reading course. The resulting cortical changes were tracked using task-based and resting-state functional Magnetic Resonance Imaging (fMRI) and manipulated with Transcranial Magnetic Stimulation (TMS).
Twenty-nine sighted adults (Braille teachers and professionals, students specializing in the education of the blind, and family members of blind people) took part in the study. Some subjects were familiar with Braille signs visually (some teachers actually knew how to read visual Braille upside-down, as upside-down was the usual orientation at which they viewed their students’ work laid out on school benches). All of them were naive in tactile Braille reading (Appendix 1.1). All the participants completed an intensive, tailor-made, 9-month-long tactile Braille reading course (Materials and methods). The majority progressed significantly in tactile reading, reaching an average performance of 6.20 words per minute read aloud (WPM) (SD=3.94, range = 0–17, Appendix 1.1) at the end of the course. Because the course curriculum included learning to recognize Braille letters by sight, the subjects also improved in reading Braille characters visually (Appendix 1.2).
Before and after the tactile Braille reading course, the subjects took part in an fMRI experiment (Figure 1A–C; 'Materials and methods'), in which they viewed regular visual words, Braille words displayed visually on a screen (visual Braille words) and touched tactile Braille words. Additional control conditions included visual strings of hash symbols (control for visual words), visual strings of pseudo-Braille characters (control for visual Braille), and tactile strings of pseudo-Braille characters (control for tactile Braille) (Figure 1B and 'Materials and methods'; a pseudo-Braille character contains six dots and has no meaning). The subjects could not see any of the tactile stimuli. Subjects also performed a control mental imagery fMRI experiment ('Materials and methods', Figure 1—figure supplements 1 and 2, Appendix 1.5).
Whole-brain fMRI analysis showed that after the course, activation to tactile Braille reading relative to the tactile control condition peaked within the Visual Word Form Area (VWFA), (Figure 1D, peak MNI = -45 -58 -12, Z = 5.18), a ventral visual stream (Van Essen, 2005) region known to be involved in visual reading (Dehaene and Cohen, 2011; Price and Devlin, 2011; Szwed et al., 2011). Additional activations were observed in the intraparietal sulcus, supramarginal gyrus, and frontal areas (Table 1). We observed the same increase in VWFA activation when we contrasted tactile Braille activation before and after the course [tactile Braille vs. tactile control x (after course-before course)] (Figure 1E, peak: MNI = -45 -64 -12, Z=5.02). This contrast also revealed clusters in the middle occipital gyrus, cuneus (BA18), the left superior temporal gyrus, and frontal areas (Table 1). To determine whether these changes were due to a general activity increase for all reading conditions, we computed a whole-brain ANOVA interaction analysis of the signal change following the course for reading visual words, visual Braille words and tactile Braille words versus their respective control tasks ([tactile Braille vs. tactile control - (visual Braille vs. visual Braille control+visual words vs. visual words control)] x (after course-before course)) (see 'Materials and methods'). Relative to visual reading, tactile reading led to increased activation in the VWFA (Figure 1F; MNI -45 -61 -12, Z = 4.05) and other parietal and frontal areas (Table 1).
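The whole-brain ANOVA interaction above reduces to a contrast-weight vector over the design-matrix regressors. The sketch below is a simplified illustration with a hypothetical regressor ordering, not the exact analysis-package setup used in the study:

```python
import numpy as np

# Hypothetical regressor order within one session:
# tactile Braille (TB), tactile control (TBc), visual Braille (VB),
# visual Braille control (VBc), visual words (VW), visual words control (VWc).
tb_vs_ctrl = np.array([1, -1, 0, 0, 0, 0], dtype=float)
vb_vs_ctrl = np.array([0, 0, 1, -1, 0, 0], dtype=float)
vw_vs_ctrl = np.array([0, 0, 0, 0, 1, -1], dtype=float)

# Tactile reading vs. the two visual reading conditions,
# each taken relative to its own control.
modality = tb_vs_ctrl - (vb_vs_ctrl + vw_vs_ctrl)

# Crossed with session (after course minus before course): the
# before-course regressors get the opposite sign of the after-course ones.
interaction = np.concatenate([-modality, modality])

# A valid interaction contrast has weights that sum to zero.
assert np.isclose(interaction.sum(), 0.0)
```

A voxel scores high on this contrast only when the tactile-reading advantage over its control grows across the course more than the visual-reading advantages do, which is exactly the specificity the analysis tests.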
These results demonstrate that this pattern of activation in the visual cortex was specific to tactile reading. Indeed, we found no increase in activation to visual words following the course. The only increases in activation to visual Braille were found in the default mode network nodes in the parietal and prefrontal cortices (Figure 1—figure supplement 3C, Table 2, Appendix 1.3). Importantly, the analyses mentioned in this section (Figure 1D–F), including the whole-brain ANOVA, did not reveal any activation in the primary and secondary somatosensory cortices, even at an exploratory threshold of p=0.01 voxel-wise.
Our subjects' progress in tactile reading was not homogeneous, with some subjects unable to learn Braille at all (0 WPM) and others reaching a speed of 17 WPM (Appendix 1.1). We therefore used regression to ask which fMRI responses to tactile Braille reading were modulated by the subjects’ tactile reading speed. This regression analysis revealed one significant cluster, located in the left inferior occipital gyrus (Figure 1G; MNI = -45 -76 -12, Z = 3.69, Table 3). A similar result was obtained when we correlated single letter recognition speed with tactile Braille activations, which further supports the significance of the visual cortex in learning to read Braille (Table 3, Appendix 1.4). We found no such correlations for other reading speed measures (e.g. visual Braille reading speed) or imagery activations (e.g. imagining tactile Braille; see Appendix 1.5). These observations indicate that the ventral visual activations for tactile reading cannot be explained as a by-product of imagery.
Visual words, visual Braille and tactile Braille all elicited activity in the VWFA. How similar are the neural representations of these three scripts? The similarity of neural representations can be studied with multivariate pattern analysis-representational similarity analysis (MVPA-RSA). This method is based on the premise that stimuli that share similar neural representations will generate similar voxel activation patterns (Kriegeskorte et al., 2008; Rothlein and Rapp, 2014). Using MVPA-RSA ('Materials and methods'), we found that the two Braille conditions (visual and tactile) had the most similar activation patterns in the VWFA (Figure 2A,r=0.48), despite very large differences in the magnitude of the two activations (see Figure 2B). The correlations of the two Braille conditions with visual words were much weaker (Figure 2A, 'Materials and methods' and Appendix 1.6). Despite a difference in modality (visual vs. tactile), the neural representations of the two versions of Braille script were partially similar and distinct from the well-established representation for visual words.
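In its simplest form, the pattern similarity underlying MVPA-RSA is a correlation between ROI-wise voxel activation patterns for two conditions. The toy sketch below (synthetic data, a hypothetical 1000-voxel ROI) illustrates why two conditions that share a representation yield a higher pattern correlation than two unrelated ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def pattern_similarity(p1, p2):
    """Pearson correlation between two voxel activation patterns
    (one beta estimate per voxel within the ROI)."""
    return np.corrcoef(p1, p2)[0, 1]

# Synthetic patterns: visual and tactile Braille share a common
# component; visual words are drawn independently.
n_voxels = 1000
shared = rng.normal(size=n_voxels)
visual_braille = shared + rng.normal(size=n_voxels)
tactile_braille = shared + rng.normal(size=n_voxels)
visual_words = rng.normal(size=n_voxels)

r_braille = pattern_similarity(visual_braille, tactile_braille)
r_words = pattern_similarity(visual_braille, visual_words)
assert r_braille > r_words  # shared representation => higher similarity
```

Note that this similarity is computed over the spatial *pattern* of responses, which is why two conditions can be highly similar in pattern even when their overall activation magnitudes differ greatly, as was the case for visual versus tactile Braille.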
A region-of-interest (ROI) analysis was then applied (see 'Materials and methods'). In the VWFA (Figure 2B; all ROIs are in the left hemisphere), following the Braille course, the response to tactile Braille words changed from de-activation to positive activation, resulting in a significant difference between tactile words and their control (interaction: F(1,28)=18.5; p<0.001). This emerging difference was also driven by a decrease in activation to control tactile stimuli. Post-hoc t-tests (one for tactile words and another for tactile control, before vs. after course) showed that the two effects were of similar magnitude and neither of them reached statistical significance on its own. The VWFA also showed strong responses to visual Braille words, similar to previously reported responses to novel visual alphabets (e.g. Szwed et al., 2014; Vogel et al., 2014). These responses remained unchanged throughout the course. The lateral occipital area (Figure 2C) showed a similar emergence of responses to tactile Braille words after the course.
Several experiments have shown changes in primary somatosensory (SI) cortex activation to tactile stimuli after tactile training, in both humans (e.g. Pleger et al., 2003) and rodents (e.g. Guic et al., 2008). In the current study, however, there was no significant change in activation to tactile Braille words and no after-course differences between tactile Braille words and the tactile control (Figure 2D). In the secondary somatosensory cortex (MNI: -51 -25 15), we found only a non-specific drop in activation to all tactile stimuli (F(1,28)=7.62, p=0.01), with no difference between tactile Braille words and the tactile control after the course. A drop in activation for the control condition was observed in the posterior attentional network (intraparietal sulcus, IPS, Figure 2E, t(28)=2.76, p=0.01). Those activation drops in the somatosensory cortices and in IPS are the most likely cause behind the activation drop for control tactile stimuli observed in the VWFA and LO (Figure 2B–C). In the primary motor cortex (MNI: -39 -25 59), we observed no such drop, which suggests that the finger-movement patterns across sessions and conditions remained unchanged. Thus, the Braille reading course led to an increase in activation to tactile Braille words in visual areas but not in the somatosensory cortex.
Learning can impact the resting-state activity of the brain (e.g. Lewis et al., 2009). Following the course, we observed an increase in resting-state functional connectivity between the VWFA seed and the left primary somatosensory cortex (Figure 3A, red; p<0.001, corrected for multiple comparisons; Z=4.57; MNI: -57 -24 45; 'Materials and methods'). This increase was the only statistically significant positive effect found. The same comparison showed a decrease in the VWFA’s functional connectivity with other visual areas, bilaterally. Furthermore, after the course, the VWFA-S1 functional connectivity level was correlated with the subjects’ progress in tactile Braille reading speed (in the month preceding the after-course scan, Figure 3; r(27)=0.49, p=0.007). The VWFA thus increased its coupling with the somatosensory cortex while decreasing its coupling with other visual areas. This VWFA-S1 functional connectivity was behaviorally relevant for tactile reading and was likely to be dynamically modulated in a relatively short learning period (similar to: Lewis et al., 2009) (see also Appendix 1.7).
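Seed-based resting-state connectivity of this kind is typically computed by correlating the seed region's time course with every voxel's time course and Fisher z-transforming the result for group statistics. A minimal sketch with synthetic time courses (282 volumes, as in our resting-state run):

```python
import numpy as np

def seed_connectivity_z(seed_ts, voxel_ts):
    """Pearson correlation of the seed time course with each voxel's
    time course, Fisher z-transformed for group-level statistics."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    r = seed @ vox / len(seed)   # mean product of z-scores = Pearson r
    return np.arctanh(r)         # Fisher z-transform

rng = np.random.default_rng(1)
n_vol = 282  # volumes in the resting-state run
seed = rng.normal(size=n_vol)   # stand-in for the VWFA seed time course

# Two hypothetical voxels: one coupled to the seed (e.g. an S1 voxel
# after the course), one independent of it.
coupled = seed + rng.normal(size=n_vol)
independent = rng.normal(size=n_vol)
z = seed_connectivity_z(seed, np.column_stack([coupled, independent]))
assert z[0] > z[1]
```

Real pipelines add preprocessing (motion regression, band-pass filtering) before this step; the sketch shows only the core correlation-and-transform logic.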
Finally, to test the role of the VWFA in Braille reading, we performed a repetitive Transcranial Magnetic Stimulation (rTMS) experiment in which nine subjects were tested after the course in tactile Braille reading (reading speeds: 3–17 WPM). rTMS was applied to the VWFA and to two control sites – the lateral occipital area and the vertex. Both the VWFA and the lateral occipital area were localized using the individual subjects’ fMRI results ('Materials and methods'). Similar to a previous visual reading study (Duncan et al., 2010), we chose the lateral occipital area as an additional, negative control site, because TMS to this region evokes muscle contractions indistinguishable from VWFA stimulation. rTMS was applied while subjects performed a lexical decision task on words and pseudowords written in tactile Braille (see Figure 4A; 'Materials and methods'). Based on a previous visual reading study (Duncan et al., 2010), we expected that stimulation to the VWFA during the performance of the task would decrease accuracy for lexical decisions on Braille words. As predicted, TMS to the VWFA decreased the accuracy of reading Braille words (t(8)=3.02, p=0.016, Figure 4B). This TMS result shows that the VWFA is necessary for reading tactile Braille words.
We also tested for effects of TMS on control stimuli (Braille pseudowords) and control sites (lateral occipital area, vertex) (Figure 4B) and none of these tests resulted in significant effects (VWFA, Braille pseudowords: t(8)=0.03, p=0.977; lateral occipital area, Braille words: t(8)=0.18, p=0.859; lateral occipital area, Braille pseudowords: t(8)=0.03, p=0.979; vertex, Braille words: t(8)=1.26, p=0.243; vertex, Braille pseudowords: t(8)=0.02, p=0.986). However, given the small number of subjects in this part of our study, our TMS experiment was underpowered to statistically verify the specificity of the TMS effect: ANOVAs testing for specificity of our effect of interest showed a trend for an interaction between the stimulus type and TMS for the VWFA (F(1,8)=3.59, p=0.095), no interaction between the stimulus type, TMS, and the stimulation site (F(2,16)=0.41, p=0.580), and no interaction between TMS and the stimulation site for Braille words (F(2,16)=1.5, p=0.253).
Several experiments have already indicated that in some contexts, the sighted’s ventral visual cortex can contribute to the perception of tactile (reviewed in: Amedi et al., 2005) or auditory (Amedi et al., 2007) stimuli. It is also known that regions higher up in the sensory processing hierarchy, notably the antero-medial parts of the ventral temporal cortex (MNI y>-40), can host multisensory, abstract object representations (e.g. Fairhall and Caramazza, 2013; Kassuba et al., 2014). The left fusiform gyrus in particular is suggested to process object-specific crossmodal interactions (Kassuba et al., 2011). Our results demonstrate that the occipitotemporal visual cortex (VWFA, MNI y≈-60) can represent stimuli in a modality other than vision. The fact that TMS to the VWFA can disrupt tactile reading demonstrates the importance of this representation for sensory processing.
The ROI analysis (Figure 2C) revealed that the lateral occipital area (LOA) showed a course-related increase in activity to tactile words similar to that of the VWFA. However, TMS applied to the LOA did not disturb tactile reading (Figure 4). While the LOA is activated in various visual word recognition tasks (Duncan et al., 2009; Wright et al., 2008), lesions of this area seem not to affect reading itself (Cavina-Pratesi et al., 2015; Milner et al., 1991; Philipose et al., 2007). The increase of activity in both the LOA and the VWFA for tactile reading thus suggests that visual and tactile reading share similar neural correlates along the ventral visual stream. However, the exact function of the LOA in reading seems to be accessory rather than critical.
The ventral visual cortex was also activated when subjects heard auditory words and then imagined reading them in tactile Braille (Figure 1—figure supplement 2, Table 4). We also observed robust object activations in the object imagery task (ventral visual stream, left: MNI -45 -67 -9, Z=6.68, right: MNI 45 -67 -5, Z=5.94) which confirms that subjects successfully engaged in imagery during the experiment.
There are several reasons to rule out the possibility that the visual activation for tactile reading is a side-effect of mental imagery. First, if the visual activations were only a by-product of mental imagery, TMS to the VWFA would not interfere with Braille reading; yet, it did (Figure 4B). Second, training did not produce any changes in the activations to imagining tactile Braille (Appendix 1.5). Third, activations to imagining Braille in the VWFA did not correlate with tactile Braille reading proficiency (Appendix 1.4). Thus, visual cortex activations for tactile reading cannot be explained as a by-product of visual imagery. Instead, they constitute the signature of a new, tactile script representation emerging in the visual cortex.
Cortical reorganization that crosses the sensory boundaries is prominent in blind and deaf humans (Hirsch et al., 2015; Pavani and Roder, 2012; Sadato et al., 2002) and in congenitally deaf cats (Lomber et al., 2011). In the latter, the deprived auditory areas play a vital role in the deaf cats’ superior visual perception. The above-mentioned studies used the same paradigms in non-deprived humans and cats, but did not find any signs of cross-modal reorganization (e.g. Figure 7 in Sadato et al., 2002). One could have thus expected that learning to read by touch would lead to the emergence of a tactile reading area in the somatosensory cortex. Yet, our study produced a different result. Specific responses to tactile reading emerged not in somatosensory areas, but in visual areas.
This result might seem incompatible with a large body of data showing that tactile training leads to changes in somatosensory activation to tactile stimuli (Guic et al., 2008; Kupers et al., 2006; Pleger et al., 2003; Ptito et al., 2005), or decision-level frontal cortex changes (Sathian et al., 2013), but no visual cortex changes. However, all the above-mentioned experiments used simple stimuli, such as gratings, and short learning periods. Our experiment was longer (9 months) and used complex stimuli: entire Braille words. Experiments that study learning-related plasticity at multiple time points (Lövdén et al., 2013) suggest that at the initial stage of Braille learning, the somatosensory cortex might have increased its response to Braille words. Then, as the effects of early sensory learning consolidated in the somatosensory cortex, the cortical focus of learning shifted elsewhere, in our case to the ventral visual stream.
Previous studies (Büchel et al., 1998; Lomber et al., 2011; Merabet and Pascual-Leone, 2010; Sadato et al., 2002, 1996) have suggested that cross-modal plasticity is possible mainly as a result of massive sensory deprivation or injury. Our results demonstrate that given a long training period and a complex task, the normal brain is capable of large-scale plasticity that overcomes the division into separate sensory cortices. The exact mechanisms of this plasticity might of course be different in deprived and non-deprived subjects.
Earlier reports have already hinted at the possibility of cross-modal plasticity in the normal brain. Indeed, it was shown that some parts of the visual cortex can be activated during auditory or tactile tasks (Amedi et al., 2007; Kim and Zatorre, 2011; Powers et al., 2012; Saito et al., 2006; Zangenehpour and Zatorre, 2010). Professional pianists, for example, were shown to activate their auditory cortex when viewing mute video recordings of a piano key being pressed (Haslinger et al., 2005). The behavioral relevance of such activations, however, was demonstrated only for simple tactile stimuli: grating orientation (Zangaladze et al., 1999) and distance judgment (Merabet et al., 2004). Our study used a controlled, within-subject design and precise behavioral measures supplemented with a causal method, TMS. Its results suggest that large-scale plasticity is a viable mechanism recruited when learning complex skills. This conclusion is congruent with a recent comparative study showing that the morphology of the cerebral cortex is substantially less genetically heritable in humans than in chimpanzees (Gomez-Robles et al., 2015). Such relaxed genetic control might be the reason for Homo sapiens’ increased learning abilities and greater behavioral flexibility.
Despite evidence for cross-modal plasticity of the human brain, the dominant view still describes it as necessarily constrained by the sensory boundaries (e.g. Figure 18–2 in: Kandel et al., 2012). Our study provides clear-cut evidence that learning-related neuroplasticity can overcome the sensory division. This calls for a re-assessment of our view of the functional organization of the brain.
Thirty-six subjects took part in the first fMRI study (32 females, 4 males; mean age = 29.17). All were right-handed, fluent in Polish and had normal or corrected-to-normal vision.
They were either Braille teachers/professionals (14 subjects), special education students specializing in blindness and related disabilities (11 subjects), or close relatives of blind people (4 subjects). All subjects had completed or were close to completing higher education. All subjects showed high motivation to pursue the course. All but three subjects were familiar with Braille visually (see Appendix 1.2). However, all were naive in tactile Braille reading, which was verified by the baseline testing session (see Appendix 1.1). To ensure appropriate statistical power, prior to data collection we decided to recruit at least 30 subjects. The research described in this article was approved by the Committee for Research Ethics of the Institute of Psychology of the Jagiellonian University (decisions 28/06/2012 and 12/03/2014). Informed consent and consent to publish were obtained from each of the participants in accord with best-practice guidelines for MRI and TMS research.
During the first fMRI examination, one subject was eliminated from the study due to a medical condition discovered by the neurologist supervising the study. Right after the first session, two subjects resigned for personal reasons. Another four subjects resigned during the tactile Braille reading course. Thus, 29 subjects completed the tactile Braille course and were included in the data analysis.
At the end of the course, nine of the subjects mentioned above, who achieved tactile reading speeds between 3 and 17 WPM, participated in the TMS experiment. All of them were female, and their ages ranged from 22 to 36 (M = 25.78, SD = 4.44).
For the purpose of this experiment, we developed a made-to-measure tactile Braille course. A detailed description of the course will be published elsewhere. Briefly, the course was based on individual exercises, to be performed while blindfolded. Subjects were instructed to train on one A4 sheet every day (approximately 20 min). The subjects’ progress was monitored by qualified Braille teachers during monthly meetings. The course lasted 9 months. Teaching began with simple tactile recognition exercises. Next, Braille letters were gradually introduced. In the second half of the course, subjects were encouraged to focus mainly on whole-word reading. Although our Braille course was focused on tactile reading, all the subjects checked their exercises visually, which required visual Braille reading training on a daily basis. Thus, we assumed that they would also progress in visual Braille reading.
The subjects' tactile reading speed was tested monthly, starting from the 5th month of the course. The test consisted of 40 3–6 letter words (frequency of occurrence higher than 1 per million, according to SUBTLEX-PL; Mandera et al., 2014), to be read aloud blindfolded. The task was to read aloud as many words as possible in 60 s. The reading strategy was not specified. If a word was read inaccurately, it was not counted in the final score. Five different versions of the test were used during the course, each containing different words. Because we expected that word length would have a significant impact on reading speed (Veispak et al., 2013, 2012), the word lists were ordered such that in each version of the test, words of a specific length appeared in exactly the same order (e.g. a three-letter word followed by a five-letter word, and so on). The distribution of word frequency was also similar across all versions of the test. Additionally, during each testing session, we measured the number of single Braille letters identified in one minute. The testing procedure and timing were identical to those in the whole-word reading test. The task was to read aloud as many letters as possible in 60 s. The order of the letters was counterbalanced between different versions of the test.
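The scoring rule above (only accurately read words count, normalized to one minute) can be sketched as follows; the word list in the example is hypothetical:

```python
def wpm_score(responses, duration_s=60.0):
    """responses: list of (word, read_correctly) pairs from one test run.
    Words read inaccurately do not count toward the final score."""
    n_correct = sum(1 for _, ok in responses if ok)
    return n_correct * 60.0 / duration_s

# Example: three words attempted in 60 s, one read inaccurately -> 2 WPM.
assert wpm_score([("dom", True), ("kot", True), ("okno", False)]) == 2.0
```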
The speed and accuracy of visual Braille reading were measured with a lexical decision task. The subjects were instructed to visually read Braille strings appearing on the screen and decide, as fast and as accurately as possible, whether they formed a valid Polish word or not and to indicate their choice with a mouse button. Subjects completed the lexical decision task at the beginning and at the end (i.e. in the 9th month) of the course.
The complete set of stimuli consisted of 320 items (160 Polish words and 160 pseudowords). All words were of low-to-moderate frequency (from 1 to 10 occurrences per million). The neighborhood size, as defined by the OLD20 measure (Yarkoni et al., 2008) was equated between words and pseudowords. We used SUBTLEX-PL to obtain the psycholinguistic characteristics of the items (Mandera et al., 2014).
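The OLD20 measure (Yarkoni et al., 2008) is the mean Levenshtein (edit) distance from a word to its 20 closest orthographic neighbours in the lexicon. A minimal sketch, with a toy lexicon in place of the full SUBTLEX-PL word list:

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def old_n(word, lexicon, n=20):
    """Mean edit distance to the n closest neighbours (OLD20 when n=20)."""
    dists = sorted(levenshtein(word, w) for w in lexicon if w != word)
    dists = dists[:n]
    return sum(dists) / len(dists)

assert levenshtein("kitten", "sitting") == 3
```

Matching words and pseudowords on OLD20 ensures that any lexicality effect cannot be attributed to one set simply looking more "word-like" than the other.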
Each trial began with a fixation cross displayed at the center of the screen for 500 ms, followed by a white screen for 150 ms. Subsequently, one item (word or pseudoword) was presented, which remained on the screen for 8000 ms or until a response was given. Afterwards, a white screen was displayed for 1000 ms, and the next trial started. Items were presented in a random order. Responses were provided by pressing the left or the right button of the computer mouse. Target buttons for words and pseudowords were counterbalanced between participants.
All fMRI data were acquired on the same Siemens MAGNETOM Tim Trio 3T scanner (Siemens, München, Germany). The data from the fMRI reading experiment were collected using a 12-channel head coil. Resting-state fMRI data were acquired with a 32-channel head coil.
All data were collected using the same scanning parameters. We used a gradient-echo planar imaging sequence sensitive to blood oxygen level-dependent (BOLD) contrast (33 contiguous axial slices, phase encoding direction=posterior-anterior, 3.6 mm thickness, TR=2190 ms, angle=90°, TE=30 ms, base resolution=64, phase resolution=200, matrix=64x64, no iPAT).
The fMRI reading experiment was divided into four runs: the first two composed the main reading experiment (282 functional images for each run); the latter two composed the control imagery experiment (346 images in the first run, and 353 images in the second run). The resting-state data were collected in a separate scanning session, in a single run (282 volumes). In each scanning session, T1-weighted images were also acquired for anatomical localization.
To make acquisition conditions as similar as possible between the before- and after-training scans, in the before-course scan we measured each subject’s head position relative to the head coil. Then, in the after-course scan, we reproduced the previously measured position. In addition, we used a standard Siemens Automatic Alignment scout MRI sequence before the two scans.
The fMRI reading experiment consisted of two parts: the main experiment (Figure 1) and the control mental imagery experiment (see Figure 1—figure supplement 1). In both experiments, we used a custom-made fiberglass table for the presentation of tactile stimuli. The table was designed in a way that prevented the subjects from seeing the stimuli that were placed on it. To minimize the time needed for the presentation of tactile stimuli, we used a block design in both parts of the reading experiment. Stimulation was programmed in Presentation (Neurobehavioral Systems, San Francisco, CA).
In both experiments, the blocks were separated by 13–20 s rest periods. Each rest period started with an ascending sound, which signaled to the subjects that they should raise their fingers from the fiberglass table. Then, the experimenter switched the cardboard with the stimuli. To equalize all experimental conditions, an empty cardboard was slipped onto the table for visual trials as well. At the end of the rest period, subjects heard a 500 ms descending sound signifying that they should put their fingers back down on the table. (In the imagery experiment, this sound was preceded by the auditory cue, e.g. 'imagine objects'.) To prevent them from touching the tactile stimuli prematurely, the subjects were asked to refrain from touching the cardboard until the metronome sound (main experiment) or the auditory cue (control experiment) was heard, 4–7 s later.
In the main experiment, subjects were presented with Polish words and control stimuli. We used seventy-two 3- to 6-letter-long Polish words with a frequency higher than 1 occurrence per million (SUBTLEX-PL; Mandera et al., 2014). All had low imageability scores (100–400 according to the MRC database; Wilson, 1988). There were six experimental conditions, each repeated four times: visually displayed Braille words (VB), visually displayed regular visual words (VW), words in tactile Braille (TB), and suitable control conditions. The words were counterbalanced between the conditions. The control conditions were: strings of 4–6 meaningless Braille signs composed of all 6 Braille dots, presented either tactually or visually, and strings of 4–6 hash signs (#) presented visually. TB words and tactile control stimuli were presented on cardboard sheets slipped onto the fiberglass table by the experimenter.
Each block contained 8 stimuli. Each stimulus was presented for 3500 ms, followed by a 500 ms fixation dot. The subjects heard a metronome sound at the onset of every stimulus. For the tactile conditions, the sound indicated that the subject should move his/her fingers to the next stimulus. We introduced this pace-setting manipulation because for tactile conditions, all 8 stimuli in a given block were presented on a single cardboard sheet. This pace-setting manipulation ensured that all eight stimuli were processed. Total block duration was 32 s.
The control imagery experiment was run immediately after the main experiment. In this experiment, the subjects first heard auditory words and then had to either read/touch or imagine them in TB, VB, or VW (Figure 1—figure supplement 1A,B,D). Three of the conditions (TB, VB, and VW) included the same stimulus types as the main experiment. The fourth condition (tactile objects) required the participants to touch objects: a plastic knife, a toothbrush, a plastic laundry clip, and a paintbrush, which were attached to a cardboard sheet and presented in a manner similar to the other tactile stimuli. Each of the four conditions was presented in two tasks (Figure 1—figure supplement 1A). In the first one, the tactile/visual stimulus was present, and the subject had to either read or touch it. In the second one, no tactile/visual stimulus was present, and the subject had to imagine it (Figure 1—figure supplement 1D).
Each block started with an auditory instruction indicating the task of the following block ('read', 'touch' or 'imagine') and the condition to be presented or imagined (VW, VB, TB, or objects; Figure 1—figure supplement 1B). There were 6 stimuli per block, except for the object conditions, which included a single stimulus only. Each stimulus was preceded by a 500 ms fixation dot and by its auditory description (e.g. 'toothbrush'). In the imagery conditions, this description was the only stimulus presented (the fixation cross remained on the screen). In the reading/touching condition, the auditory description was followed either by VW or VB words presented visually or by TB words or objects presented tactually. The stimulus presentation/imagining time was 3500 ms for words and 24,000 ms (the entire block) for objects.
A 10-min long resting-state scan was performed according to standard protocols. During the scan, the subjects were asked to fixate on a point displayed on the screen and to relax, but to refrain from sleeping.
The data from the fMRI reading experiment were analyzed using SPM8 software (www.fil.ion.ucl.ac.uk/spm/software/spm8/). Data preprocessing included: 1) slice timing, 2) realignment of all EPI images from the before-course and the after-course scans together, 3) coregistration of the anatomical image from the first time point to the mean EPI image, 4) segmentation of the coregistered anatomical image, 5) normalization of all images to MNI space and 6) FWHM spatial smoothing (5 mm). The signal time course for each subject was modelled within a general linear model (Friston et al., 1995), derived by convolving a canonical hemodynamic response function with the time series of the experimental stimulus categories, with estimated movement parameters as regressors. Statistical parametric maps of the t statistic resulting from linear contrasts of each stimulus type minus baseline were generated and stored as separate images for each subject. Contrast images were then entered into an ANOVA model for random group effect analysis. We used first-level contrasts of each reading condition vs. its respective control, after vs. before the tactile Braille course, to assess the interaction between time point (before and after the course) and reading condition (Figure 1F). The SPM8 paired t-test was applied in pairwise comparisons of the reading conditions and their respective controls (e.g. tactile Braille vs. tactile Control) and in pairwise comparisons across the two time points (before and after the Braille course).
In addition to the GLM-based activation analysis, we used SPM8 regression to examine 1) how tactile Braille reading proficiency modulated activations during tactile Braille reading (Figure 1G, Figure 1—figure supplement 3D) and during tactile and visual Braille imagining and 2) how visual Braille reading speed influenced activations in tactile Braille reading.
We applied a voxel-wise threshold of p<0.005 and a p<0.05 threshold for cluster extent, corrected for multiple comparisons using REST AlphaSim (1000 Monte Carlo simulations), across the whole brain, unless stated otherwise. Similar results were observed at a voxel-wise threshold of p<0.001, though not necessarily at cluster level-corrected levels of significance.
We applied representational similarity analysis (RSA) (Kriegeskorte et al., 2008) to compare the neural code of the three tested conditions: visual words, visual Braille, and tactile Braille. The individual visual word form area (VWFA) functional ROIs were defined as the set of 100 voxels with the highest t-values in the contrast of visual word reading vs. visual word control before the course, within the broad limits of the possible VWFA, that is, within a box of MNI coordinates spanning -50 < x < -25, -68 < y < -40, -25 < z < -5. Inside this functional ROI, condition-related activity patterns were extracted for the three conditions of interest using the corresponding unsmoothed contrast maps: visual reading (visual words vs. control), visual Braille reading (visual Braille vs. control) and tactile Braille reading (tactile Braille vs. control), all after the course. Thus, for each voxel in the ROI, we extracted beta values for each reading condition. Those beta values were then correlated pairwise (visual words x visual Braille, visual Braille x tactile Braille, and tactile Braille x visual words). As a result, for each participant, we obtained a neural similarity matrix describing how representationally similar the activity patterns associated with the three conditions were. The resulting correlation coefficients for each condition pair were Fisher r-to-z transformed (z(r)) and compared on the group level in a one-way repeated measures ANOVA with a factor of script pair (3 levels: tactile Braille and visual Braille, tactile Braille and visual words, and visual Braille and visual words). Simple effects were analyzed using post hoc tests with Bonferroni correction. For simplicity of interpretation, in the main text we report the correlation coefficients (Andrews-Hanna et al., 2007).
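Computationally, the RSA step within the 100-voxel VWFA ROI reduces to pairwise Pearson correlations of per-voxel beta patterns followed by the Fisher r-to-z transform. A minimal sketch with synthetic patterns (the condition names and toy data below are illustrative, not the study's data):

```python
import numpy as np

def fisher_z(r):
    """Fisher r-to-z transform, applied before group-level statistics."""
    return np.arctanh(r)

def rsa_similarity(betas):
    """Pairwise Pearson correlations between condition beta patterns.

    betas: dict mapping condition name -> 1-D array of per-voxel betas
    (one value per voxel in the 100-voxel VWFA ROI).
    Returns a dict mapping condition pairs -> Fisher-z similarity.
    """
    names = list(betas)
    sims = {}
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            r = np.corrcoef(betas[names[i]], betas[names[j]])[0, 1]
            sims[(names[i], names[j])] = fisher_z(r)
    return sims

# Toy example: 100-voxel patterns in which the two Braille conditions
# share a common component and visual words do not.
rng = np.random.default_rng(0)
base = rng.normal(size=100)
betas = {
    "tactile_braille": base + 0.5 * rng.normal(size=100),
    "visual_braille": base + 0.5 * rng.normal(size=100),
    "visual_words": rng.normal(size=100),
}
sims = rsa_similarity(betas)
```

On such data the tactile Braille / visual Braille pair yields the highest similarity, mirroring the pattern reported for the real contrasts.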
To avoid double-dipping (Kriegeskorte et al., 2009), the ROIs were extracted from the contrast of visual word reading before the course, whereas the contrasts used for RSA were all taken after the course. Results similar to those presented in Figure 2A were obtained from the contrast of all reading conditions vs. their controls and from purely anatomically defined ROIs.
To avoid double-dipping (Kriegeskorte et al., 2009), in the ROI analysis of the main reading experiment, all ROIs were defined (Figure 2B–E) based on contrasts from the separate imagery experiment (Figure 1—figure supplement 1A).
ROIs were defined as 3 × 3 × 3 voxel cubes positioned at the peak of the relevant imagery experiment contrast, masked by an anatomical mask for the region in question (see below). The resulting peak MNI coordinates for Figure 2B–E are shown in the captions of each ROI subplot. For Figures 2B and 2E, we used the visual word reading contrast. In the case of Figure 2B, the y coordinate of the ROI was tethered at y=-57, the canonical y coordinate of the VWFA (Cohen et al., 2002), to avoid a bias toward non-specific visual activation. Such a bias would otherwise have led to the selection of a much more posterior ROI (the imagery experiment did not contain visual control stimuli that could have been used to correct this bias). For Figure 2C–D and for the secondary somatosensory (SII) and primary motor (MI) cortex results reported in the text, we used the object touch contrast minus the resting baseline. For Figure 2C (lateral occipital tactile-visual area; Amedi et al., 2001), this contrast was further constrained by an anatomical mask of BA37. For the primary somatosensory cortex (Figure 2D), the activation was additionally constrained by a 15 mm sphere centered on coordinates reported in the literature as corresponding to the part of the primary somatosensory cortex that hosts the finger representation modified during tactile training (MNI -54, -20, 48; Pleger et al., 2003; similar ROI results were obtained for other SI definitions, such as constraining by a simple anatomical mask of the primary somatosensory cortex). For the secondary somatosensory cortex (see main text), we used a mask made by merging Brodmann Areas 40 and 43 (WFU PickAtlas, http://www.fil.ion.ucl.ac.uk/spm/ext/) and further constraining them to the ceiling of the lateral sulcus, where the secondary somatosensory cortex is located (parietal operculum – see Eickhoff et al., 2006; Ruben et al., 2001). The error bars in Figure 2B–E represent the SEM across subjects after subtraction of the individual subjects' mean.
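The peak-centred cube ROI definition above can be sketched as follows; the toy volume and mask stand in for an SPM contrast map and an anatomical mask, and a 3 × 3 × 3 voxel cube is assumed:

```python
import numpy as np

def cube_roi(stat_map, mask, size=3):
    """Return the masked peak and the slices of a size^3 voxel cube around it.

    stat_map: 3-D array of contrast t-values; mask: boolean array of the
    same shape (e.g. an anatomical mask of BA37). A minimal sketch of the
    'cube at the masked peak' ROI definition; SPM-specific details omitted.
    """
    masked = np.where(mask, stat_map, -np.inf)   # exclude out-of-mask voxels
    peak = np.unravel_index(np.argmax(masked), stat_map.shape)
    half = size // 2
    slices = tuple(
        slice(max(p - half, 0), min(p + half + 1, dim))
        for p, dim in zip(peak, stat_map.shape)
    )
    return peak, slices

# Toy volume with a known hotspot inside the mask.
vol = np.zeros((10, 10, 10))
vol[5, 6, 4] = 7.0
mask = np.zeros_like(vol, dtype=bool)
mask[3:8, 3:8, 3:8] = True
peak, roi = cube_roi(vol, mask)
```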
Data Processing Assistant for Resting-State fMRI (DPARSF; Chao-Gan and Yu-Feng, 2010) and SPM8 (www.fil.ion.ucl.ac.uk/spm/software/spm8/) were used to process the data. The first 10 volumes of each subject’s scan were discarded for signal stabilization and for subjects’ adaptation to scanner noise. Then, slice-timing correction and head-motion correction were applied. The magnitude of participant head motion was quantified by computing mean relative displacement (Van Dijk et al., 2012) and mean frame displacement (Power et al., 2012) measures. Both measures showed no difference in the magnitude of head motion between the first and second scanning session. T1 images were segmented, and both anatomical and functional images were normalized to MNI space.
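The mean frame displacement measure (Power et al., 2012) sums the absolute volume-to-volume changes of the six realignment parameters, with rotations converted to millimetres of arc; the 50 mm head-radius convention below is the value used by Power and colleagues, not one stated in this paper:

```python
import numpy as np

def frame_displacement(motion, head_radius=50.0):
    """Frame displacement per volume (Power et al., 2012).

    motion: (n_volumes, 6) array of realignment parameters -
    3 translations in mm and 3 rotations in radians. Rotations are
    converted to arc length on a sphere of `head_radius` mm.
    Returns an array of length n_volumes (first value is 0).
    """
    params = np.asarray(motion, dtype=float).copy()
    params[:, 3:] *= head_radius            # radians -> mm of arc
    deltas = np.abs(np.diff(params, axis=0))
    return np.concatenate([[0.0], deltas.sum(axis=1)])

# A volume whose FD exceeds 0.5 mm would receive its own nuisance
# regressor in the connectivity preprocessing described below.
motion = np.zeros((5, 6))
motion[3, 0] = 0.6   # 0.6 mm translation at volume 3 (illustrative)
fd = frame_displacement(motion)
```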
Two steps specific to the functional connectivity analysis - regression of nuisance covariates and bandpass filtering - were performed to reduce spurious variance unlikely to reflect neuronal activity. Nuisance regression was performed first to avoid attenuation of the signal caused by mismatch in the frequencies of the data and regressors (Hallquist et al., 2013). Nuisance variables included: a) white matter signal; b) cerebrospinal fluid signal; c) 24 head motion parameters: 6 parameters of the current volume, 6 of the preceding volume and a quadratic term for each of these values (Friston et al., 1996, see also: Satterthwaite et al., 2013; Yan et al., 2013); and d) a separate regressor for every volume that displayed a mean frame displacement value higher than 0.5 (Power et al., 2012). Given recent evidence that global signal regression can disturb meaningful signal (Fox et al., 2009; Murphy et al., 2009; Weissenbacher et al., 2009), we did not include global signal as a regressor. After the regression of nuisance covariates, a bandpass filter (0.01–0.08 Hz) was applied. The resulting images were smoothed with a 5 mm FWHM Gaussian kernel.
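The ordering of these two steps (nuisance regression first, band-pass filtering second) can be sketched as below. The TR of 2 s and the toy data are assumptions for illustration only, and the 26 confound columns stand in for the white-matter, CSF, and 24 motion regressors:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def regress_nuisance(data, confounds):
    """Remove nuisance covariates from voxel time courses via OLS.

    data: (n_volumes, n_voxels); confounds: (n_volumes, n_regressors),
    e.g. WM/CSF signals, 24 motion parameters, and spike regressors.
    """
    X = np.column_stack([np.ones(len(confounds)), confounds])
    beta, *_ = np.linalg.lstsq(X, data, rcond=None)
    return data - X @ beta

def bandpass(data, tr, low=0.01, high=0.08, order=2):
    """Zero-phase Butterworth band-pass filter along the time axis."""
    nyquist = 0.5 / tr
    b, a = butter(order, [low / nyquist, high / nyquist], btype="band")
    return filtfilt(b, a, data, axis=0)

# Toy data: 272 volumes (282 minus the 10 discarded), 5 voxels,
# dominated by confound-related variance.
rng = np.random.default_rng(1)
n_vol = 272
confounds = rng.normal(size=(n_vol, 26))
data = confounds @ rng.normal(size=(26, 5)) + rng.normal(size=(n_vol, 5))
cleaned = bandpass(regress_nuisance(data, confounds), tr=2.0)
```

Running the regression before filtering avoids the frequency mismatch between data and regressors noted by Hallquist et al. (2013).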
For the whole-brain functional connectivity analysis, we defined a VWFA seed of interest (Figure 3) based on the fMRI activations shown in Figures 1–2. The seed was defined as the 20 most active voxels in [(tactile Braille vs. Control) x (after training > before training)] contrast (Figure 1E), in the left fusiform gyrus and inferior temporal gyrus regions (the mask was created using Harvard-Oxford Cortical Structures Atlas). The somatosensory cortex seed was defined as a sphere with a 4 mm radius that was centered on the same ROI as used for the ROI analysis shown in Figure 2D. Similar results were obtained with different ROI sizes (e.g. spheres with a 6 mm or 8 mm radius).
The functional connectivity (FC) measure was calculated according to standard procedures. In the whole-brain analysis, for each seed, subject, and scan, we computed a voxel-wise correlation of time courses between the seed region and the rest of the brain, which was then transformed into Fisher's z value maps. In the ROI analysis, a BOLD time course was extracted for each seed, subject, and scan. The seed time courses were then correlated, yielding a Pearson's r correlation coefficient for each subject and scan. Fisher's z transform was applied to ensure normality for the t-test. Correlation coefficients were then correlated with the behavioral measure. Previous studies show that training can lead to rapid changes in resting-state functional connectivity patterns (e.g. Lewis et al., 2009; Urner et al., 2013; Voss et al., 2012). We thus expected that final resting-state functional connectivity would reflect the subjects' intensity of training in the last days of the Braille course, quantified as the change in Braille reading speed over the last month of the course. This behavioral measure was therefore used for the correlation. Given our a priori hypothesis, an uncorrected significance value is reported. However, the same result was obtained even when correction for multiple comparisons was applied to account for multiple behavioral sessions.
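The ROI-based FC computation reduces to correlating two seed time courses, Fisher-transforming the result, and comparing scans with a paired t-test. A minimal sketch; the per-subject values are simulated around the group means reported for the VWFA–S1 analysis (r=0.20 before, r=0.29 after, n=29), not the actual data:

```python
import numpy as np
from scipy import stats

def roi_fc(ts_a, ts_b):
    """Seed-to-seed functional connectivity: Pearson r, Fisher z."""
    r = np.corrcoef(ts_a, ts_b)[0, 1]
    return np.arctanh(r)

# Hypothetical per-subject VWFA-S1 connectivity, before and after the
# course; means echo the reported group values, noise is illustrative.
rng = np.random.default_rng(2)
z_before = np.arctanh(0.20) + 0.1 * rng.normal(size=29)
z_after = np.arctanh(0.29) + 0.1 * rng.normal(size=29)
t, p = stats.ttest_rel(z_after, z_before)
```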
Whole-brain analysis was thresholded at p=0.001 voxel-wise and p=0.05 cluster-wise. The BrainNet Viewer toolbox was used for data visualization (Xia et al., 2013).
Three brain targets were chosen for neuro-navigated TMS, namely the VWFA and two control sites—the vertex and the lateral occipital area (LO). We chose the LO as an additional control site for the VWFA stimulation because it is a visual recognition region in proximity to the VWFA, and its stimulation produces the same non-specific effects. At the same time, TMS studies have shown that the LO is not engaged in the recognition of visually presented words (Duncan et al., 2010).
The VWFA and the LO were marked on each subject’s structural MRI scan based on individual fMRI data from reading experiments. The VWFA target was chosen as the before-course peak of the visual word reading versus the control condition in the main fMRI experiment (Figure 1B, Figure 1—figure supplement 3A; Dehaene and Cohen, 2011), restricted to the left fusiform gyrus. The LO target was chosen as the before-course peak of the tactile object recognition vs. baseline contrast in the control imagery fMRI experiment (Figure 1—figure supplement 1A; Amedi et al., 2001). The vertex was localized anatomically, on the T1 image of each subject.
A lexical decision task was used to measure the speed and accuracy of tactile Braille reading. Subjects were instructed to tactually read Braille strings appearing on a Braille display and to indicate whether they formed a valid Polish word as fast and as accurately as possible by pressing a key on the response pad (see 'procedure' below). Each trial began with a 650 ms blank period. Subsequently, one item was presented (a word or a pseudoword), which was displayed for 8000 ms or until the response was given. A 2000 ms blank period ensued, followed by the next trial. Items were presented in pseudorandom order. The task consisted of 360 trials, split into three runs. The runs lasted 10–15 min each, depending on the subject’s tactile reading speed. Responses were provided by pressing the left or right button on the response pad. Target buttons for words and pseudowords were counterbalanced between participants. Task difficulty was optimized for accuracy analysis, and the variability in accuracy was high (62–92%). Thus, we did not expect to find significant results in the reaction time analysis, which was confirmed with statistical tests.
Stimulus presentation and response logging were programmed in Presentation (Neurobehavioral Systems). Stimuli were displayed using the BraillePen 12 Touch (Harpo, Poznań, Poland), integrated with Presentation via in-house Presentation code. Responses were provided using the RB-830 Response Pad (Cedrus, San Pedro, USA).
The complete set of stimuli used in the TMS experiment consisted of 360 items (180 Polish words and 180 pseudowords). All items were three to six letters long. All words were of low-to-moderate frequency (from 1 to 20 occurrences per million, M = 5.08, SD = 5.06). The neighborhood size, as defined by the OLD20 measure (Yarkoni et al., 2008), was equivalent between words and pseudowords. Words and pseudowords were further divided into two lists, equivalent in length and neighborhood size; words were additionally matched for frequency. Items from one list were tested in the TMS condition (i.e. TMS was applied when they were being shown), whereas items from the other list were tested in the no-TMS control condition (i.e. no TMS was applied when they were being shown; Figure 4A). The lists were counterbalanced across subjects. The SUBTLEX-PL (Mandera et al., 2014) linguistic corpus was used to obtain the psycholinguistic characteristics of the items.
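The OLD20 neighborhood measure (Yarkoni et al., 2008) is the mean orthographic Levenshtein distance from an item to its 20 closest words in the corpus. A minimal sketch (the lexicon and items below are illustrative, not the SUBTLEX-PL materials):

```python
def levenshtein(a, b):
    """Edit distance (insertions, deletions, substitutions)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def old20(item, lexicon, n=20):
    """Mean edit distance to the n closest words (Yarkoni et al., 2008)."""
    dists = sorted(levenshtein(item, w) for w in lexicon if w != item)
    return sum(dists[:n]) / min(n, len(dists))
```

Matching words and pseudowords on this measure equates how 'word-like' their letter strings are, independent of lexical status.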
Pseudowords were constructed either by mixing letters from words used in the experiment (in words composed of one syllable) or by mixing syllables taken from words used in the experiment (in words composed of more than one syllable). We included only items that did not form a valid word but that were phonologically and orthographically plausible.
Repetitive TMS (rTMS) was applied pseudorandomly during half of trials in each run. During each run, TMS was delivered to one of three target sites – the VWFA, the LO or the vertex. Participants were not aware of the order of the target sites, and the order was counterbalanced across participants. TMS pulses were delivered at 1000, 1100, 1200, 1300, and 1400 ms after stimulus onset (i.e., 10 Hz for 500 ms). We decided to delay the TMS for 1 s because of the speed of participants’ tactile reading – we assumed that 1 s was sufficient to complete the early motor phase and to start the processes specifically linked to reading, which we wanted to disrupt in the experiment. The intensity of TMS was set to 100% of the resting motor threshold, defined as the lowest intensity needed to elicit a visible twitch of the hand that was kept relaxed by the participant. The TMS protocol used in this experiment is in agreement with established safety limits (Rossi et al., 2009). Similar protocols have been widely used to interfere with processing in the VWFA (Duncan et al., 2010) as well as in other brain regions (Göbel et al., 2001; Gough et al., 2005; Sandrini et al., 2008).
A MagPro X100 stimulator (MagVenture, Hückelhoven, Germany) with a 70 mm figure-eight coil was used to apply the TMS. Moreover, a frameless neuronavigation system (Brainsight software, Rogue Research, Montreal, Canada) was used with a Polaris Vicra infrared camera (Northern Digital, Waterloo, Ontario, Canada) to guide stimulation.
Participants were asked to fill out safety questionnaires. Next, they were familiarized with TMS, and the resting motor threshold was measured using single pulses applied to the hand area of the left primary motor cortex. Afterwards, subjects were asked to put on a blindfold while they performed the training session without TMS to familiarize themselves with the task. This first training session was followed by the second one, when participants were performing the task with rTMS to get used to the stimulation. After both training sessions, one target site was chosen and the main experiment began. All three target sites were tested one-by-one, in three separate runs, with 5 min breaks between them. The whole TMS experiment lasted approximately 90 min. The items used in the training session (words/pseudowords) were not used in the main session.
Braille word reading speed prior to the onset of the course. To make sure that our subjects were naïve in tactile Braille prior to the onset of the course, subjects were tested with the tactile Braille reading test before the first fMRI session (Materials and methods). Of the 29 subjects included in the data analysis, 26 were unable to read even a single word during the 60 s allowed in the test. One subject managed to read one word, and two subjects read two words.
Single-letter recognition speed prior to the onset of the course. Additionally, we tested single Braille letter recognition, in a manner similar to that used for word reading (Materials and methods). The median number of Braille letters read per minute was 0 (mean=2.28), with individual scores ranging from 0 to 7 letters per minute. These results confirm that at the beginning of the course, the great majority of our subjects were utterly unable to read Braille by touch. In fact, their capacity to recognize even single Braille letters was also very limited.
Braille word reading speed after the course. The distribution of final reading speed was normal (Shapiro-Wilk W(29)=0.94, p=0.123), with mean=6.21, SD=3.94, median=5, mode=4, and range 0–17 words per minute. Two subjects read 0 words per minute at the end of the Braille course.
Single-letter reading speed after the course. After the course, the subjects reached a mean performance of 18.41 single tactile letters read per minute (SD = 4.81). However, there was a large degree of variability in performance, with results ranging from 10 to 29 letters per minute.
Demographic background and tactile Braille reading speed. We tested whether progress in tactile Braille reading was modulated by the demographic background of the subjects. To this end, we divided the subjects into three groups – Braille teachers/professionals (14 subjects), special education students specializing in blindness and related disabilities (11 subjects), and close relatives of blind people (4 subjects) – and entered this division into a regression analysis with final tactile Braille word reading speed as the dependent variable. We found no significant effect in this analysis (all p>0.2). To check whether this null result reflected genuine evidence for the null hypothesis or mere insensitivity of the data, we computed a Bayes factor with the BayesFactor package implemented in R (Rouder and Morey, 2012). The lmBF function in the BayesFactor package estimates linear models and returns the Bayes factor (BF) of the model relative to a null model that predicts the data by the intercept alone. BF values greater than 1 indicate evidence for the alternative over the null hypothesis, values less than 1 the converse, and a BF close to 1 indicates that the data are insensitive (Dienes, 2014). The BF equalled 0.35; we can therefore assume that there is evidence in favour of the null hypothesis. We thus conclude that demographic background did not predict progress in tactile reading in our participants.
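The logic of this Bayes factor analysis can be illustrated with a BIC approximation to the Bayes factor (Wagenmakers, 2007) rather than lmBF's default JZS prior; the group sizes below follow the study, but the reading speeds are illustrative, not the study's data:

```python
import numpy as np

def bic_linear(y, X):
    """BIC of an OLS linear model with Gaussian errors."""
    n = len(y)
    Xd = np.ones((n, 1)) if X is None else np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = float(np.sum((y - Xd @ beta) ** 2))
    k = Xd.shape[1] + 1                 # coefficients plus error variance
    return n * np.log(rss / n) + k * np.log(n)

def bf10_bic(y, X):
    """BIC-approximated Bayes factor of the model with predictors X
    against the intercept-only null (Wagenmakers, 2007); a rough
    stand-in for lmBF's default-prior Bayes factor."""
    return float(np.exp((bic_linear(y, None) - bic_linear(y, X)) / 2))

# Illustrative data: three groups with identical mean reading speed,
# so group membership should yield BF10 < 1 (evidence for the null).
group = np.tile([0, 1, 2], 4)
X = np.column_stack([group == 1, group == 2]).astype(float)
speed = np.tile([4.0, 4.0, 4.0, 8.0, 8.0, 8.0], 2)
bf = bf10_bic(speed, X)
```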
While all of our participants were naïve in tactile Braille, most of them knew how to read Braille visually. Moreover, the course curriculum included learning to recognize Braille letters by sight. As a result, subjects also improved their visual Braille reading speed.
We measured our subjects’ visual Braille reading speed with a lexical decision task in visual Braille (Materials and methods). The test was administered at the beginning and at the end of the course. Only subjects whose accuracy in each session was higher than 60% (chance level was 50%) were included in the reaction time analysis. On this basis, 3 subjects were excluded from the reaction time analysis. Moreover, one subject was excluded from both the accuracy and reaction time analysis due to missing data from the second testing session. Thus, the accuracy analysis was performed on 28 subjects and the reaction time analysis, on 25 subjects.
The mean accuracy (calculated as the mean of the individual subjects’ medians) was 78.25% (SD = 13.91%) at the beginning and 88.14% (SD = 8.75%) at the end of the course. The mean reaction time (RT) was 4587 ms (SD = 1071 ms) at the beginning and 3799 ms (SD = 846 ms) at the end of the course.
The difference between testing sessions was significant, both in the case of accuracy (t(27) = 4.72, p < 0.001) and of reaction time (t(24) = 7.05, p < 0.001). This result indicates that subjects also progressed in visual Braille reading during the Braille course.
Additionally, we transformed the reaction times for visual Braille words into a words-per-minute (WPM) measure to compare the effects of visual and tactile Braille proficiency on brain activity (Appendix 1.4). To this end, we calculated each subject’s median RT (as it is more robust to outliers than the mean) in visual Braille reading and divided 60 s by this median RT, yielding an estimate of the number of words that the subject could read in one minute. This measure was then entered into an SPM regression model.
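The RT-to-WPM conversion amounts to dividing 60 s by the median reaction time; a minimal sketch with illustrative RT values:

```python
import statistics

def wpm_from_rts(rts_ms):
    """Estimated words per minute from lexical-decision RTs.

    Uses the median RT (more robust to outliers than the mean) and
    divides 60 s by it, as for the SPM regression covariate.
    """
    median_s = statistics.median(rts_ms) / 1000.0
    return 60.0 / median_s

# e.g. RTs near the reported end-of-course mean of 3799 ms (illustrative)
rts = [3500, 3799, 4100]  # ms
```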
Before the course, visual word reading resulted in a cluster of activity within the left ventral visual stream, covering the inferior temporal gyrus, fusiform gyrus and the middle occipital gyrus (Figure 1—figure supplement 3A, Table 2). Interestingly, we found much more robust activity during visual Braille reading. Apart from the activity in the bilateral ventral occipital stream, this condition elicited activation in the bilateral frontal and parietal areas, predominantly in the left hemisphere, including the inferior frontal and precentral gyri, precuneus and middle temporal gyri (Figure 1—figure supplement 3B, Table 2). The VWFA activity for visual Braille reading was stronger than for visual words, similar to previously reported responses to novel visual alphabets (Vogel et al., 2014; Xue et al., 2006). Accordingly, the aforementioned visual Braille lexical decision task (see section 1.2) revealed a very slow recognition speed for visual Braille reading (detailed results will be published elsewhere).
The pattern of activity for visual words remained intact across the tactile Braille reading course. We found no suprathreshold voxels in the contrast of visual word reading after versus before the course, even at an exploratory threshold of p=0.01. The same comparison for visual Braille (before<after the course) revealed an after-course increase in activation in the bilateral precuneus, middle temporal gyrus, parietal lobules and mesial frontal areas (Figure 1—figure supplement 3C, Table 2). All of those areas are nodes of the default mode network (Raichle et al., 2001) and showed profound de-activation for all of the tasks in our experiment. Their increased engagement in visual Braille reading could suggest increased cognitive control over the process.
We used the behavioral measurements of tactile and visual Braille reading speed to see how proficiency in Braille reading can modulate the fMRI signal when performing a given reading task. We performed the following correlations:
Tactile Braille reading activations – TB single-letter recognition speed. We checked for signal modulation in tactile Braille reading after the course using the performance in single tactile letter reading. We found that tactile single-letter reading speed correlated positively with the activations localized only in the left ventral visual pathway, including the lingual gyrus, middle and inferior occipital gyri, and the fusiform gyrus (Figure 1—figure supplement 3D, Table 3). This result further confirms the engagement of the left ventral visual stream in tactile reading.
Tactile Braille reading activations – visual Braille reading speed. Visual Braille reading speed did not significantly modulate any tactile Braille reading activations in the visual system.
Tactile Braille imagery activations – tactile Braille reading speed. We found no correlations between tactile Braille proficiency and the signal when it was imagined, even at an exploratory threshold of p=0.01 voxel-wise.
Visual Braille imagery activations – tactile Braille reading speed. Tactile Braille proficiency did not modulate activity during visual Braille imagery even at an exploratory threshold of p=0.01 voxelwise.
In addition to the main experiment, our subjects also completed a control experiment aimed at exploring the imagery processes that accompany tactile reading (see Materials and methods, and Figure 1—figure supplement 1A,B,D). In this experiment, their task was to read or imagine visual words, visual Braille words and tactile Braille words, and to touch or imagine everyday objects.
Tactile and visual Braille imagery. Imagining tactile Braille words vs. rest after the course elicited robust activation in the bilateral cerebellum, bilateral precentral and postcentral gyri (predominantly in the left hemisphere), the left inferior and middle frontal gyri, the bilateral superior temporal sulcus and finally the left fusiform gyrus (MNI -45 -61 -12, BA37, Z = 5.95, p<0.001, Figure 1—figure supplement 2A, Table 4). A very similar pattern of activation was found for imagining visual Braille words vs. rest after the course (Figure 1—figure supplement 2B, Table 4). Here, the fusiform peak was localized slightly more anteriorly: MNI -45 -58 -12 (BA37, Z = 6.38, p<0.001).
We then compared the two imagery conditions. For the visual Braille imagery vs. tactile Braille imagery contrast, we found no suprathreshold voxels, even at an exploratory level of p=0.01 voxel-wise. However, relative to visual Braille imagery, tactile Braille imagery caused more robust activation of the motor and somatosensory areas, the bilateral inferior parietal lobules and the cerebellum (Figure 1—figure supplement 2C). This finding is in accordance with data showing that motor imagery activates the primary motor and somatosensory cortices (Lotze and Halsband, 2006; Porro et al., 1996).
Visual words imagery. Imagining visual words before and after the course elicited activations in the medial frontal gyrus, left precentral gyrus and Broca’s area (left inferior frontal gyrus, BA44) as well as in the left inferior parietal lobule and the right cerebellum. When we lowered the threshold to p=0.005, uncorrected, we found also a cluster in the left occipital lobe, peaking in the fusiform gyrus (MNI -45 -58 -12, BA37, Z = 4.73).
Object imagery. Imagining objects before and after the course activated primary visual areas (BA18, BA19) bilaterally, the bilateral precentral and postcentral gyri and bilateral frontal areas, including the medial frontal gyrus and inferior frontal gyri. Temporal activations were also bilateral, with a peak in the right superior temporal gyrus.
Object touch vs. rest. In the contrast of touching objects vs. rest, we found bilateral lateral occipital activations, which peaked in the middle occipital gyrus (left: MNI -51 -64 -9, BA 19, Z=5.19; right: MNI 51 -64 12, BA37, Z=4.77, Figure 1—figure supplement 2D). This result confirms the existence of a visuo-haptic area called the lateral occipital tactile-visual (LOtv) area reported by Amedi and colleagues (Amedi et al., 2001). This area is known to be activated in visual and tactile object recognition tasks. Recently, it has also been shown to respond to shape information conveyed via the auditory modality through sensory substitution devices (Amedi et al., 2007).
Effects of the Braille course on imagery. Finally, none of the responses to the three imagery conditions showed any changes following the tactile Braille course. Interactions of the tactile Braille imagery, visual Braille imagery, and visual Word imagery contrasts with the after- vs. before-course states showed no suprathreshold voxels (neither positive nor negative), even at an exploratory threshold of p=0.01.
Statistical analysis was performed on Fisher z-transformed correlation coefficients (z(r)) – see Materials and methods. The main effect of script pairs was highly significant (F(2,56)=14.53, p<0.001). Simple effects of script pairs were assessed using post-hoc tests with Bonferroni correction. The correlation between the two Braille conditions (mean z(r)=0.59, SD=0.08) was significantly higher than the correlations between tactile Braille and visual words (mean z(r)=0.28, SD=0.09, p<0.001) and between visual Braille and visual words (mean z(r)=0.11, SD=0.07, p=0.020). The correlation between visual Braille and visual words was higher than the correlation between tactile Braille and visual words; however, this result was only marginally significant (p=0.063).
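The Fisher r-to-z transform referenced above is z(r) = artanh(r); transformed coefficients are then compared across subjects, e.g. with paired t-tests. A minimal sketch of that workflow (the per-subject correlation values below are hypothetical, not the study data):

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform: z(r) = artanh(r) = 0.5 * ln((1+r)/(1-r))."""
    return math.atanh(r)

def paired_t(x, y):
    """Paired t statistic for two matched samples (applied here to z(r) values)."""
    n = len(x)
    diffs = [a - b for a, b in zip(x, y)]
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-subject pattern correlations for two script pairs:
braille_pair = [0.55, 0.62, 0.58, 0.61, 0.57]      # tactile x visual Braille
braille_vs_words = [0.25, 0.31, 0.22, 0.30, 0.27]  # Braille x visual words
t = paired_t([fisher_z(r) for r in braille_pair],
             [fisher_z(r) for r in braille_vs_words])
```

The transform stabilizes the variance of correlation coefficients, which is why the t-test (and the repeated-measures ANOVA used in the manuscript) operates on z(r) rather than on raw r values.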
Whole-brain functional connectivity analysis with the VWFA as a seed region. The results are reported in the main text (Figure 3).
VWFA – left S1 functional connectivity ROI-based analysis. To confirm the increase in VWFA – left S1 functional connectivity observed in the whole-brain analysis (Figure 3), we performed an independent ROI analysis of functional connectivity between independently defined VWFA and left S1 ROIs (see Materials and methods). Functional connectivity between these two regions increased following the Braille course, from r=0.20 to r=0.29 (t(28)=2.3, p=0.029; the statistical comparison was performed on Fisher z-transformed correlation coefficients – see Materials and methods).
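Seed-based functional connectivity of this kind reduces to a Pearson correlation between the two ROI-averaged BOLD time series, then a Fisher z-transform for group-level comparison. A minimal sketch with toy time series (the numbers are illustrative, not study data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two ROI-averaged BOLD time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy series standing in for the VWFA and left S1 ROI averages (hypothetical):
vwfa = [0.1, 0.4, -0.2, 0.3, 0.0, 0.5]
s1 = [0.2, 0.3, -0.1, 0.4, 0.1, 0.4]
r = pearson_r(vwfa, s1)
z = math.atanh(r)  # Fisher z(r), compared across subjects (e.g., paired t-test)
```

Per subject and session, one such r (here z) is obtained; the before- vs. after-course comparison reported above is then a paired test on these per-subject z values.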
Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience 4:324–330. https://doi.org/10.1038/85201
The occipital cortex in the blind: lessons about plasticity and vision. Current Directions in Psychological Science 14:306–311. https://doi.org/10.1111/j.0963-7214.2005.00387.x
Reprint of: visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study. Cortex 72:97–114. https://doi.org/10.1016/j.cortex.2015.10.001
DPARSF: a MATLAB toolbox for "pipeline" data analysis of resting-state fMRI. Frontiers in Systems Neuroscience 4. https://doi.org/10.3389/fnsys.2010.00013
Language-specific tuning of visual cortex? Functional properties of the visual word form area. Brain 125:1054–1069.
The unique role of the visual word form area in reading. Trends in Cognitive Sciences 15:254–262. https://doi.org/10.1016/j.tics.2011.04.003
Using Bayes to get the most out of non-significant results. Frontiers in Psychology 5. https://doi.org/10.3389/fpsyg.2014.00781
Investigating occipito-temporal contributions to reading with TMS. Journal of Cognitive Neuroscience 22:739–750. https://doi.org/10.1162/jocn.2009.21207
Brain regions that represent amodal conceptual knowledge. Journal of Neuroscience 33:10552–10558. https://doi.org/10.1523/JNEUROSCI.0051-13.2013
The global signal and observed anticorrelated resting state brain networks. Journal of Neurophysiology 101:3270–3283. https://doi.org/10.1152/jn.90777.2008
Movement-related effects in fMRI time-series. Magnetic Resonance in Medicine 35:346–355.
Relaxed genetic control of cortical organization in human brains compared with chimpanzees. Proceedings of the National Academy of Sciences of the United States of America 112:14799–14804. https://doi.org/10.1073/pnas.1512646112
Dissociating linguistic processes in the left inferior frontal cortex with transcranial magnetic stimulation. The Journal of Neuroscience 25:8010–8016. https://doi.org/10.1523/JNEUROSCI.2307-05.2005
Plasticity in primary somatosensory cortex resulting from environmentally enriched stimulation and sensory discrimination training. Biological Research 41:425–437.
The mental number line and the human angular gyrus. NeuroImage 14:1278–1289. https://doi.org/10.1006/nimg.2001.0927
Alexia for Braille following bilateral occipital stroke in an early blind woman. Neuroreport 11:237–240.
Transmodal sensorimotor networks during action observation in professional pianists. Journal of Cognitive Neuroscience 17:282–293. https://doi.org/10.1162/0898929053124893
Using structural and functional brain imaging to uncover how the brain adapts to blindness. Annals of Neuroscience and Psychology 2.
Cortical mechanisms of sensory learning and object recognition. Philosophical Transactions of the Royal Society B: Biological Sciences 364:321–329. https://doi.org/10.1098/rstb.2008.0271
Short-term plasticity of visuo-haptic object recognition. Frontiers in Psychology 5. https://doi.org/10.3389/fpsyg.2014.00274
Tactile-auditory shape learning engages the lateral occipital complex. The Journal of Neuroscience 31:7848–7856. https://doi.org/10.1523/JNEUROSCI.3399-10.2011
Representational similarity analysis – connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience 2. https://doi.org/10.3389/neuro.06.004.2008
Circular analysis in systems neuroscience: the dangers of double dipping. Nature Neuroscience 12:535–540. https://doi.org/10.1038/nn.2303
Transcranial magnetic stimulation of the visual cortex induces somatotopically organized qualia in blind subjects. Proceedings of the National Academy of Sciences of the United States of America 103:13256–13260. https://doi.org/10.1073/pnas.0602925103
Learning sculpts the spontaneous activity of the resting human brain. Proceedings of the National Academy of Sciences of the United States of America 106:17558–17563. https://doi.org/10.1073/pnas.0902455106
Structural brain plasticity in adult learning and development. Neuroscience and Biobehavioral Reviews 37:2296–2310. https://doi.org/10.1016/j.neubiorev.2013.02.014
Subtlex-pl: subtitle-based word frequency estimates for Polish. Behavior Research Methods 47:471–483. https://doi.org/10.3758/s13428-014-0489-4
Feeling by sight or seeing by touch? Neuron 42:173–179.
Neural reorganization following sensory loss: the opportunity of change. Nature Reviews Neuroscience 11:44–52. https://doi.org/10.1038/nrn2758
Perception and action in 'visual form agnosia'. Brain 114:405–428.
Cross-modal plasticity as a consequence of sensory loss: insights from blindness and deafness. In: Stein BE, editor. The New Handbook of Multisensory Processing. Boston: The MIT Press. pp. 737–759.
Neural regions essential for reading and spelling of words and pseudowords. Annals of Neurology 62:481–492. https://doi.org/10.1002/ana.21182
Functional imaging of perceptual learning in human primary and secondary somatosensory cortex. Neuron 40:643–653.
Primary motor and sensory cortex activation during motor performance and motor imagery: a functional magnetic resonance imaging study. The Journal of Neuroscience 16:7688–7698.
Neural correlates of multisensory perceptual learning. The Journal of Neuroscience 32:6263–6274. https://doi.org/10.1523/JNEUROSCI.6138-11.2012
The interactive account of ventral occipitotemporal contributions to reading. Trends in Cognitive Sciences 15:246–253. https://doi.org/10.1016/j.tics.2011.04.001
Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128:606–614. https://doi.org/10.1093/brain/awh380
A ventral visual stream reading center independent of visual experience. Current Biology 21:363–368. https://doi.org/10.1016/j.cub.2011.01.040
Safety, ethical considerations, and application guidelines for the use of transcranial magnetic stimulation in clinical practice and research. Clinical Neurophysiology 120:2008–2039. https://doi.org/10.1016/j.clinph.2009.08.016
Default Bayes factors for model selection in regression. Multivariate Behavioral Research 47:877–903. https://doi.org/10.1080/00273171.2012.734737
Somatotopic organization of human secondary somatosensory cortex. Cerebral Cortex 11:463–473. https://doi.org/10.1093/cercor/11.5.463
Neural changes with tactile learning reflect decision-level reweighting of perceptual readout. Journal of Neuroscience 33:5387–5398. https://doi.org/10.1523/JNEUROSCI.3482-12.2013
Effects of literacy in early visual and occipitotemporal areas of Chinese and French readers. Journal of Cognitive Neuroscience 26:459–475. https://doi.org/10.1162/jocn_a_00499
Corticocortical and thalamocortical information flow in the primate visual system. Progress in Brain Research 149:173–185. https://doi.org/10.1016/S0079-6123(05)49013-5
Parallel versus sequential processing in print and braille reading. Research in Developmental Disabilities 33:2153–2163. https://doi.org/10.1016/j.ridd.2012.06.012
Differential cognitive and perceptual correlates of print reading versus braille reading. Research in Developmental Disabilities 34:372–385. https://doi.org/10.1016/j.ridd.2012.08.012
MRC psycholinguistic database: machine-usable dictionary, version 2.00. Behavior Research Methods, Instruments, & Computers 20:6–10. https://doi.org/10.3758/BF03202594
Moving beyond Coltheart's N: a new measure of orthographic similarity. Psychonomic Bulletin & Review 15:971–979. https://doi.org/10.3758/PBR.15.5.971
Heidi Johansen-Berg, Reviewing Editor; University of Oxford, United Kingdom
In the interests of transparency, eLife includes the editorial decision letter and accompanying author responses. A lightly edited version of the letter sent to the authors after peer review is shown, indicating the most substantive concerns; minor comments are not usually included.
Thank you for submitting your work entitled "Massive cortical reorganization in sighted Braille readers" for peer review at eLife. Your submission has been favorably evaluated by Timothy Behrens (Senior editor) and three reviewers, one of whom, Heidi Johansen-Berg, is a member of our Board of Reviewing Editors, and another is Krish Sathian.
The reviewers have discussed the reviews with one another and the Reviewing editor has drafted this decision to help you prepare a revised submission.
This is a novel and interesting study that provides clear and converging evidence on multimodal plasticity following Braille training in sighted individuals. The authors are to be especially commended for their incorporation of multiple control conditions, using interaction analyses and avoiding double-dipping.
1) For the ROI analysis (Figure 2B) the interaction in the VWFA (and IPS and LOA) is driven as much by increased deactivation to control stimuli as by increased activation for tactile Braille words. How should this be interpreted? Is there a significant increase in positive activation from pre to post training for tactile Braille words (i.e. as a post-hoc test to determine what is driving the interaction)?
2) TMS study – although the lateral occipital area was chosen here as a control site, doesn't the FMRI data (Figure 2C) suggest it shows a similar effect to the VWFA? How should this be reconciled?
3) In the Discussion, it should be explained why several experiments failed to find similar reorganization in sighted subjects, even after extensive training (Kupers et al., 2006; Ptito et al., 2005).
4) Figure 1G shows a correlation between post-training activity during tactile Braille reading and Braille reading speed. How many subjects had a reading speed of 0 WPM? Is the spread of reading speed suitable for a linear regression analysis?
5) For the correlation between resting connectivity strength and change in reading speed over the preceding month (Figure 3D) – how many behavioural variables were tested for correlation here and should some correction be made for multiple comparisons? Why change over the past month rather than some other time interval? And why change in reading speed rather than simply reading speed (when the latter was considered for Figure 1G for example)?
6) In the subsection “Representation Similarity Analysis”: The claim that the correlations of Braille conditions with the visual word condition were weaker than those between the two Braille conditions should be substantiated by appropriate statistical tests. The tests reported in Section 1.6 of the Appendix do not seem to address this issue.
7) The authors used a rather rapid rate of rTMS (10 Hz). Although this has indeed been used to suppress activity as in the prior study of rTMS over the VWFA, this rate is sometimes considered facilitatory. Could the authors comment on this?
8) In the Introduction, considerable attention is devoted to the idea that the behavioral relevance of the cross-modal recruitment of visual cortex demonstrated in the present study is highly novel. Without detracting from the novelty of the study, it would be appropriate to acknowledge earlier studies that did demonstrate such behavioral relevance, e.g., Zangaladze et al., Nature, 401: 587-590, 1999; Merabet et al., Neuron, 42: 173-179, 2004. https://doi.org/10.7554/eLife.10762.019
Essential revisions: 1) For the ROI analysis (Figure 2B) the interaction in the VWFA (and IPS and LOA) is driven as much by increased deactivation to control stimuli as by increased activation for tactile Braille words. How should this be interpreted? Is there a significant increase in positive activation from pre to post training for tactile Braille words (i.e. as a post-hoc test to determine what is driving the interaction)?
Thank you for helping us to clarify this important issue. Our analysis, which we outline below, shows that the interaction in the VWFA and LOA is driven by two effects. First, there is a decrease of activation to tactile control stimuli. Second, there is an increased activation for tactile Braille words. Post-hoc tests show that neither of the two effects is significant on its own. In the VWFA, where the interaction is significant at p<0.001, the post-hoc paired t-tests show p=0.23 for tactile Braille words (after course) vs. tactile Braille words (before course) and p=0.38 for tactile control (after course) vs. tactile control (before course). In the LOA, where the interaction is significant at p=0.0047, the post-hoc paired t-tests show p=0.35 for tactile Braille words (after course) vs. tactile Braille words (before course) and p=0.41 for tactile control (after course) vs. tactile control (before course). In both locations, the interaction is driven by both factors. Per your request, we have clarified the above in the new version of the manuscript, with the following change to the description of the ROI results in the VWFA and the LOA:
“In the VWFA (Figure 2B; all ROIs are in the left hemisphere) following the Braille course, the response to tactile Braille words changed from de-activation to positive activation, resulting in a significant difference between tactile words and their control (interaction: p<0.001). […] These responses remained unchanged throughout the course. The lateral occipital area (Figure 2C) showed a similar emergence of responses to tactile Braille words after the course as well.”
As to what causes the drop of activation to control stimuli, we believe that the pattern of activation in SI and SII is very informative here. In SI, and most vividly in SII, there is a general drop in activation to all tactile stimuli after the course, of 15 and 31 percent, respectively (Author response image 1).
This most likely reflects more efficient tactile processing. It is plausible that this decrease in the somatosensory cortex also led to a general decrease of somatosensory responses in the higher visual cortex (Author response image 2). As the figure below explains, our interpretation is that the magnitude of the final response most likely results from two processes: a general decrease of activation for all somatosensory stimuli (both tactile Braille words and controls), and a specific increase of activation for Braille words.
A second factor contributing to the increased deactivation to control stimuli after the course might be a disengagement of attention from control stimuli. Before the course, our subjects could not distinguish Braille words from control stimuli. After the course, Braille words became clearly distinguishable to them. The subjects could therefore rapidly disengage their attention from tactile control stimuli. This highly significant (p<0.001) disengagement can be seen in the ROI analysis of the IPS (Figure 2E). Disengagement of the IPS for written word stimuli is a known hallmark of reading activations that appears quite early in the course of reading development.
To reflect the above in the new version of the manuscript, we have made the following change to the description of the ROI results in the somatosensory cortices and the IPS:
“Those activation drops in the somatosensory cortices and in IPS most likely led to the activation drop for control tactile stimuli observed in the VWFA and LO (Figure 2B-C).”
2) TMS study –
although the lateral occipital area was chosen here as a control site, doesn't the FMRI data (Figure 2C) suggest it shows a similar effect to the VWFA? How should this be reconciled?
The lateral occipital area (LOA) plays a crucial role in object recognition (Grill-Spector et al., 2001). It is also activated in various word recognition tasks (Duncan et al., 2009; Wright et al., 2008). However, the exact role of the LOA in reading remains unknown. Lesions to it do not seem to affect reading itself (Philipose et al., 2007). Milner et al. (1991) reported a case study of a patient with a left lateral occipital lesion who suffered from visual agnosia while reading functions remained relatively unimpaired. On the other hand, the LOA is known to be activated when advanced readers start reading a novel script. For example, French native speakers activated the bilateral LOA when observing Chinese characters – stimuli that were not familiar to them (Szwed et al., 2014) (Author response image 3).
Based on those observations and a large body of existing results, including the TMS work by Zangaladze, Epstein, Grafton, and Sathian, (1999) and Merabet et al., (2004) discussed in the eighth paragraph of the Discussion, a plausible explanation is that the LOA is responsible for analysing object features, including tactile object features. However, it is not critical for visual and Braille word recognition.
The process of reading itself engages a more ventral area – the VWFA, which is widely connected with left-hemispheric language areas (Bouhali et al., 2014). This is why, we believe, TMS applied to the LOA does not influence reading accuracy, whereas inhibition of the VWFA does (Duncan, Pattamadilok, and Devlin, 2010, and our own results presented here): LOA activity is not critical for visual and Braille word recognition. This interpretation is consistent with the fact that the main effect in the LOA appeared only in the ROI analysis. When we correlated final tactile Braille reading speed with the whole-brain activity to tactile Braille words vs. control after the course, we found only a small cluster of 41 voxels (p=0.005 voxel-wise, uncorrected) located in the right LOA (MNI 51 -55 12). The increase of activity in both the LOA and the VWFA for tactile reading suggests that visual and tactile reading share similar neural correlates along the ventral visual stream. However, the exact function of the LOA in reading seems to be more accessory than critical.
We have added the following paragraph to the Discussion:
“ROI analysis (Figure 2B) revealed that lateral occipital area (LOA) presented a pattern of activity increase to tactile words due to the course similar to the VWFA. […] However, the exact function of LOA in reading seems to be more accessory than critical.”
3) In the Discussion, it should be explained why several experiments failed to find similar reorganization in sighted subjects, even after extensive training (Kupers et al., 2006; Ptito et al., 2005).
We believe that the main cause of the different results between our study and the studies of Kupers et al. (2006) and Ptito et al. (2005) was the length and the character of the training. In both papers, subjects were trained on a simple tactile discrimination task. Ptito and colleagues trained their subjects for seven days; Kupers et al., for two days. Our experiment, on the other hand, was longer (9 months) and used complex, entire words as stimuli. Indeed, experiments that study learning-related plasticity at multiple time points (Lövdén et al., 2013) suggest that at the initial stage of Braille learning, the somatosensory cortex might have increased its response to Braille words. Then, as the effects of early sensory learning consolidated in the somatosensory cortex, the cortical focus of learning shifted elsewhere – in our case, to the ventral visual stream. Because the above-mentioned studies used simple stimuli and short learning periods, they captured only the initial reorganization in the somatosensory system.
The following lines were modified and added in the Discussion:
“Cortical reorganization that crosses the sensory boundaries is prominent in blind and deaf humans (Hirsch et al., 2015; Pavani and Roder, 2012; Sadato et al., 2002) and in congenitally deaf cats (Lomber et al., 2011). […] Then, as the effects of early sensory learning consolidated in the somatosensory cortex, the cortical focus of learning shifted elsewhere, in our case the ventral visual stream.”
4) Figure 1G shows a correlation between post-training activity during tactile Braille reading and Braille reading speed. How many subjects had a reading speed of 0 WPM? Is the spread of reading speed suitable for a linear regression analysis?
There were 2 subjects who read 0 words per minute at the end of the Braille course. The distribution of reading speed was normal (Shapiro-Wilk (29)=0.94, p=0.123; Figure 1), with mean=6.21, SD=3.94, median=5, and range 0–17.
The subject who read 17 words per minute was an outlier (Figure 1B). To check whether the final outcome of the model was driven by this outlier, we recalculated the model excluding this subject. At p=0.005 voxel-wise, we found a cluster of 50 voxels including the left middle occipital gyrus (z=3.45, BA19, MNI -42 -85 -1) and the left inferior occipital gyrus (z=2.92, BA19, MNI -45 -73 -13; Figure 2).
We therefore demonstrate that while the outlier influenced cluster size and Z value, exclusion of the outlier did not change the localization of the effect: the only neural correlate of Braille reading speed was found in the left visual cortex.
We thus conclude that the spread of final Braille reading speed was suitable for regression analysis. As the exclusion of the outlier did not dramatically change the overall message of the SPM regression analysis, we decided not to exclude these data from the final model presented in the paper. We hope that this additional information answers the reviewers’ questions.
The following information was added to the Appendix 1.1:
“The distribution of final reading speed was normal (Shapiro-Wilk (29)=0.94, p = 0.123), with the mean=6.21, SD=3.94, median=5, mode=4, range 0-17. There were 2 subjects who read 0 words per minute at the end of the Braille course.”
5) For the correlation between resting connectivity strength and change in reading speed over the preceding month (Figure 3D) –
how many behavioural variables were tested for correlation here and should some correction be made for multiple comparisons? Why change over the past month rather than some other time interval? And why change in reading speed rather than simply reading speed (when the latter was considered for Figure 1G for example)?
We agree that the large number of behavioral sessions in our study poses a risk of improper use of statistical methods, which may lead to biased results. However, we believe that the correlation presented in Figure 3B is valid and robust.
Previous studies show that training can lead to rapid changes in resting-state functional connectivity patterns. For example, Lewis et al. (2009) demonstrated that 2 to 9 days of perceptual visual training modifies functional connectivity between the visual cortex and frontal regions. Urner et al. (2013) showed that even one training session on a motion-dot coherence detection task changes functional connectivity between the hippocampus and striatum, and that this change is preserved a day after the training. Rapid changes in resting-state functional networks can also be observed following complex task learning. Voss et al. (2012) showed that 20 hours of videogame training leads to changes in functional connectivity of the fronto-executive network.
Based on the above-mentioned studies, we expected that final resting-state functional connectivity would mostly reflect subjects’ intensity of training in the last days of the Braille course. In our study, this can be quantified as the change in Braille reading speed over the last month of the course. This behavioral measure was thus used for the correlation. Given our a priori hypothesis, we believe that applying correction for multiple comparisons is not necessary in this case. Note that absolute Braille reading speed is more likely to reflect the regularity of training across the whole course. Such a measure is well suited for correlation with the task-based fMRI results (Figure 1G), but it is probably not optimal in the case of the resting-state fMRI analysis.
With this in mind, we want to stress that the correlation presented in Figure 3B remains significant even when a conservative correction for multiple comparisons is applied. In our study, behavioral data were collected in 6 testing sessions (beginning of the course, 5th month, 6th month, 7th month, 8th month, and end of the course). Thus, correlating the functional connectivity measure with all meaningful time intervals that can be formed (end of the course – 8th month, end of the course – 7th month, end of the course – 6th month, end of the course – 5th month, and end of the course – beginning of the course), as well as with absolute final reading speed, would form 6 comparisons. Applying a conservative Bonferroni correction to account for these comparisons would yield a statistical threshold of p = 0.008 uncorrected, equal to p = 0.05 corrected for 6 comparisons. The correlation we demonstrate in Figure 3B is significant at a statistical threshold of p = 0.007 uncorrected, equal to p = 0.042 Bonferroni-corrected for 6 comparisons. Thus, the correlation presented in Figure 3B can be robustly observed even when we apply a correction that accounts for a random search across all meaningful behavioral measures.
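The threshold arithmetic in the paragraph above can be sketched in a few lines (`bonferroni` is a hypothetical helper for illustration, not code from the study):

```python
def bonferroni(p_uncorrected, n_comparisons):
    # Bonferroni correction: multiply the p value by the number of
    # comparisons performed, capping the result at 1.0
    return min(p_uncorrected * n_comparisons, 1.0)

# Six candidate behavioural measures (five time intervals plus absolute
# final reading speed) give n = 6, so the observed p = 0.007 survives
# correction at the 0.05 level:
p_corrected = bonferroni(0.007, 6)  # 0.007 * 6 = 0.042 < 0.05
```

Equivalently, one can compare the uncorrected p against an adjusted threshold of 0.05 / 6 ≈ 0.008, which is the form used in the response above.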
In summary, our choice of behavioral measure was based on an a priori hypothesis and previous studies. We believe that applying correction for multiple comparisons is not necessary. However, applying a conservative correction would not change our results, which confirms that the correlation presented in Figure 3B is robust. We are ready to report the corrected significance value if the reviewers ask us to do so.
To make these points clearer, we added the following lines to the manuscript (Materials and methods):
“Previous studies show that training can lead to rapid change in resting-state functional connectivity pattern (e.g., Lewis et al., 2009, Urner et al., 2013; Voss et al., 2011). […] However, the same result was obtained even when correction for multiple comparisons was applied to account for multiple behavioural sessions.”
6) In the subsection “Representation Similarity Analysis”: The claim that the correlations of Braille conditions with the visual word condition were weaker than those between the two Braille conditions should be substantiated by appropriate statistical tests. The tests reported in Section 1.6 of the Appendix do not seem to address this issue.
We believe that the reviewers had in mind Section 1.6 of the Appendix (Representation Similarity Analysis: p values). We agree that this section was perhaps too short and failed to adequately convey the statistical tests we actually did.
In that section, to compare similarities between the examined scripts, we calculated correlation coefficients of pairwise correlations between them (tactile Braille x visual Braille, tactile Braille x visual words, and visual Braille x visual words). The correlation coefficients were then Fisher r-to-z transformed and compared using paired t-tests.
Thus, we performed paired t-tests of the correlation coefficients of following pairs:
1) Tactile Braille x visual Braille and visual Braille x visual words, t(28)=-2.94, p=0.007;
2) Tactile Braille x visual Braille and tactile Braille x visual words, t(28)=-5.21, p<0.001;
3) Tactile Braille x visual words and visual Braille x visual words, t(28)=-2.45, p=0.021.
Using t-tests in assessment of differences between neural patterns across different regions or different cognitive functions is a standard procedure (see e.g. Bannert and Bartels, 2013; Chikazoe, Lee, Kriegeskorte, and Anderson, 2014).
We believe that the reviewers’ remark concerned the pairwise comparisons of mean correlation coefficients using paired t-tests. We understand this concern, as performing a series of t-tests without correction for multiple comparisons can increase the probability of a type I error. We therefore recalculated the data as follows:
In order to address the relations between the neural patterns of the three examined scripts, we calculated a one-way repeated measures ANOVA with Fisher r-to-z transformed correlation coefficients (z(r)) as the dependent variable and a factor of script pairs (3 levels: tactile Braille and visual Braille, tactile Braille and visual words, and visual Braille and visual words). The main effect of script pairs was highly significant (F(2,56)=14.53, p<0.001). Simple effects of script pairs were assessed using post-hoc tests with Bonferroni correction. The correlation between the two Braille conditions was significantly higher than the correlations between tactile Braille and visual words (p<0.001) and between visual Braille and visual words (p=0.020). The correlation between visual Braille and visual words was higher than the correlation between tactile Braille and visual words; however, this result was only marginally significant (p=0.063).
To sum up, even when a conservative Bonferroni correction for multiple comparisons was applied, the correlation between the two Braille scripts remained significantly stronger than the correlations between the visual and Braille conditions. The correction did, however, affect the statistical significance of the difference between the correlations of visual words with the Braille conditions. Corrected p values, together with a modified Methods section, were inserted in the manuscript. We hope that this modification of the statistical methods meets the reviewers’ concerns.
The following lines were added to the main text (Materials and methods):
“The resulting correlation coefficients for each condition pair were Fisher r-to-z transformed (z(r)) and compared on a group level in one-way repeated measures ANOVA with a factor of script pairs (3 levels: tactile Braille and visual Braille, tactile Braille and visual words, and visual Braille and visual words). Simple effects were analyzed using post-hoc tests with Bonferroni correction.”
And to Appendix 1.6:
“The main effect of script pairs was highly significant (F(2,56)=14.53, p<0.001). Simple effects of script pairs were assessed using post-hoc tests with Bonferroni correction. The correlation between the two Braille conditions was significantly higher than the correlations between tactile Braille and visual words (p<0.001) and between visual Braille and visual words (p=0.020). The correlation between visual Braille and visual words was higher than the correlation between tactile Braille and visual words; however, this result was only marginally significant (p=0.063).”
7) The authors used a rather rapid rate of rTMS (10 Hz). Although this has indeed been used to suppress activity as in the prior study of rTMS over the VWFA, this rate is sometimes considered facilitatory. Could the authors comment on this?
We agree that some studies suggest that low-frequency (i.e., ≤1 Hz) stimulation decreases cortical excitability while high-frequency (i.e., >1 Hz) stimulation increases cortical excitability. This classification of TMS frequencies is clearly illustrated in the motor system, where low-frequency rTMS delivered to the primary motor cortex reduces the amplitude of the motor evoked potential (MEP) while high-frequency rTMS enhances MEP amplitude (Berardelli et al., 1999; Chen et al., 1997; Jennum et al., 1995; Maeda et al., 2000; Pascual-Leone et al., 1994; Rossi et al., 2000). However, it is less clear whether these findings generalize to areas outside the motor cortex. For instance, to induce speech arrest (i.e., disruption of speech production), rTMS at rather high frequencies (4-32 Hz) has been used over the left prefrontal cortex (Epstein et al., 1996; Jennum et al., 1994; Pascual-Leone et al., 1991). Similarly, the majority of studies applying either high- or low-frequency rTMS to areas involved in cognitive processes showed disruptive, rather than facilitatory, effects on behavioural measures such as reaction times or accuracy (Sliwinska et al., 2014; Gough et al., 2005; Hartwigsen et al., 2010; Pitcher et al., 2007; Pobric et al., 2010; Romero et al., 2006; Whitney et al., 2010). Consequently, these results demonstrate that it may be somewhat simplistic to classify a stimulation protocol as inhibitory or facilitatory based solely on the frequency of stimulation.
We admit that choosing a specific frequency of stimulation is challenging because several different frequencies are likely to work equally well. There are, however, some heuristic guidelines that helped us to constrain the choice. Low-frequency rTMS is used in off-line TMS experiments, where long-lasting stimulation is believed to have an inhibitory after-effect lasting 30–60 min, depending on the duration and intensity of the stimulation (Ridding and Rothwell, 2007). On-line experiments, such as ours, tend to use high-frequency rTMS to produce short-lasting inhibition of cognitive processes. Many studies have used 10 Hz stimulation during task performance to slow reaction times (Göbel et al., 2001) and/or induce errors (Hartwigsen et al., 2010). In fact, as mentioned in the Materials and methods section (subsection “TMS protocol”), we chose the specific paradigm that used rTMS at a frequency of 10 Hz for 500 ms because it has proven to be very effective and robust for producing virtual lesions across different cortical areas (Bjoertomt et al., 2002; Duncan et al., 2010; Göbel et al., 2001; Hartwigsen, Price, et al., 2010; Lavidor and Walsh, 2003; Pitcher et al., 2007; Rushworth, Ellison, and Walsh, 2001), not only the Visual Word Form Area (Duncan et al., 2010).
8) In the Introduction, considerable attention is devoted to the idea that the behavioral relevance of the cross-modal recruitment of visual cortex demonstrated in the present study is highly novel. Without detracting from the novelty of the study, it would be appropriate to acknowledge earlier studies that did demonstrate such behavioral relevance, e.g., Zangaladze et al., Nature, 401: 587-590, 1999; Merabet et al., Neuron, 42: 173-179, 2004.
We agree with the reviewers that those papers add important information about the state of the art concerning visual cortex engagement in non-visual tasks. The pioneering work by Zangaladze et al. (1999) showed that TMS over the occipital cortex interferes with a tactile grating-orientation task in the sighted, proving that the visual cortex is engaged in tactile tasks. Merabet et al. (2004) demonstrated a double dissociation between the engagement of the somatosensory and visual cortices in a tactile discrimination task in healthy sighted subjects: low-frequency rTMS applied to the somatosensory cortex affected judgments of roughness/texture, but not of the distance between stimuli, while rTMS to the visual cortex yielded the opposite result, disrupting judgments of distance, but not of roughness. Therefore, we now:
Mention the two (very relevant) citations in the Introduction (second paragraph);
Remove the sentence: “none of them demonstrated that such cortical changes are behaviorally relevant” (Introduction, second paragraph);
Instead, write: “the behavioural relevance of these cortical mechanisms remains unclear, especially for complex stimuli”.
We then discuss the two above-mentioned studies in the Discussion section:
“The behavioral relevance of such activations, however, was demonstrated only for simple tactile stimuli: grating orientation (Zangaladze et al., 1999) and distance judgement (Merabet et al., 2004).”
https://doi.org/10.7554/eLife.10762.020
- Marcin Szwed
The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.
Supported by a National Science Centre Poland grant (2012/05/E/HS6/03538), a Marie Curie Career Integration grant (618347) and funds from the Polish Ministry of Science and Higher Education for co-financing of international projects, years 2013-2017, awarded to MS. AM and KJ were supported by a grant from the National Science Center Poland (2014/14/M/HS6/00918). This project was realized with the aid of CePT research infrastructure purchased with funds from the European Regional Development Fund as part of the Innovative Economy Operational Programme, 2007–2013. We gratefully acknowledge Boris Gutkin, Christophe Pallier, Antonio Moreno, Valentina Borghesani, Karim N’dyaie, Małgorzata Kossut, Weronika Dębowska, Adam Ryba, Karolina Dukała, Avital Hahamy, Paweł Hanczur, the subjects, and the Polish blind community.
Human subjects: The research described in this article was approved by the Committee for Research Ethics of the Institute of Psychology of the Jagiellonian University (decisions 28/06/2012 and 12/03/2014). Informed consent and consent to publish were obtained from each of the participants in accordance with best-practice guidelines for MRI and TMS research.
- Heidi Johansen-Berg, University of Oxford, United Kingdom
© 2016, Siuda-Krzywicka et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.