1. Abstract
Experience-based plasticity of the human cortex mediates the influence of individual experience on cognition and behavior. The complete loss of a sensory modality is among the most extreme such experiences. Investigating such a selective, yet extreme change in experience allows for the characterization of experience-based plasticity at its boundaries.
Here, we investigated information processing in individuals who lost vision at birth or early in life by probing the processing of braille letter information. We characterized the transformation of braille letter information from sensory representations depending on the reading hand to perceptual representations that are independent of the reading hand.
Using a multivariate analysis framework in combination with fMRI, EEG and behavioral assessment, we tracked cortical braille representations in space and time, and probed their behavioral relevance.
We located sensory representations in tactile processing areas and perceptual representations in sighted reading areas, with the lateral occipital complex as a connecting “hinge” region. This elucidates the plasticity of the visually deprived brain in terms of information processing.
Regarding information processing in time, we found that sensory representations emerge before perceptual representations. This indicates that even extreme cases of brain plasticity adhere to a common temporal scheme in the progression from sensory to perceptual transformations.
Ascertaining behavioral relevance through perceived similarity ratings, we found that perceptual representations in sighted reading areas, but not sensory representations in tactile processing areas, are suitably formatted to guide behavior.
Together, our results reveal a nuanced picture of both the potential and the limits of experience-dependent plasticity in the visually deprived brain.
2. Introduction
Human brains vary due to individual experiences. This so-called experience-based plasticity of the human cortex mediates cognitive and behavioral adaptation to changes in the environment 2. Typically, plasticity reflects learning from species-typical experiences. However, plasticity also results from species-atypical changes to experience like the loss of a sensory modality.
Sensory loss constitutes a selective, yet large-scale change in experience that offers a unique experimental opportunity to study cortical plasticity at its boundaries 3. One deeply investigated case of sensory loss is blindness, i.e., the lack of visual input to the brain. Previous research has shown that cortical structures most strongly activated by visual input in sighted brains are activated by a plethora of other cognitive functions in visually deprived brains 4, including braille reading 5–11. However, overlapping functional responses alone cannot inform us about the nature of the observed activations, i.e., what kind of information they represent and thus what role they play in cognitive processing.
To elucidate the nature of information processing in the visually deprived brain, we investigate the tactile braille system in individuals who lost vision at birth or early in life (hereafter blind participants). Braille readers commonly use both hands, requiring their brain to transform sensory tactile input into a hand-independent perceptual format. We made use of this practical everyday requirement to experimentally characterize the transformation of sensory to perceptual braille letter representations. We operationalize sensory braille letter representations as representations coding information specific to the hand that was reading (hand-dependent). In contrast, we operationalize perceptual braille letter representations as representations coding information independent of which hand was reading (hand-independent).
Combining this operationalization with fMRI and EEG in a multivariate analysis framework 12–15, we determine the cortical location and temporal emergence of sensory and perceptual representations. Lastly, to ascertain the functional role of the identified representations, we relate them to behavioral similarity ratings 16–19.
3. Results
We recorded fMRI (N = 15) and EEG (N = 11) data while blind participants (see Supplementary Table 1) read braille letters with their left or right index finger. We delivered the braille stimuli using single piezo-electric refreshable cells. This allowed participants to read braille letters without moving their finger, thus avoiding finger motion artifacts in the brain signal and analyses.
We used a common experimental paradigm for fMRI and EEG that was adapted to the specifics of each imaging modality. The common stimulus set consisted of ten different braille letters (Fig. 1a). Eight letters entered the main analysis. Two letters (E and O) served as vigilance targets to which participants responded with their foot; these trials were excluded from all analyses. The stimuli were presented in random order, with each trial consisting of a 500ms stimulus presentation to either the right or left hand. In fMRI, all trials were followed by an inter-stimulus interval (ISI) of 2,500ms to account for the sluggishness of the BOLD response (Fig. 1b). In EEG, standard trials had an ISI of 500ms while catch trials had an ISI of 1,100ms to avoid movement contamination.
The common experimental paradigm for fMRI and EEG allowed us to use an equivalent multivariate classification scheme to track the transformation of sensory to perceptual representations. We assessed fMRI voxel patterns to reveal where sensory braille letter representations are located in the cortex. Likewise, we assessed EEG electrode patterns to reveal the temporal dynamics of braille letter representations (Fig. 1c).
We operationalized sensory versus perceptual representations as hand-dependent versus hand-independent braille letter representations, respectively. To measure perceptual representations, we trained classifiers on brain data recorded during stimulation of one hand and tested the classifiers on data recorded during the stimulation of the other hand (Fig. 1d). We refer to this analysis as across-hand classification. It reveals perceptual braille letter representations that are independent of the specific hand being stimulated.
To assess sensory representations, we used a two-step procedure. In a first step, we trained and tested classifiers on brain data recorded during stimulation of the same hand (Fig. 1d). We refer to this analysis as within-hand classification. It reveals both sensory and perceptual braille letter representations. Thus, to further isolate sensory representations from perceptual representations, in a second step, we subtracted across-hand classification from within-hand classification results.
3.1 Spatial dynamics of braille letter representations
We started the analyses by determining the locations of sensory and perceptual braille letter representations in the visually deprived brain using fMRI. We focused our investigation on two sets of cortical regions based on previous literature: tactile processing areas and sighted reading areas (Fig. 2a). Given the tactile nature of braille, we expected braille letters to be represented in the tactile processing stream encompassing somatosensory cortices (S1 and S2), the intraparietal sulcus (IPS), and the insula 20. Given that visual reading information is processed in a ventral processing stream 21–23, and that braille reading has been observed to elicit activations along those nodes 5–9, we investigated the sighted processing stream ranging from early visual cortex (EVC) 24 via V4 25 and the lateral occipital complex (LOC) 26 to the letter form area (LFA) 27 and visual word form area (VWFA) 25,26,28,29.
We hypothesized (H1) that sensory braille letter information is represented in tactile processing areas (H1.1), while perceptual braille letter representations are located in sighted reading areas (H1.2). To test H1, we conducted within-hand and across-hand classification of braille letters in the above-mentioned areas for tactile processing and sighted reading (Fig. 2a) in a region-of-interest (ROI) analysis.
We found that within-hand classification of braille letters was significantly above chance in regions associated with both tactile processing (S1, S2, aIPS, pIPS, insula) and sighted reading (EVC, LOC, LFA, VWFA) (Fig. 2b left; N = 15, one-tailed Wilcoxon signed-rank test, P < 0.05, FDR corrected). As expected, this reveals both tactile and sighted reading areas as potential candidate regions housing sensory and perceptual braille letter representations.
To pinpoint the loci of sensory representations we subtracted the results of the across-hand classification from the within-hand classification. We found the difference to be significant in tactile processing areas (S1, S2, aIPS and pIPS) and in LOC, but not elsewhere (Fig. 2b, middle). This confirms H1.1 in that sensory braille letter representations are located in tactile areas.
To determine the location of perceptual representations, we assessed the results of across-hand classification of braille letters. We found significant information (Fig. 2b, right) in sighted reading areas (EVC, LOC, VWFA) and, unexpectedly, in the insula, but in no other tactile processing area. This confirms H1.2 in that perceptual braille letter representations emerge predominantly in sighted reading areas. The surprising finding in the insula can possibly be explained by the insula’s heterogeneous functions 30 beyond tactile processing.
Notably, LOC is the only region that contained both sensory and perceptual braille letter representations. This suggests a “hinge” function in the transformation from sensory to perceptual braille letter representations.
To ascertain whether other areas beyond our hypothesized ROIs contained braille letter representations, we conducted a spatially unbiased fMRI searchlight classification analysis. The results confirmed that braille letter representations are located in the assessed ROIs without revealing any additional regions (Fig. 2c, N = 15, height threshold P < 0.001, cluster-level FWE corrected P < 0.05).
Together, our fMRI results revealed sensory braille letter representations in tactile processing areas and perceptual braille letter representations in sighted reading areas, with LOC serving as a “hinge” region between them.
3.2 Temporal dynamics of braille letter representations
We next determined the temporal dynamics with which braille letter representations emerge using EEG.
We hypothesized (H2) that sensory braille letter representations emerge in time before perceptual braille letter representations, analogous to the sequential processing of sensory representations before perceptual representations in the visual 15,31–33 and auditory 34 domain.
To test H2, we conducted time-resolved within- and across-hand classification on EEG data. We determined the time point at which representations emerge as the first time point, relative to the onset of braille stimulation, at which the classification effects became significant (criterion of fifty consecutive significant time points; 95% confidence intervals reported in brackets).
The EEG classification analyses revealed significant and reliable results for both within- and across-hand classification of braille letters, as well as their difference (Fig. 3a; N = 11, 1,000 bootstraps, one-tailed Wilcoxon signed-rank test, P < 0.05, FDR corrected). We found that within-hand classification became significant at 62ms (29-111ms) (Fig. 3a, blue curve). To isolate sensory representations, we subtracted the results of the across-hand classification from the within-hand classification. This difference became significant at 77ms (45-138ms) (Fig. 3a, black curve). In contrast, the across-hand classification, indicating perceptual representations, became significant later at 184ms (127-230ms) (Fig. 3a, green curve).
Importantly, the temporal dynamics of sensory and perceptual representations differed significantly. Compared to sensory representations, the significance onset of perceptual representations was delayed by 107ms (21-167ms) (N = 11, 1,000 bootstraps, one-tailed bootstrap test against zero, P = 0.012).
To approximate the sources of the temporal signals, we complemented the EEG classification analysis with a searchlight classification analysis in EEG sensor space. In the time window of highest decodability (∼200-300ms), sensory braille letter information was decodable from widespread electrodes across the scalp. In contrast, perceptual braille letter information was decodable later and from fewer electrodes overall, located over right frontal, central, left parietal and left temporal areas (Fig. 3b; N = 11, one-tailed Wilcoxon signed-rank test, P < 0.05, FDR corrected across electrodes and time points). These additional results reinforce that sensory braille letter information is represented in more widespread brain areas than perceptual braille letter information, corroborating our ROI classification results.
In sum, our EEG results characterized the temporal dynamics with which braille letter representations emerge as a temporal sequence of sensory before perceptual representations.
3.3 Relating representations of braille letters to behavior
The ultimate goal of perception is to provide an organism with representations enabling adaptive behavior. The analyses across space and time described above identified potential candidates for such braille letter representations in the sense that the representations distinguished between braille letters. However, not all of these representations have to be used by the brain to guide behavior; some of these representations might be epiphenomenal and only available to the experimenter 35–37.
Therefore, we tested the hypothesis (H3) that the sensory and perceptual braille letter representations identified in space (H1) and time (H2) are in a suitable format to be behaviorally relevant. We used perceived similarity as a proxy for behavioral relevance. The idea is that if two stimuli are perceived to be similar, they will also elicit similar actions 16–19. For this, we acquired perceived similarity ratings from blind participants (N = 19) in a separate behavioral experiment (Fig. 4a middle), in which participants verbally rated the similarity of each pair of braille letters from the stimulus set on a scale.
To relate perceived similarity ratings to neural representations, we used representational similarity analysis (RSA) 38. RSA relates different measurement spaces (such as EEG, fMRI and behavior) by abstracting them into a common representational similarity space. We sorted behavioral similarity ratings into representational dissimilarity matrices (RDMs) indexed in rows and columns by the 8 braille letters used in experimental conditions (Fig. 4a). For neural data, we used decoding accuracies from the previous analyses as a dissimilarity measure 32,39. Thus, by arranging the EEG and fMRI classification results, we obtained EEG RDMs for every time point and fMRI RDMs for each ROI (separately for the within-hand and across-hand analyses). To finally relate behavioral and neural data in the common RDM space, we correlated the behavioral RDM with each fMRI ROI RDM and each EEG time point RDM.
Considering braille letter representations in space through fMRI (Fig. 4b, N = 15, one-tailed Wilcoxon signed-rank test, P < 0.05, FDR corrected), we found that previously identified perceptual representations (i.e., identified by the across-hand analysis) in EVC, LOC, VWFA, and insula showed significant correlations with behavior. In contrast, sensory representations (i.e., identified by the difference between the within- and across-hand analyses) in S1, S2, aIPS, pIPS and LOC were not significantly correlated with behavior. This indicates that perceptual braille letter representations in sighted reading areas are suitably formatted to guide behavior.
Considering braille letter representations in time through EEG (Fig. 4c; N = 11, 1,000 bootstraps, one-tailed Wilcoxon signed-rank test, P < 0.05, FDR corrected), we found significant relationships with behavior for both sensory and perceptual representations. The temporal dynamics mirrored those of the EEG classification analysis (Fig. 3a), in that the results related to sensory representations emerged earlier, at 220ms (167-567ms), than the results related to perceptual representations, at 466ms (249-877ms). The onset latency difference of 240ms (−81-636ms) was significant (N = 11, 1,000 bootstraps, one-tailed bootstrap test against zero, P = 0.046). This indicates that both earlier sensory representations and later perceptual representations of braille letters are suitably formatted to guide behavior.
In sum, our RSA results highlighted that perceptual representations in sighted reading areas, as well as initial sensory and later perceptual representations in time, are suitably formatted to guide behavior.
4. Discussion
We assessed experience-based brain plasticity at its boundaries by investigating the nature of braille information processing in the visually deprived brain. For this, we assessed the transformation of sensory to perceptual braille letter representations in blind participants. Our experimental strategy combining fMRI, EEG, and behavioral assessment yielded three key findings about spatial distribution, temporal emergence and behavioral relevance. First, concerning the spatial distribution of braille letter representations, we found that sensory braille letter representations are located in tactile processing areas while perceptual braille letter representations are located in sighted reading areas. Second, concerning the temporal emergence of braille letter representations, we found that sensory braille letter representations emerge before perceptual braille letter representations. Third, concerning the behavioral relevance of representations, we found that perceptual representations identified in sighted reading areas, as well as sensory and perceptual representations identified in time, are suitably formatted to guide behavior.
4.1 The topography of sensory and perceptual braille letter representations
Previous research has identified the regions activated during braille reading in high detail 5–11. However, activation in a brain region alone does not indicate its functional role or the kind of information it represents 40. Here, we characterize the information represented in a region by distinguishing between sensory and perceptual representations of single braille letters. Our findings extend our understanding of the cortical regions processing braille letters in the visually deprived brain in five ways.
First, we clarified the role of EVC activations in braille reading 5–11 by showing that EVC harbors representations of single braille letters. More specifically, our finding that EVC represents perceptual rather than sensory braille letter information indicates that EVC representations are formatted at a higher perceptual level rather than a tactile input level. Previous studies also found that EVC of blind participants processes other higher-level information such as natural sounds 41,42 and language 43–52. A parsimonious view is that EVC in the visually deprived brain engages in higher-level computations shared across domains, rather than performing multiple distinct lower-level sensory computations. Importantly, higher-level computations are not limited to the EVC in visually deprived brains. Natural sound representations 41 and language activations 53 are also located in EVC of sighted participants. This suggests that EVC, in general, has the capacity to process higher-level information 54. Thus, EVC in the visually deprived brain might not be undergoing fundamental changes in brain organization 53. This promotes a view of brain plasticity in which the cortex is capable of dynamic adjustments within pre-existing computational capacity limits 4,53–55.
Second, we found that VWFA contains perceptual braille letter representations. By clarifying the representational format of language representations in VWFA, our results support previous findings of the VWFA being functionally selective for letter and word stimuli in the visually deprived brain 8,56,57.
Third, LOC represented hand-dependent and -independent braille letter information, suggesting a “hinge” function between sensory and perceptual braille letter representations. We posit that shape serves as an intermediate-level representational format between lower-level properties, such as the specific location of tactile stimulation or the number of dots in braille letters, and higher-level perceptual letter features 58.
Fourth, the finding of letter representations of tactile origin in both VWFA and LOC indicates that the functional organization of both regions is multimodal, contributing to the debate on how experience from vision or other sensory modalities shapes representations along the ventral stream 58,59.
Fifth, we observed that the somatosensory cortices and intra-parietal sulci represent hand-dependent but not hand-independent braille letter representations. This is consistent with previous studies reporting that the primary somatosensory cortex represents the location of tactile stimulation 60, but not the identity of braille words 57. Taken together, these findings suggest that these tactile processing areas represent sensory rather than higher-level features of tactile inputs in visually deprived brains.
The involvement of the insula in processing braille letter information is more difficult to interpret. Based on previous studies in the sighted brain, the insula plays a role in tactile memory 61–63 and multisensory integration 64–67. Both aspects could have contributed to our findings, as braille letters are retrievable from long-term memory but are also inherently nameable and linked to auditory experiences. A future study could disambiguate the contributions of tactile memory and multisensory integration by presenting meaningless dot arrays that are either unnameable or paired with invented names. Insular representations of trained, unnameable stimuli but not novel, unnameable stimuli would align with memory requirements. Insular representations of trained, nameable stimuli but not trained, unnameable stimuli would favor audio-tactile integration.
4.2 Sensory representations emerge before perceptual representations
Using time-resolved multivariate analysis of EEG data 15,31,33, we showed that hand-dependent, sensory braille letter representations emerge in time before hand-independent, perceptual representations. Such sequential multi-step processing in time is a general principle of information processing in the human brain, also known in the visual 15,31,32 and auditory 34 domain. Together, these findings suggest that the human brain, even in extreme instances of species-atypical cortical plasticity, honors this principle.
While braille letter reading follows the same temporal processing sequence as its visual counterpart, it operates on a different time scale. Our results indicate that braille letter classification peaks substantially later in time (∼200ms for hand-dependent and ∼390ms for hand-independent representations) than previously reported classification of visually presented words, letters, objects, and object categories (e.g., ∼125ms for location-dependent and ∼175ms for location-independent representations) 15,31,32,68,69. This discrepancy raises the question of which factors limit the speed of processing braille letters. Importantly, this delay is not a consequence of slower cortical processing in the tactile domain 70. We find that tactile information reaches the cortex quickly: we can classify which hand was stimulated as early as 35ms after stimulation onset (Supplementary Figure 2). Thus, the delay relates directly to the identification of braille letters.
A compelling explanation for the temporal processing properties of braille letter information lies in the underlying reading mechanics. Braille reading is slower than print reading 71,72, even for fluent braille readers. This slowing is specific to braille reading and does not translate to other types of information intake in the visually deprived brain, e.g., auditory information. Blind participants have higher listening rates 73 and better auditory discrimination skills 74 than sighted participants, indicating more efficient auditory processing 75,76. Together, this pattern of results suggests that the temporal dynamics with which braille letter representations emerge are limited by the efficiency of the braille letter system itself, rather than by the capacity of the brain. To test this idea, future studies could compare the temporal dynamics of braille letter and haptic object representations: a temporal processing delay specific to braille letters would support the hypothesis.
4.3 Representations identified in space and time guide behavior
Our results clarified that perceptual rather than sensory braille letter representations identified in space are suitably formatted to guide behavior. However, we acknowledge that this finding is task-dependent. Arguably, general similarity ratings of braille letters depend more on intake-independent features (e.g., dot arrangements or linguistic similarities such as pronunciation) than on intake-dependent features (e.g., similarities in stimulation location on the finger). Future behavioral assessments could ask participants to assess similarity separately based on only stimulation location or linguistic features. We would predict that similarity ratings based on stimulation location relate to sensory representations, while similarity ratings based on linguistic features relate to perceptual representations of braille letters.
Concerning temporal dynamics, our results reveal that sensory representations of braille letters are relevant for behavior earlier than perceptual ones. Interestingly, the similarity between perceptual braille letter representations and behavioral similarity ratings emerges in a time window in which transcranial magnetic stimulation (TMS) over the VWFA affects braille letter reading in sighted braille readers 77. This implies that around 320-420ms after the onset of reading braille, visually deprived and sighted brains utilize braille letter representations for performing tasks such as letter identification. Applying a comparable TMS protocol not only to sighted but also non-sighted braille readers would elucidate whether this time window of behavioral relevance can be generalized to braille reading, independent of visual experience.
4.4 Conclusions
Our investigation of experience-based plasticity at its boundaries due to the loss of the visual modality reveals a nuanced picture of its potential and limits. On the one hand, our findings emphasize how plastic the brain is by showing that regions typically processing visual information adapt to represent perceptual braille letter information. On the other hand, our findings illustrate inherent limits of brain plasticity. Brain areas represent information from atypical inputs within the boundaries of their pre-existing computational capacity, and the progression from sensory to perceptual transformations adheres to a common temporal scheme and functional role.
5. Methods
5.1 Participants
We conducted three separate experiments with partially overlapping participants: an fMRI, an EEG, and a behavioral experiment. All experiments were approved by the ethics committee of the Department of Education and Psychology of the Freie Universität Berlin and were conducted in accordance with the Declaration of Helsinki. Sixteen participants completed the fMRI experiment; one was excluded due to technical problems during the recording, leaving a total of 15 participants in the fMRI experiment (mean age 39 years, SD = 10, 9 female). Eleven participants completed the EEG experiment (N = 11, mean age 44 years, SD = 10, 8 female). The participant pools of the EEG and fMRI experiments overlapped by five participants. Out of a total of 21 participants, 19 participants (excluding one fMRI and one EEG participant) completed an additional behavioral task in which they rated the perceived similarity of braille letter pairs. All participants were blind from birth or early childhood (≤ 3 years; for details see Supplementary Table 1). All participants provided informed consent prior to the studies and received a monetary reward for their participation.
5.2 Experimental stimuli and design
In all experiments, we presented braille letters (B, C, D, L, M, N, V, Z; Figure 2a) to the left and right index fingers of participants using piezo-actuated refreshable braille cells (https://metec-ag.de/index.php) with two modules of 8 pins each. We only used the top 6 pins from each module to present letters from the braille alphabet. The modules were taped to the clothes of a participant for the fMRI experiment and on the table for the EEG and behavioral experiment. This way, participants could read in a comfortable position with their index fingers resting on the braille cells to avoid motion confounds. We instructed participants to read letters regardless of whether the pins stimulated their right or left index finger. We presented all eight letters to both hands, resulting in 16 experimental conditions (8 letters × 2 hands). In addition, two braille letters (E, O) were included as catch stimuli and participants were instructed to respond to them by pressing a button (fMRI) or pedal (EEG) with their foot. Catch trials were excluded from further analysis due to confounding motor and sensory signals.
5.3 Experimental procedures
5.3.1 fMRI experiment
The fMRI experiment consisted of two sessions. Fifteen participants completed the first fMRI session, during which we recorded a structural image (∼4 min), a localizer run (7 min) and 10 runs of the main experiment (56 min). The total duration of the first session was 67 minutes excluding breaks. Eight of these 15 participants completed a second fMRI session, in which we recorded an additional 15 runs of the main experiment (85 min). We did not record any structural images or localizer runs in the second session, resulting in a total duration of 85 minutes excluding breaks.
5.3.1.1 fMRI main experiment
During the fMRI main experiment, we presented participants with letters on braille cells and asked them to respond to occasionally appearing catch letters. We presented letters for 500ms, with a 2,500ms inter-stimulus-interval (ISI; see Figure 2b top). Each regular trial, belonging to one of the 16 experimental conditions, was repeated 5 times per run (run duration: 337 s) in random order. Regular trials were interspersed every ∼20 trials with a catch trial, such that a catch trial occurred about once per minute. In addition, every 3rd to 5th trial (equally probable) was a null trial during which no stimulation was given. In total, one run consisted of 80 regular trials, 5 catch trials and 22 null trials, amounting to 107 trials per run.
To ensure that participants were able to read letters with both hands and understood the task instructions, participants first completed an experimental run outside the scanner.
5.3.1.2 fMRI localizer experiment
To define regions-of-interest (ROIs), we performed a separate localizer experiment prior to the main fMRI experiment with tactile stimuli in four experimental conditions: braille letters read with the left hand, braille letters read with the right hand, fake letters read with the left hand and fake letters read with the right hand. The letters presented in the braille conditions were 16 letters from the alphabet excluding the letters used in the main experiment. The stimuli in the fake letter conditions were 16 tactile stimuli that were each composed of 8 dots, deviating from the standard 6-dot configuration in the braille alphabet.
The localizer experiment consisted of a single run lasting 432 s, comprising 5 blocks each of braille letters left, braille letters right, fake letters left and fake letters right, plus blank blocks as baseline. Each stimulation block was 14.4 s long, consisting of 18 letter presentations (500ms on, 300ms off), including two one-back repetitions to which participants were instructed to respond by pressing a button with their foot. We presented stimulation blocks in random order and regularly interspersed them with blank blocks.
5.3.2 EEG experiment
The EEG experiment consisted of two sessions. Eleven participants completed the first session, and 8 of these participants completed a second session. The total duration of each session was 59 minutes excluding breaks.
The experimental setup was similar to that for fMRI but adapted to the specifics of EEG. We presented braille letters for 500ms with a 500ms ISI on regular trials. In catch trials, the letters were presented for 500ms with a 1,100ms ISI to avoid contamination of subsequent trials by movement (see Figure 2b bottom). Each of the 16 experimental conditions was presented 170 times per session. Regular trials were interspersed every 5th to 7th trial (equally probable) with a catch trial. In total, one EEG recording session consisted of 2,720 regular trials and 541 catch trials, amounting to 3,261 trials in total. Two participants completed additional trials due to technical problems, leading to totals of 190 and 180 repetitions per stimulus, respectively.
Prior to the experiment, participants completed a short screening task during which each letter of the alphabet was presented for 500ms to each hand in random order. Participants were asked to verbally report the letter they had perceived to assess their reading capabilities with both hands using the same presentation time as in the experiment. The average performance for the left hand was 89% correct (SD = 10) and for the right hand it was 88% correct (SD = 13).
5.3.3 Behavioral letter similarity ratings
In a separate behavioral experiment, participants judged the perceived similarity of the braille letters used in the neuroimaging experiments. For this task, participants sat at a desk and were presented with two braille cells next to each other. Each pair of letters was presented once, and participants compared them with the same finger. Ratings were given without time constraints, meaning participants decided when they rated the stimuli. Participants were asked to verbally rate the similarity of each pair of braille letters on a scale from 1 = very similar to 7 = very different, and the experimenter noted down their responses.
5.4 fMRI data acquisition, preprocessing and preparation
5.4.1 fMRI acquisition
We acquired MRI data on a 3-T Siemens Tim Trio scanner with a 12-channel head coil. We obtained structural images using a T1-weighted sequence (magnetization-prepared rapid gradient-echo, 1 mm³ voxel size). For the main experiment and the localizer run, we obtained functional images covering the entire brain using a T2*-weighted gradient-echo planar sequence (TR = 2 s, TE = 30 ms, flip angle = 70°, 3 mm³ voxel size, 37 slices, FOV = 192 mm, matrix size = 64×64, interleaved acquisition).
5.4.2 fMRI preprocessing
For fMRI preprocessing, we used tools from FMRIB’s Software Library (FSL, www.fmrib.ox.ac.uk/fsl). We excluded non-brain tissue from analysis using the Brain Extraction Tool (BET) 78 and motion-corrected the data using MCFLIRT 79. We did not apply high-pass or low-pass temporal filters. We spatially smoothed fMRI localizer data with an 8mm FWHM Gaussian kernel. We registered functional images to the high-resolution structural scans and to the MNI standard template using FLIRT 80. We carried out all further fMRI analyses in MATLAB R2021a (www.mathworks.com).
5.4.3 Univariate fMRI analysis
For all univariate fMRI analyses, we used SPM12 (http://www.fil.ion.ucl.ac.uk/spm/). For the main experiment, we modelled the fMRI responses to the 16 experimental conditions for each run using a general linear model (GLM). The onsets and durations of each stimulus presentation entered the GLM as regressors and were convolved with a hemodynamic response function (HRF). Six movement parameters (pitch, yaw, roll, x-, y-, z-translation) entered the GLM as nuisance regressors. For each of the 16 conditions, we converted GLM parameter estimates into t-values by contrasting each parameter estimate against the implicit baseline. This resulted in 16 condition-specific t-value maps per run and participant.
For the localizer experiment, we modelled the fMRI responses to the 5 experimental conditions, entering block onsets and durations, convolved with the HRF, as regressors of interest, and movement parameters as nuisance regressors. From the resulting parameter estimates we generated two contrasts. The first contrast served to localize activations in primary (S1) and secondary (S2) somatosensory cortex and was defined as letters & fake letters > baseline. The second contrast served to localize activations in early visual cortex (EVC), V4, lateral occipital complex (LOC), letter form area (LFA), visual word form area (VWFA), anterior intra-parietal sulcus (aIPS), posterior intra-parietal sulcus (pIPS) and insula and was defined as letters > fake letters. In sum, this resulted in two t-value maps for the localizer run per participant.
5.4.4 Definition of regions-of-interest
To identify regions along the sighted reading and tactile processing pathways, we defined regions of interest (ROIs) in a two-step procedure. We first constrained ROIs by anatomical masks using brain atlases, in each case combining regions across both hemispheres. We included 5 ROIs from the sighted reading pathway: EVC (merging the anatomical masks of V1, V2 and V3), V4, LOC, LFA and the VWFA. We also included 5 ROIs from the tactile processing pathway: S1, S2, aIPS (merging the anatomical masks of IPS3, IPS4 and IPS5), pIPS (merging the anatomical masks of IPS0, IPS1 and IPS2) and the insula. For EVC, V4, LOC, aIPS and pIPS, we used masks from the probabilistic Wang atlas 81. For LFA, we defined the mask using the MarsBaR Toolbox (https://marsbar-toolbox.github.io/) with a 10 mm radius around the center voxel at MNI coordinates x = −40, y = −78, z = −18 27. We also defined the VWFA mask using the MarsBaR Toolbox with a 10 mm radius around the center voxel at MNI coordinates x = −44, y = −57, z = −13 25, converted from Talairach to MNI space using the MNI<->Talairach Tool (https://bioimagesuiteweb.github.io/bisweb-manual/tools/mni2tal.html). We created the mask for S1 by merging the sub-masks of BA1, BA2 and BA3 from the WFU PickAtlas (https://www.nitrc.org/projects/wfu_pickatlas/) and the mask for S2 by merging the sub-masks operculum 1-4 from the Anatomy Toolbox 82. Lastly, we extracted the mask for the insula from the WFU PickAtlas. The smallest mask included 321 voxels. Therefore, in a second step, we selected the 321 most activated voxels of the participant-specific localizer results within each of the masks, using the letters & fake letters > baseline contrast for S1 and S2 and the letters > fake letters contrast for the remaining ROIs. This yielded participant-specific definitions for all ROIs.
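To make the second step concrete, the following minimal MATLAB sketch illustrates the voxel selection; the variables tmap (a participant-specific localizer t-value volume) and mask (a binary anatomical mask of the same size) are hypothetical stand-ins, not the published analysis code.

```matlab
% Select the 321 most activated localizer voxels within an anatomical mask.
nVoxels = 321;                          % size of the smallest mask
maskIdx = find(mask);                   % linear indices of mask voxels
[~, order] = sort(tmap(maskIdx), 'descend');
roiIdx = maskIdx(order(1:nVoxels));     % most activated voxels in the mask
roiMask = false(size(mask));
roiMask(roiIdx) = true;                 % final participant-specific ROI
```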
5.5 EEG data acquisition and preprocessing
We recorded EEG data using an EASYCAP 64-channel system and a Brainvision actiCHamp amplifier at a sampling rate of 1,000 Hz. The electrodes were placed according to the standard 10-10 system. The data was filtered online between 0.03 and 100 Hz and re-referenced online to FCz.
We preprocessed data offline using the EEGLAB toolbox version 14 83. We applied a low-pass filter with a cut-off of 50 Hz and epoched trials between -100 ms and 999 ms with respect to stimulus onset, resulting in 1,100 data points (at 1 ms resolution) per epoch. We baseline-corrected the epochs by subtracting the mean of the 100ms prestimulus time window from the epoch. We re-referenced the data offline to the average reference. To clean the data of artifacts such as eye blinks, eye movements and muscular contractions, we used independent component analysis as implemented in the EEGLAB toolbox. We used SASICA 84 to guide the visual inspection of components for removal. We identified components related to horizontal eye movements using two lateral frontal electrodes (F7-F8). During five recordings (one participant's first session, four participants' second sessions), additional external electrodes were available that allowed for the direct recording of the horizontal electro-oculogram to identify and remove components related to horizontal eye movements. For blink artifact detection based on the vertical electro-oculogram, we used two frontal electrodes (Fp1 and Fp2). As a final step, we applied multivariate noise normalization to improve the signal-to-noise ratio (SNR) and reliability of the data 39, resulting in subject-specific trial-based time courses of electrode activity.
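As an illustration of the final normalization step, the MATLAB sketch below shows one simple variant of multivariate noise normalization, assuming epochs is a [trials × channels × time] array (an illustrative name). Published pipelines 39 typically estimate the covariance with shrinkage and within conditions; the plain empirical covariance is used here only for brevity.

```matlab
% Simple multivariate noise normalization sketch (illustrative only):
% average the channel covariance over time points, then whiten the data.
[nTrials, nChan, nTime] = size(epochs);
sigma = zeros(nChan);
for t = 1:nTime
    X = squeeze(epochs(:, :, t));     % trials x channels at time t
    sigma = sigma + cov(X) / nTime;   % running average of covariance
end
W = inv(sqrtm(sigma));                % inverse square root = whitening matrix
for t = 1:nTime
    epochs(:, :, t) = squeeze(epochs(:, :, t)) * W;
end
```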
5.6 Braille letter classification from brain measurements
To determine the amount of information about braille letter identity present in brain measurements, we used a multivariate classification scheme 12–15. We conducted subject-specific braille letter classification in two ways. First, we classified between letter pairs presented to one reading hand, i.e., we trained and tested a classifier on brain data recorded during the presentation of braille stimuli to the same hand (either the right or the left hand). This yields a measure of hand-dependent braille letter information in neural measurements. We refer to this analysis as within-hand classification. Second, we classified between letter pairs presented to different hands in that we trained a classifier on brain data recorded during the presentation of stimuli to one hand (e.g., right), and tested it on data related to the other hand (e.g., left). This yields a measure of hand-independent braille letter information in neural measurements. We refer to this analysis as across-hand classification.
All classification analyses were carried out in MATLAB R2021a (www.mathworks.com) and relied on binary C-support vector classification (C-SVC) with a linear kernel as implemented in the libsvm toolbox 85 (https://www.csie.ntu.edu.tw/cjlin/libsvm). Furthermore, all analyses were conducted in a participant-specific manner. The next sections describe the multivariate fMRI and EEG analyses in more detail.
5.6.1 Spatially resolved multivariate fMRI analysis
We conducted both an ROI-based and a spatially unbiased volumetric searchlight procedure 86. For each ROI included in the ROI-based analysis, we extracted and arranged t-values into pattern vectors for each of the 16 conditions and experimental runs. If participants completed only one session, the analysis was conducted on 10 runs. If participants completed both sessions, the 10 runs from session 1 and the 15 runs from session 2 were pooled and the analysis was conducted across 25 runs. To increase the SNR, we randomly assigned run-wise pattern vectors into bins and averaged them into pseudo-runs. For participants with one session, the bin size was 2 runs, resulting in 5 pseudo-runs. If participants completed 2 sessions and thus had 25 runs, the bin size was 5 runs, resulting in 5 pseudo-runs. Thus, in both cases, each participant ended up with five pseudo-run pattern vectors that entered the classification analysis. We then performed 5-fold leave-one-pseudo-run-out cross-validation, training on 4 pseudo-runs and testing on the remaining pseudo-run per classification iteration.
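A minimal MATLAB sketch of the binning step, assuming runPatterns is a [runs × voxels] array of t-value patterns for one condition (an illustrative name, not the authors' code):

```matlab
% Average run-wise patterns into 5 pseudo-runs (bin size 2 or 5,
% depending on whether 10 or 25 runs are available).
nBins   = 5;
nRuns   = size(runPatterns, 1);
binSize = nRuns / nBins;
shuffled   = runPatterns(randperm(nRuns), :);     % random bin assignment
pseudoRuns = zeros(nBins, size(runPatterns, 2));
for b = 1:nBins
    rows = (b - 1) * binSize + (1:binSize);
    pseudoRuns(b, :) = mean(shuffled(rows, :), 1); % average runs in the bin
end
```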
In the following, we first describe the classification procedure for braille letters within-hand and then across-hand.
5.6.2 fMRI ROI-based classification of braille letters within-hand
For the within-hand classification of braille letters, we assigned four of the five pseudo-runs for each of two braille letters presented to the same hand (e.g., right) to the training set. We then tested the SVM on the remaining fifth pseudo-runs, i.e., held-out data from the same two braille letters presented to the same hand as in the training set. This yielded percent classification accuracy (50% chance level) as output. Equivalent SVM training and testing was repeated for all combinations of letter pairs within each hand.
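The core train-test step of one such fold can be sketched in MATLAB as follows. The study used libsvm's C-SVC; the sketch substitutes MATLAB's built-in fitcsvm (Statistics and Machine Learning Toolbox) for brevity. trainA/trainB hold the four training pseudo-run patterns per letter and testA/testB the held-out fifth pseudo-runs; all names are illustrative.

```matlab
% One pairwise fold: two letters, linear SVM, 50% chance level.
% Illustrative re-implementation; the original analysis used libsvm.
labels = [ones(4, 1); 2 * ones(4, 1)];       % letter 1 vs. letter 2
mdl    = fitcsvm([trainA; trainB], labels, 'KernelFunction', 'linear');
pred   = predict(mdl, [testA; testB]);       % classify held-out patterns
acc    = 100 * mean(pred == [1; 2]);         % percent classification accuracy
```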
With 8 letters, each classified pairwise once per hand, this resulted in 28 pairwise classification accuracies per hand. We averaged accuracies across condition pairs and hands, yielding a measure of hand-dependent braille letter information for each ROI and participant separately.
5.6.3 fMRI ROI-based classification of braille letters across-hand
The classification procedure for braille letters across reading hands was identical to the within-hand procedure, with the important difference that the training data always came from one hand (e.g., right) and the testing data from the other hand (e.g., left).
With 8 letters, each classified pairwise once across the two hands, this resulted again in 28 pairwise classification accuracies per training-testing direction (i.e., train left, test right and vice versa). We averaged accuracies across condition pairs and training-testing directions, yielding a measure of hand-independent braille letter information for each ROI and participant separately.
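Schematically, the across-hand analysis differs from the within-hand analysis only in how the training and testing sets are drawn; in the sketch below, classifyPair is a hypothetical helper wrapping the SVM fold sketched above, and the variable names are illustrative.

```matlab
% Train on one hand, test on the other, in both directions, then average.
accLR = classifyPair(leftPseudoRuns,  rightPseudoRuns); % train left, test right
accRL = classifyPair(rightPseudoRuns, leftPseudoRuns);  % train right, test left
accAcross = (accLR + accRL) / 2;    % hand-independent letter information
```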
5.6.4 fMRI searchlight classification of braille letters
The searchlight procedure was conceptually equivalent to the ROI-based analysis. For each voxel vi in the 3D t-value maps, we defined a sphere with a radius of 4 voxels centered on voxel vi. For each condition and run, we extracted and arranged the t-values of each voxel in the sphere into pattern vectors. Classification of braille letters across-hand proceeded as described above. This resulted in one average classification accuracy for voxel vi. Iterating across all voxels yielded a 3D volume of classification accuracies across the brain for each participant separately.
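The sphere definition can be sketched as follows, assuming dims holds the volume dimensions and center the [x y z] subscripts of voxel vi (illustrative names, not the original code):

```matlab
% Collect all voxels within a 4-voxel radius around the center voxel.
radius = 4;
[x, y, z] = ndgrid(1:dims(1), 1:dims(2), 1:dims(3));
dist = sqrt((x - center(1)).^2 + (y - center(2)).^2 + (z - center(3)).^2);
sphereIdx = find(dist <= radius);   % linear indices entering the pattern vector
% t-values at sphereIdx are arranged into pattern vectors per condition
% and run, then classified exactly as in the ROI analysis.
```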
5.6.5 Time-resolved classification of braille letters within-hand from EEG data
To determine the timing with which braille letter information emerges in the brain, we conducted time-resolved EEG classification 15,33. This procedure was conceptually equivalent to the fMRI braille letter classification in that it classified letter pairs either within or across-hand and was conducted separately for each participant.
For each time point of the epoched EEG data, we extracted 63 EEG channel activations and arranged them into pattern vectors for each of the 16 conditions. Participants who completed one session had 170 trials per condition and participants who completed two sessions had 340 trials per condition. To increase the SNR, we randomly assigned the trials into bins and averaged them into new pseudo-trials. For participants with one session, the bin size was 34 trials, resulting in 5 pseudo-trials. If participants completed 2 sessions and thus had 340 trials, the bin size was 68 trials, resulting in 5 pseudo-trials. In both cases, each participant ended up with five pseudo-trial pattern vectors that entered the classification analysis. We then performed 5-fold leave-one-pseudo-trial-out cross-validation, training on 4 and testing on 1 pseudo-trial per classification iteration. This procedure was repeated 100 times with random assignment of trials to pseudo-trials, and across all combinations of letter pairs and hands. We averaged results across condition pairs, folds, iterations and hands, yielding a decoding accuracy time course reflecting how much hand-dependent braille letter information was present at each time point in each participant.
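In condensed form, the time-resolved procedure for one letter pair and hand might look as sketched below; makePseudoTrials and classifyFold are hypothetical helpers for the binning and cross-validated SVM steps described above, and epochsA/epochsB are [trials × 63 channels × time] arrays (illustrative names).

```matlab
nIter = 100;                          % repetitions with random binning
nTime = size(epochsA, 3);
acc   = zeros(nIter, nTime);
for it = 1:nIter
    pA = makePseudoTrials(epochsA, 5);    % 5 pseudo-trials per letter
    pB = makePseudoTrials(epochsB, 5);
    for t = 1:nTime                       % classify at each time point
        acc(it, t) = classifyFold(pA(:, :, t), pB(:, :, t));
    end
end
accTimecourse = mean(acc, 1);         % decoding accuracy over time
```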
5.6.6 Time-resolved classification of braille letters across-hand from EEG data
The classification procedure for braille letters across-hand was identical to the classification of braille letters within-hand with the crucial difference that training and testing data always came from separate hands and results were averaged across condition pairs, folds, iterations and training-testing directions. Averaging results yielded a decoding accuracy time course reflecting how much hand-independent braille letter information was present at each time point in each participant.
5.6.7 Time-resolved EEG searchlight in sensor space
We conducted an EEG searchlight analysis resolved in time and sensor space (i.e., across 63 EEG channels) to gain insight into which EEG channels contributed to the results of the time-resolved analysis described above. For the EEG searchlight, we conducted the time-resolved EEG classification as described above, with the following difference: for each EEG channel ci, we conducted the classification procedure on the four closest channels surrounding ci. The classification accuracy was stored at the position of ci. After iterating across all channels and down-sampling the time points to a 10ms resolution, this yielded a classification accuracy map across all channels and time points in 10ms steps for each participant.
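A sketch of the neighborhood construction, assuming chanPos is a 63 × 3 matrix of electrode coordinates and classifyChannels a hypothetical helper that runs the time-resolved classification on a channel subset (all names illustrative):

```matlab
% For each channel, classify on its four nearest neighbors and store the
% resulting accuracy time course at that channel's position.
nTimePoints = 100;                         % e.g., down-sampled 10ms steps
accMap = zeros(63, nTimePoints);
for c = 1:63
    d = vecnorm(chanPos - chanPos(c, :), 2, 2);  % distances to channel c
    [~, order] = sort(d);
    neighbors = order(2:5);                % four closest channels, self excluded
    accMap(c, :) = classifyChannels(neighbors);
end
```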
5.7 Representational similarity analysis of brain data and behavioral letter similarity ratings
To determine which of the identified neural braille letter representations are relevant for behavior 16–19, we compared perceived letter similarity ratings to braille letter representations identified from EEG and fMRI signals using representational similarity analysis (RSA) 38. RSA characterizes the representational space of a measurement modality (e.g., fMRI or EEG data) with a representational dissimilarity matrix (RDM). RDMs aggregate pairwise distances between responses to all experimental conditions, thereby abstracting from the activity patterns of measurement units (e.g., fMRI voxels or EEG channels) to between-condition dissimilarities. The rationale of the approach is that neural measures and behavior are linked if their RDMs are similar.
We constructed RDMs for behavior, fMRI and EEG as follows.
For behavior, we arranged the perceived similarity judgments, averaged across participants (rated on a scale from 1 = very similar to 7 = very different), into RDM format. All RDMs were averaged over both hands and had dimensions of 8 letters × 8 letters.
For both fMRI and EEG, we used the classification results from the within-hand and across-hand classifications as a measure of (dis-)similarity relations between braille letters. Classification accuracies can be interpreted as a measure of dissimilarity because two conditions yield a higher classification accuracy when they are more dissimilar 32,39. Thus, we assembled participant-specific RDMs from decoding accuracies for each fMRI ROI and each EEG time point.
In a final step we correlated (Spearman’s R) the lower triangular part of the respective RDMs (without the diagonal 84). For fMRI, this resulted in one correlation value per participant and ROI. For EEG, this analysis resulted in one correlation time course per participant.
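The final comparison reduces to a rank correlation between vectorized RDMs; in MATLAB, with behavRDM and neuralRDM as symmetric 8 × 8 dissimilarity matrices (illustrative names), a minimal sketch is:

```matlab
% Correlate the lower triangles (excluding the diagonal) of two RDMs.
lt  = tril(true(8), -1);                 % logical mask of the lower triangle
rho = corr(neuralRDM(lt), behavRDM(lt), 'Type', 'Spearman');
```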
5.8 Statistical testing
5.8.1 Wilcoxon signed-rank test
We performed non-parametric one-tailed Wilcoxon signed-rank tests to test for above-chance classification accuracy for ROIs in the fMRI classification, for time points in the EEG classification, for time points and channels in the EEG searchlight, and for ROIs and time points in the RSA. In each case, the null hypothesis was that the observed parameter (i.e., classification accuracy or correlation) came from a distribution with a median at chance level (i.e., 50% for pairwise classification; 0 for correlations). The resulting P-values were corrected for multiple comparisons using the false discovery rate (FDR) 87 at the 5% level whenever more than one test was conducted. This was done a) across ROIs in the ROI classification and fMRI-behavior RSA, b) across time points in the EEG classification and EEG-behavior RSA, and c) across time points and channels in the EEG searchlight.
5.8.2 Bootstrap tests
We used bootstrapping to compute 95% confidence intervals for onset latencies (the first of 50 consecutive significant time points after trial onset) of EEG time courses, as well as to determine the significance of onset latency differences. In each case, we sampled the participant pool 1,000 times with replacement and calculated the statistic of interest for each sample.
For the EEG onset latency differences, we bootstrapped the latency difference between the onsets of the time courses of hand-dependent and hand-independent letter representations. This yielded an empirical distribution that could be compared against zero. To determine whether onset latency differences in the EEG time courses were significantly different from zero, we computed the proportion of values that were equal to or smaller than zero and corrected the resulting P-values for multiple comparisons using FDR at P = 0.05.
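A minimal sketch of the bootstrap, where onsetDiff is a hypothetical helper that recomputes the onset-latency difference (first of 50 consecutive significant time points) for a resampled set of participants:

```matlab
nBoot = 1000;  nSubs = 11;
stat  = zeros(nBoot, 1);
for b = 1:nBoot
    sample  = randsample(nSubs, nSubs, true);  % resample participants
    stat(b) = onsetDiff(sample);               % statistic for this sample
end
ci   = prctile(stat, [2.5 97.5]);   % 95% confidence interval
pval = mean(stat <= 0);             % proportion of values at or below zero
```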
5.8.3 Other statistical tests
For the fMRI searchlight classification results, we applied a voxel-wise height threshold of P = 0.001. The resulting P-values were corrected for multiple comparisons using the family-wise error (FWE) rate at the 5% level.
Acknowledgements
We thank all of our participants for taking part in our experiments. We also thank Agnessa Karapetian, Johannes Singer and Siying Xie for their valuable comments on the manuscript. We acquired EEG and fMRI data at the Center for Cognitive Neuroscience (CCNB), Freie Universität Berlin, and we thank the HPC Service of ZEDAT, Freie Universität Berlin, for computing time 1.
5.9 Data availability
The raw fMRI and EEG data are available on OpenNeuro via https://openneuro.org/datasets/ds004956 and https://openneuro.org/datasets/ds004951/. The preprocessed fMRI, EEG and behavioral data as well as the results of the ROI classification, time classification and RSA can be accessed on OSF via https://osf.io/a64hp/.
5.10 Code availability
The code used in this study is available on Github via https://github.com/marleenhaupt/BrailleLetterRepresentations/.
References
- 1. Curta: A General-purpose High-Performance Computer at ZEDAT. Freie Universität Berlin
- 2. The plastic human brain cortex. Annu. Rev. Neurosci. 28:377–401
- 3. The sensory-deprived brain as a unique tool to understand brain development and function. Neurosci. Biobehav. Rev. 108:78–82
- 4. Evidence from Blindness for a Cognitively Pluripotent Cortex. Trends Cogn. Sci. 21:637–648
- 5. A multimodal language region in the ventral visual pathway. Nature 394:274–277
- 6. Recognition memory for Braille or spoken words: An fMRI study in early blind. Brain Res. 1438:22–34
- 7. Orthographic Priming in Braille Reading as Evidence for Task-specific Reorganization in the Ventral Visual Cortex of the Congenitally Blind. J. Cogn. Neurosci. 31:1065–1078
- 8. A ventral visual stream reading center independent of visual experience. Curr. Biol. 21:363–368
- 9. Neural networks for Braille reading by the blind. Brain 121:1213–1229
- 10. On the functionality of the visually deprived occipital cortex in early blind persons. Neurosci. Lett. 124:256–259
- 11. Activation of V1 by Braille reading in blind subjects. Nature 380:526–528
- 12. Probing principles of large-scale object representation: Category preference and location encoding. Hum. Brain Mapp. 34:1636–1651
- 13. Encoding the identity and location of objects in human LOC. Neuroimage 54:2297–2307
- 14. Spatial coding and invariance in object-selective cortex. Cortex 47:14–22
- 15. The dynamics of invariant object recognition in the human visual system. J. Neurophysiol. 111:91–102
- 16. The spatiotemporal neural dynamics underlying perceived similarity for real-world objects. Neuroimage 194:12–24
- 17. The temporal evolution of conceptual object representations revealed through models of behavior, semantics and deep neural networks. Neuroimage 178:172–182
- 18. Human object-similarity judgments reflect and transcend the primate-IT object representation. Front. Psychol. 4:1–22
- 19. Unique semantic space in the brain of each beholder predicts perceived similarity. Proc. Natl. Acad. Sci. U. S. A. 111:14565–14570
- 20. Somatosensory processes subserving perception and action. Behav. Brain Sci. 30:189–201
- 21. Untangling invariant object recognition. Trends Cogn. Sci. 11:333–341
- 22. How does the brain solve visual object recognition? Neuron 73:415–434
- 23. Separate visual pathways for perception and action. Trends Neurosci. 15:20–25
- 24. The neural code for written words: A proposal. Trends Cogn. Sci. 9:335–341
- 25. The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain 123:291–307
- 26. Unique, Shared, and Dominant Brain Activation in Visual Word Form Area and Lateral Occipital Complex during Reading and Picture Naming. Neuroscience 481:178–196
- 27. Sequential then interactive processing of letters and words in the left fusiform gyrus. Nat. Commun. 3:1–8
- 28. The visual word form area: Expertise for reading in the fusiform gyrus. Trends Cogn. Sci. 7:293–299
- 29. The unique role of the visual word form area in reading. Trends Cogn. Sci. 15:254–262
- 30. Structure and Function of the Human Insula. J. Clin. Neurophysiol. 34:300–306
- 31. Representational dynamics of object vision: The first 1000 ms. J. Vis. 13:1–19
- 32. Resolving human object recognition in space and time. Nat. Neurosci. 17:455–462
- 33. High temporal resolution decoding of object position and category. J. Vis. 11:1–17
- 34. Cochlea to categories: The spatiotemporal dynamics of semantic auditory representations. Cogn. Neuropsychol. 38:468–489. https://doi.org/10.1080/02643294.2022.2085085
- 35. Is neuroimaging measuring information in the brain? Psychon. Bull. Rev. 23:1415–1428
- 36. Category Selectivity in the Ventral Visual Pathway Confers Robustness to Clutter and Diverted Attention. Curr. Biol. 17:2067–2072
- 37. Only some spatial patterns of fMRI response are read out in task performance. Nat. Neurosci. 10:685–686
- 38. Representational similarity analysis - connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2:1–28
- 39. Multivariate pattern analysis for MEG: A comparison of dissimilarity measures. Neuroimage 173:434–447
- 40. Decoding mental states from brain activity in humans. Nat. Rev. Neurosci. 7:523–534
- 41. Decoding Natural Sounds in Early “Visual” Cortex of Congenitally Blind Individuals. Curr. Biol. 30:3039–3044
- 42. Categorical representation from sound and sight in the ventral occipito-temporal cortex of sighted and blind. eLife 9:1–33
- 43. Adaptive changes in early and late blind: A fMRI study of verb generation to heard nouns. J. Neurophysiol. 88:3359–3371
- 44. Dissociating cortical regions activated by semantic and phonological tasks: A fMRI study in blind and sighted people. J. Neurophysiol. 90:1965–1982
- 45. Speech processing activates visual cortex in congenitally blind humans. Eur. J. Neurosci. 16:930–936
- 46. Language processing in the occipital cortex of congenitally blind adults. Proc. Natl. Acad. Sci. U. S. A. 108:4429–4434
- 47. “Visual” cortex of congenitally blind adults responds to syntactic movement. J. Neurosci. 35:12859–12868
- 48. Ultra-fast speech comprehension in blind subjects engages primary visual cortex, fusiform gyrus, and pulvinar - a functional magnetic resonance imaging (fMRI) study. BMC Neurosci. 14
- 49. Language and nonverbal auditory processing in the occipital cortex of individuals who are congenitally blind due to anophthalmia. Neuropsychologia 173
- 50. Semantic coding in the occipital cortex of early blind individuals. bioRxiv 539437. https://doi.org/10.1101/539437
- 51. Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nat. Neurosci. 6:758–766
- 52. Visual Cortex Activity in Early and Late Blind People. J. Neurosci. 23:4005–4011
- 53. Spoken language processing activates the primary visual cortex. PLoS One 18:1–22
- 54. The metamodal organization of the brain. Prog. Brain Res. 134:427–445
- 55. Against cortical reorganisation. eLife 12
- 56. The large-scale organization of ‘visual’ streams emerges without visual experience. Cereb. Cortex 22:1698–1709
- 57. Reading Braille by Touch Recruits Posterior Parietal Cortex. J. Cogn. Neurosci. 35:1593–1616
- 58. Visuo-haptic object-related activation in the ventral visual pathway. Nat. Neurosci. 4:324–330
- 59. How does visual experience shape representations and transformations along the ventral stream? CCN Gener. Advers. Collab.
- 60. Within-digit functional parcellation of Brodmann areas of the human primary somatosensory cortex using functional magnetic resonance imaging at 7 tesla. J. Neurosci. 32:15815–15822
- 61. Neural systems for tactual memories. J. Neurophysiol. 75:1730–1737
- 62. Attending to and Remembering Tactile Stimuli. J. Clin. Neurophysiol. 17:575–591
- 63. Neural Substrates of Tactile Object Recognition: An fMRI Study. Hum. Brain Mapp. 21:236–246
- 64. The functional anatomy of visual-tactile integration in man: a study using positron emission tomography. Neuropsychologia 38:115–124
- 65. Cross-modal transfer of information between the tactile and the visual representations in the human brain: A positron emission tomographic study. J. Neurosci. 18:1072–1084
- 66. Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 42:1079–1087
- 67. The claustrum/insula region integrates conceptually related sounds and pictures. Neurosci. Lett. 422:77–80
- 68. How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction. Hum. Brain Mapp. 40:5056–5068
- 69. The neural dynamics of letter perception in blind and sighted readers. J. Vis. 15
- 70. A thalamocortical pathway for fast rerouting of tactile information to occipital cortex in congenital blindness. Nat. Commun. 10:1–9
- 71. Braille in the sighted: Teaching tactile reading to sighted adults. PLoS One 11:1–13
- 72. How many words do we read per minute? A review and meta-analysis of reading rate. J. Mem. Lang. 109
- 73. A large inclusive study of human listening rates. Conf. Hum. Factors Comput. Syst. Proc. 1–12
- 74. Auditory and auditory-tactile processing in congenitally blind humans. Hear. Res. 258:165–174
- 75. Event-related potentials during auditory and somatosensory discrimination in sighted and blind human subjects. Cogn. Brain Res. 4:77–93
- 76. Central Auditory Skills In Blind And Sighted Subjects. Scand. Audiol. 20:19–23
- 77. Functional hierarchy for tactile processing in the visual cortex of sighted adults. Neuroimage 202
- 78. Fast robust automated brain extraction. Hum. Brain Mapp. 17:143–155
- 79. Improved optimization for the robust and accurate linear registration and motion correction of brain images. Neuroimage 17:825–841
- 80. A global optimisation method for robust affine registration of brain images. Med. Image Anal. 5:143–156
- 81. Probabilistic maps of visual topography in human cortex. Cereb. Cortex 25:3911–3931
- 82. A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25:1325–1335
- 83. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134:9–21
- 84. A practical guide to the selection of independent components of the electroencephalogram for artifact correction. J. Neurosci. Methods 250:47–63
- 85. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2:1–27
- 86. Information-based functional brain mapping. Proc. Natl. Acad. Sci. 103:3863–3868
- 87. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple Testing. J. R. Stat. Soc. Ser. B 57:289–300
Article and author information
Copyright
© 2024, Haupt et al.
This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.