Introduction

The primary somatosensory cortex (S1) is integral to the encoding of tactile information, processing sensory inputs from various regions of the body (Delhaye et al., 2018; Serino, 2019; Di Plinio et al., 2020; Piras et al., 2020). This cortical area is crucial for our sense of touch, facilitating the perception and interpretation of sensations such as pressure, vibration, temperature, and pain (Bushnell et al., 1999; Luna et al., 2005; Moulton et al., 2012). These external stimuli are represented as neural activity in S1, enabling animals to differentiate among distinct sensory experiences (Koch and Fuster, 1989; Salinas et al., 2000; Goodwin and Wheat, 2004; Bensmaia et al., 2008). However, the role of S1 extends beyond the mere reception of tactile signals. The neural representations within S1 are characterized by dynamic and flexible properties, shaped not only by feedforward mechanisms but also by factors such as attention, motivation, and cross-modal influences (Driver and Spence, 2000; Eimer and Forster, 2003; Schürmann et al., 2004; Butler et al., 2012; Ziegler et al., 2023). Such plasticity allows for adaptive responses to an ever-changing environment, which is critical for survival (Dijkerman and de Haan, 2007; Abraira and Ginty, 2013). A key manifestation of this adaptability is cross-modal processing, wherein inputs from other sensory modalities, such as visual or auditory information, can modulate tactile perception (Hopkins et al., 2017; Nikbakht et al., 2018; Sugiyama et al., 2019; Bulusu and Lazar, 2024). These findings suggest that S1 is not exclusively dedicated to tactile processing, but instead integrates and adapts sensory information from multiple modalities.

An example of cross-modal processing between S1 and other sensory modalities is the enhancement of tactile abilities in the absence or reduction of visual input. Blind individuals, particularly those who have been blind from an early age, consistently outperform sighted individuals on tasks requiring tactile discrimination, such as two-point discrimination and texture identification (Van Boven et al., 2000; Goldreich and Kanics, 2003; Norman and Bartholomew, 2011; Wong et al., 2011). This phenomenon is attributed to the repurposing of cortical areas normally devoted to vision to enhance the processing of touch and other remaining modalities (Rauschecker et al., 1992; Pascual-Leone and Torres, 1993; Sadato et al., 1996; Burton, 2003; Sathian and Stilla, 2010). Research indicates that during critical developmental periods, the visual cortex undergoes functional reorganization, and regions ordinarily involved in visual processing are repurposed to process stimuli from other sensory modalities, including tactile input (Sadato et al., 1996; Burton, 2003; Karlen et al., 2006). This reorganization is not immediate but rather occurs over extended periods (Karlen et al., 2006; Piché et al., 2007). This capacity for tactile enhancement results from both structural and functional plasticity, underscoring the brain's remarkable ability to adapt and optimize sensory processing when one modality is lost or altered.

Interestingly, even sighted individuals can experience improvements in tactile performance when visual cues are removed or minimized, as evidenced by increased accuracy in tasks such as Braille reading under low-visibility conditions (Pascual-Leone and Hamilton, 2001; Kauffman et al., 2002; Facchini and Aglioti, 2003). These findings suggest that limiting or removing vision may trigger compensatory mechanisms that enhance the sense of touch, even in those with typical vision (Boroojerdi et al., 2000; Merabet et al., 2008; Bola et al., 2017). Much of the research on blindness focuses on the extended plasticity that occurs in response to sensory deprivation, such as the reorganization of somatosensory maps and the recruitment of visual regions for tactile processing; these changes typically unfold over long durations or during critical developmental periods. Thus, although the capacity to reorganize tactile representations in the absence of visual input is observed in both blind and sighted individuals, the mechanisms underlying long-term and short-term cross-modal processing are likely distinct: in blind individuals, they may involve profound, lasting changes in neural circuitry that accumulate over time, whereas in sighted individuals, the adaptations appear more rapid, reversible, and context-dependent.

While behavioral evidence clearly demonstrates enhanced tactile abilities in visually deprived individuals, the underlying neural mechanisms, particularly those operating over short timescales in sighted individuals, remain poorly understood. How the somatosensory cortex might reorganize to enhance tactile perception within minutes or hours of visual deprivation remains unknown. fMRI studies have shown changes in cortical activity in short-term blindfolded individuals (Merabet et al., 2008; Siuda-Krzywicka et al., 2016; Bola et al., 2017), but how the neural representation of specific tactile stimuli is altered remains unclear. This gap persists because tactile representations in S1 are distributed across high-dimensional neural population codes, making it difficult for conventional methods to detect the subtle, stimulus-specific features embedded in these signals (Romo and Salinas, 2001; Pei et al., 2011; Lieber and Bensmaia, 2019). It therefore remains unknown whether, and how, S1's population-level coding of tactile information shifts acutely when visual input is removed, and more refined techniques and models are needed to investigate the immediate neural mechanisms underlying this form of cross-modal plasticity.

To address this gap, we focus on adult rats as a model for studying short-term cross-modal influences in S1. Rodents have well-defined and accessible somatosensory maps, enabling precise electrode placement for invasive recordings. In this study, we use local field potentials (LFPs) recorded from S1, which capture network-level dynamics not always apparent in single-unit spiking data or other imaging modalities. LFPs provide a rich, high-dimensional signal that can reflect both synchronous and asynchronous activity across cortical populations. By leveraging a machine learning framework, we can decode subtle differences in these signals that might remain undetected by conventional amplitude-based analyses. Furthermore, to dissociate the effects of attention on the perception of somatosensory stimuli (Tamè and Holmes, 2016), we created a custom behavioral paradigm that delivers somatosensory stimuli to rats in a naturalistic manner while diverting the rats' attention away from the stimuli (Yamashiro et al., 2024). This custom-designed treadmill paradigm allows us to manipulate tactile (textured floors) and visual (light vs. dark) inputs independently, precisely controlling and isolating the contribution of visual cues to S1's tactile coding.

We hypothesized that visual deprivation acutely alters the neural representation of tactile stimuli in S1, making texture-specific signals more distinct in the absence of visual input. To test this hypothesis, we analyzed high-dimensional LFP data using a convolutional neural network, a type of deep learning model, to examine how S1 encodes different floor textures under light and dark conditions. Our results reveal subtle, yet systematic shifts in neural coding that emerge in the absence of visual input. By demonstrating this rapid reorganization of tactile representations, we provide new insights into the mechanisms of cross-modal processing at short timescales. These findings hold broader implications for sensory rehabilitation and brain–machine interfaces, suggesting that leveraging the dynamic interplay between sensory modalities could lead to improved outcomes for individuals with sensory impairments.

Materials and Methods

Animal ethics

Animal experiments were performed with the approval of the Animal Experiment Ethics Committee at the University of Tokyo (approval numbers: P29–7 and P4–15) and according to the University of Tokyo guidelines for the care and use of laboratory animals. These experimental protocols were carried out following the Fundamental Guidelines for the Proper Conduct of Animal Experiments and Related Activities of the Academic Research Institutions (Ministry of Education, Culture, Sports, Science and Technology, Notice No. 71 of 2006), the Standards for Breeding and Housing of and Pain Alleviation for Experimental Animals (Ministry of the Environment, Notice No. 88 of 2006) and the Guidelines on the Method of Animal Disposal (Prime Minister's Office, Notice No. 40 of 1995). While our experimental protocols have a mandate to humanely euthanize animals if they exhibit any signs of pain, prominent lethargy, or discomfort, such symptoms were not observed in any of the rats tested in this study. All efforts were made to minimize the animals' suffering.

Behavioral paradigm

To record LFPs during natural locomotion, we employed a custom-designed, disk-shaped treadmill with a diameter of 90 cm (Figure 1A). The treadmill’s running surface was divided into two halves, each featuring a distinct texture: one side was coated with coarse sandpaper (grain #80) and the other with fine sandpaper (grain #1000). In this setup, the rat was placed on the treadmill and secured with a fabric vest (Figure 1B).

Behavioral paradigm and limb movement assessment with concurrent LFP recordings.

(A) A diagram of the disk-shaped treadmill used in the experiment. One half of the disk is covered with #80 sandpaper, and the other half with #1000 sandpaper. (B) A frame from the video capturing a walking rat from a left-side perspective. (C) The motivation scheme. The rats were water-deprived prior to the experiments. A water port was coupled with the movement of the treadmill so that when the rat walked on the treadmill, the water would come out. This way, the rats were always motivated to walk during the whole session. (D) The experimental protocol, where each rat walked for 10 minutes in light (50 lx) and then for 10 minutes in darkness (0 lx). (E) An example trajectory of the elbow and wrist joints from one session, plotted with the shoulder joint fixed in the coordinate space. (F) A schematic illustrating the forelimb and hindlimb subregions of S1. (G) A custom 32-channel electrode array used to record LFPs from these subregions.

Prior to the experiment, rats were water-restricted to ensure motivation. A waterspout was positioned in front of each rat, and its output was synchronized with the treadmill’s movement (Figure 1C). Specifically, once the treadmill began to move, 30 µL of water was dispensed from the spout, encouraging the rat to continue walking to receive its water reward.

Each rat was deprived of water until its body weight reached 85% of its baseline weight, then it was trained to walk on the treadmill for four days (one hour per day), with 30 minutes in a light environment (50 lx) followed by 30 minutes in darkness (0 lx). After successful training, the rats were allowed free access to food and water to regain their original body weight prior to electrode implantation surgery. Following surgical recovery, the rats were again water-restricted and placed on the treadmill to assess cross-modal interactions between the visual and tactile systems. In a single session, LFP recordings were obtained for 10 minutes in the light environment trial, followed immediately by 10 minutes in the dark environment trial (Figure 1D). Recordings were performed once daily for 4–14 days.

During the trials, the forelimb trajectory and the onsets of forelimb contact with the floor were automatically identified using a previously developed deep-learning-based method (Figure 1E) (Yamashiro et al., 2024).

Animal preparation and surgical procedures

LFPs were recorded from eleven 9- to 10-week-old Long-Evans rats (Japan SLC, Shizuoka, Japan) using a custom-designed, 32-channel electrode assembly. This assembly, fabricated from nichrome wires (761500, A-M Systems, WA, USA), targeted the right somatosensory cortex (S1) regions corresponding to the forelimb and hindlimb representations (Figure 1F). Specifically, 18 and 14 electrodes were placed in the forelimb and hindlimb subregions, respectively (Figure 1G). Each electrode tip was platinum-coated to reduce its impedance below 200 kΩ using a nanoZ tester (Plexon, TX, USA).

At the start of the surgical procedure, each rat was anesthetized with 2–3% isoflurane gas. A square craniotomy (2–6 mm posterior and 1–5 mm lateral to bregma) was then created using a dental drill. The electrode assembly was gently lowered through the cranial window to a depth of approximately 1.5 mm beneath the dura, targeting layer IV of S1. Additionally, two stainless steel screws were implanted in the bone above the cerebellum to serve as ground and reference electrodes. The recording device and electrodes were secured to the skull using stainless steel screws and dental cement. Following the surgery, each rat was housed individually in a transparent Plexiglas cage with ad libitum access to food and water for one week to ensure proper recovery.

LFP recordings from S1

LFPs were referenced to ground, digitized at 30 kHz using the OpenEphys recording system (http://open-ephys.org) and an RHD 32-channel headstage (C3314, Intan Technologies, CA, USA), then downsampled to 2 kHz for subsequent analyses (Yamashiro et al., 2020). Concurrently, video footage was acquired at 60 Hz using a USB camera module (MCM-303NIR, Gazo, Niigata, Japan), capturing a lateral view of the rat. Each video frame was synchronized with the neural recordings via strobe signals.
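For concreteness, the downsampling step can be sketched as follows. This is a minimal illustration assuming the raw recording is held as a (channels × samples) NumPy array; the use of scipy.signal.decimate, and the two-stage factorization of the decimation factor, are our assumptions rather than the authors' documented pipeline.

```python
# Minimal sketch of the downsampling step (30 kHz -> 2 kHz), assuming
# `raw` is a (n_channels, n_samples) NumPy array. The choice of
# scipy.signal.decimate is illustrative, not taken from the authors' code.
import numpy as np
from scipy.signal import decimate

FS_RAW, FS_LFP = 30_000, 2_000   # acquisition and analysis rates (Hz)

def downsample_lfp(raw: np.ndarray) -> np.ndarray:
    """Anti-alias filter and decimate by a total factor of 15 (= 5 x 3),
    applied in two stages as recommended for large decimation factors."""
    x = decimate(raw, 5, axis=-1, zero_phase=True)
    return decimate(x, 3, axis=-1, zero_phase=True)
```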

Data analysis

Data were analyzed offline using custom scripts in Python 3. For box plots, the centerline shows the median, the box limits show the upper and lower quartiles, and the whiskers span the 10th to 90th percentiles. P < 0.05 was considered statistically significant. All statistical tests were two-sided.

LFP analysis

To assess condition-dependent differences in LFPs, the signals were aligned to the foot-strike onsets detected with the deep-learning-assisted method described above. The aligned LFPs were then categorized by trial condition (light vs. dark) and floor texture (smooth vs. rough). To quantify the amplitude of the foot-strike-evoked response, a mean trace of the aligned LFPs was calculated for each of the 32 channels under each condition.
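The alignment and averaging can be illustrated with a short sketch; the array names and the 400 ms half-window (mirroring the segments used later for the decoder) are illustrative assumptions.

```python
# Sketch of foot-strike alignment, assuming `lfp` is a (32, n_samples)
# array at 2 kHz and `onsets` holds detected foot-strike sample indices.
# All names here are illustrative.
import numpy as np

FS = 2_000
PRE = POST = int(0.4 * FS)       # 400 ms before and after each onset

def align_to_onsets(lfp: np.ndarray, onsets: np.ndarray) -> np.ndarray:
    """Return an (n_events, 32, PRE + POST) stack of aligned segments."""
    segs = [lfp[:, t - PRE:t + POST] for t in onsets
            if t - PRE >= 0 and t + POST <= lfp.shape[1]]
    return np.stack(segs)

# Mean event-related trace per channel for one condition, e.g. rough-dark:
#   mean_trace = align_to_onsets(lfp, onsets_rough_dark).mean(axis=0)
```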

A deep neural network for joint decoding of trial conditions and floor texture

A custom deep neural network (DNN) model was implemented in the PyTorch framework to predict both the trial condition (light vs. dark) and the floor texture (smooth vs. rough) from time-aligned, one-dimensional LFP segments. The model architecture was inspired by one-dimensional ResNet-like structures (He et al., 2015) and incorporated multiheaded outputs for the simultaneous prediction of two distinct variables.

The input to the model consisted of one-dimensional LFP signals, arranged as a tensor with multiple channels (e.g., 32 input channels) over time. The network began by splitting the input into two parallel convolutional pathways. The first pathway ("left" branch) applied sequential convolutional and pooling operations with relatively small kernel sizes and strides to incrementally reduce the dimensionality of the signal and extract fine-grained temporal features. Specifically, the model employed a two-stage convolutional process that first passed the input through a one-dimensional (1D) convolution layer with a kernel size of 7, stride of 2, and batch normalization, followed by max pooling and an additional convolution layer. Both convolutional layers in the left branch used ReLU nonlinearities to facilitate stable and efficient feature extraction.

In contrast, the second pathway (“right” branch) processed the input through a single 1D convolutional layer with a larger kernel size (e.g., 41) and a more aggressive stride (e.g., stride of 8). This pathway captured broader temporal contexts from the input signals. Similar to the left branch, the right branch output was batch-normalized and passed through a ReLU activation function. After these two parallel extractions, the outputs of the left and right branches were concatenated along the channel dimension, forming a combined feature representation that integrated both fine- and coarse-grained temporal information.

The concatenated output was then passed through a max pooling operation, followed by two residual layers that employed 1D convolutional blocks (ResidualBlock) to refine feature representations. These residual layers allowed the network to learn more complex feature hierarchies by facilitating the flow of gradients during training and improving convergence, while also maintaining temporal resolution appropriate for downstream decoding.

Subsequently, the processed features were passed through an average pooling layer to summarize temporal information into a low-dimensional feature vector. This vector was flattened into a one-dimensional representation and then fed into two separate fully connected “heads” (fc_head1 and fc_head2). Each head was a simple linear layer that provided a scalar output value. By concatenating these outputs, the final layer jointly predicted two target variables from the same underlying features.
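For concreteness, the architecture can be sketched in PyTorch as follows. The kernel sizes (7 and 41) and strides (2 and 8) follow the description above, whereas the channel counts, padding, and pooling sizes are illustrative assumptions, chosen so that the two branches produce equal-length outputs for concatenation and so that the flattened feature vector is 912-dimensional, matching the embedding analyzed in the Results.

```python
# A minimal, hedged sketch of the dual-branch, multiheaded CNN described
# above. Only the kernel sizes and strides are taken from the text; all
# other hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Standard 1D residual block (conv-BN-ReLU-conv-BN plus skip)."""
    def __init__(self, ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.body(x) + x)

class DualBranchNet(nn.Module):
    def __init__(self, in_ch: int = 32):
        super().__init__()
        # "Left" branch: fine-grained features via small kernels/strides.
        self.left = nn.Sequential(
            nn.Conv1d(in_ch, 24, 7, stride=2, padding=3),
            nn.BatchNorm1d(24), nn.ReLU(),
            nn.MaxPool1d(3, stride=2, padding=1),
            nn.Conv1d(24, 24, 3, stride=2, padding=1),
            nn.BatchNorm1d(24), nn.ReLU(),
        )
        # "Right" branch: coarse context via a large kernel and stride.
        self.right = nn.Sequential(
            nn.Conv1d(in_ch, 24, 41, stride=8, padding=20),
            nn.BatchNorm1d(24), nn.ReLU(),
        )
        self.pool = nn.MaxPool1d(2)
        self.res = nn.Sequential(ResidualBlock(48), ResidualBlock(48))
        self.avgpool = nn.AdaptiveAvgPool1d(19)   # 48 ch x 19 = 912 features
        self.fc_head1 = nn.Linear(912, 1)         # trial-condition logit
        self.fc_head2 = nn.Linear(912, 1)         # floor-texture logit

    def forward(self, x):                         # x: (batch, 32, time)
        z = torch.cat([self.left(x), self.right(x)], dim=1)
        z = self.res(self.pool(z))
        feat = self.avgpool(z).flatten(1)         # 912-dim embedding
        return torch.cat([self.fc_head1(feat), self.fc_head2(feat)], dim=1)
```

With an 8,000-sample input (800 ms at 10 kHz; see below), both branches yield 1,000 time points, which is what permits channel-wise concatenation in this sketch.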

In summary, our model combined parallel convolutional branches for initial feature extraction, residual layers for robust representation learning, and multiheaded outputs to facilitate joint prediction of trial conditions and floor textures. The model was implemented in Python using PyTorch, and all parameters were optimized via standard stochastic gradient–based methods. This architecture allowed efficient and robust decoding of environmental conditions from LFP signals in both time and frequency domains.

Training and evaluating the deep neural network

Of the 11 recorded rats, data from the first two rats were used to determine the optimal model architecture. Once the architecture was established, data from the remaining nine rats were used for training and evaluation. The raw LFP signals were downsampled to 10,000 Hz and segmented into 800 ms windows centered on the foot-strike onset (i.e., 400 ms before and 400 ms after). Each LFP segment was assigned two labels: one for the trial condition (light vs. dark) and one for the floor texture (smooth vs. rough).

For each rat, the dataset was shuffled, normalized, and then split into training (80%) and evaluation (20%) subsets. Model training was performed on the training subset for 80 epochs with a batch size of 128 and a learning rate of 10⁻⁶. Binary cross entropy with logits loss (BCEWithLogitsLoss) served as the objective function. These parameters were selected to ensure stable convergence and to reduce overfitting. After training, model performance was evaluated on the evaluation subset. The confusion matrix was calculated as the number of true positives, false positives, false negatives, and true negatives, aggregated across all predictions in the evaluation set.
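A minimal training-and-evaluation sketch under the stated hyperparameters is given below. The optimizer choice (Adam) is our assumption, the normalization step is omitted for brevity, and `DualBranchNet` refers to the illustrative model sketched above.

```python
# Sketch of training and evaluation with the stated hyperparameters
# (80/20 split, 80 epochs, batch size 128, learning rate 1e-6,
# BCEWithLogitsLoss). The Adam optimizer is an assumption.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_and_evaluate(X, y):   # X: (n, 32, 8000) float32, y: (n, 2) float32
    ds = TensorDataset(torch.as_tensor(X), torch.as_tensor(y))
    n_train = int(0.8 * len(ds))
    train_ds, eval_ds = random_split(ds, [n_train, len(ds) - n_train])
    loader = DataLoader(train_ds, batch_size=128, shuffle=True)

    model = DualBranchNet()                      # illustrative model above
    opt = torch.optim.Adam(model.parameters(), lr=1e-6)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    for _ in range(80):                          # 80 epochs
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()

    # Confusion-matrix entries on the held-out set, per output head.
    model.eval()
    xe, ye = next(iter(DataLoader(eval_ds, batch_size=len(eval_ds))))
    with torch.no_grad():
        pred = (model(xe) > 0).float()           # threshold the two logits
    for k, head in enumerate(("trial", "texture")):
        tp = int(((pred[:, k] == 1) & (ye[:, k] == 1)).sum())
        fp = int(((pred[:, k] == 1) & (ye[:, k] == 0)).sum())
        fn = int(((pred[:, k] == 0) & (ye[:, k] == 1)).sum())
        tn = int(((pred[:, k] == 0) & (ye[:, k] == 0)).sum())
        print(head, dict(TP=tp, FP=fp, FN=fn, TN=tn))
    return model
```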

Explainability analysis using occlusion and integrated gradients

To elucidate the internal decision-making processes of the trained deep neural network (DNN), we employed two established explainability techniques: occlusion and integrated gradients.

Occlusion: Occlusion analysis involves systematically masking specific input features to determine their relative contribution to the model’s output (Zeiler and Fergus, 2013). In the present study, one channel was selectively occluded at a time from the 32-channel LFP input and the sensitivity of each channel was calculated. Sensitivity was defined as the corresponding change in model performance when the specified channel was occluded. By conducting these analyses for each of the nine rats, channel-specific importance scores were obtained and subsequently normalized (z-scored) to facilitate cross-subject comparisons. Channels whose removal yielded a more pronounced decrease in model performance were considered more critical for accurate prediction.
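In code, the procedure amounts to masking one channel at a time and measuring the resulting drop in accuracy; in the sketch below, the zero-substitution baseline and the `accuracy` helper are assumptions for illustration.

```python
# Sketch of single-channel occlusion. `accuracy` is a hypothetical helper
# returning the model's accuracy on (X, y); zeroing the occluded channel
# is our assumed masking scheme.
import torch
from scipy.stats import zscore

def channel_sensitivity(model, X_eval, y_eval, accuracy):
    baseline = accuracy(model, X_eval, y_eval)
    drops = []
    for ch in range(X_eval.shape[1]):            # 32 channels
        occluded = X_eval.clone()
        occluded[:, ch, :] = 0.0                 # mask one channel
        drops.append(baseline - accuracy(model, occluded, y_eval))
    return zscore(drops)                         # z-scored importance
```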

Integrated gradients: Integrated gradients is an attribution method that quantifies the importance of each input feature by integrating the gradient of the model’s output with respect to the input, transitioning from a baseline input to the actual input (Sundararajan et al., 2017). This approach produces class activation maps, enabling the visualization of features most influential for the model’s output. Here, integrated gradients were applied to the LFP segments for each rat, and the resulting class activation maps were averaged across subjects and trial conditions. These maps allowed us to identify salient input regions associated with both trial conditions and floor texture.
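A self-contained sketch of the attribution computation is shown below; the zero baseline and 50 integration steps are assumptions (the Captum library provides an equivalent, more general implementation).

```python
# Riemann-sum approximation of integrated gradients for one output head
# (Sundararajan et al., 2017). Zero baseline and 50 steps are assumptions.
import torch

def integrated_gradients(model, x, head, steps=50):
    """Attribution map for input x of shape (1, 32, T) and head 0 or 1."""
    model.eval()                                  # freeze batch-norm stats
    baseline = torch.zeros_like(x)
    grads = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        out = model(xi)[:, head].sum()
        grads += torch.autograd.grad(out, xi)[0]
    return (x - baseline) * grads / steps         # same shape as x
```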

Results

Stable locomotion across light and dark conditions

To investigate how visual input influences tactile processing in S1, we devised an experimental paradigm in which tactile and visual inputs were independently manipulated. Rats were placed on a disk-shaped treadmill with two distinct sandpaper textures, and LFPs were recorded while the rats walked. Each rat walked for 10 minutes in a light environment (50 lx) and then for 10 minutes in total darkness (0 lx). To verify that locomotion was stable across floor textures and environmental conditions, gait parameters were extracted from the limb trajectories using deep-learning-based analysis (Figure 2A). From the trajectories, swing duration, stance duration, stride length, and foot-strike speed were extracted, and each parameter was calculated for each floor texture and environmental condition. Comparison across all conditions revealed that none of these metrics differed significantly between floor textures or environmental conditions, indicating that overall locomotion remained stable (Figure 2B-E, P > 0.05, one-way analysis of variance (ANOVA) followed by Tukey‒Kramer post hoc test, n = 149, 149, 107 and 107 trials for smooth-light, rough-light, smooth-dark, and rough-dark, respectively). Thus, any differences in neural activity under these conditions are unlikely to be driven by altered motor behavior.
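This statistical comparison can be sketched as follows, assuming each condition's per-trial values are collected in a dictionary; the function and variable names are illustrative.

```python
# Sketch of the gait-parameter statistics: one-way ANOVA followed by a
# Tukey(-Kramer) post hoc test. `groups` maps condition names (e.g.,
# "smooth-light") to 1D arrays of per-trial values.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_gait_parameter(groups: dict):
    F, p = f_oneway(*groups.values())            # omnibus test
    values = np.concatenate([np.asarray(v) for v in groups.values()])
    labels = np.concatenate([[k] * len(v) for k, v in groups.items()])
    return p, pairwise_tukeyhsd(values, labels)  # handles unequal n
```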

Comparison of gait parameters across textures and environmental conditions.

(A) Swing phase vs. stance phase, illustrated with video frames (left: swing, right: stance). (B) Normalized swing duration measured for each rat under different textures (smooth vs. rough) and environmental condition (light vs. dark). There were no significant differences among trial conditions. (C–E) Stance duration, stride length, and footstrike speed, respectively, under the same conditions as in B. None of these parameters differed significantly across texture types or lighting conditions.

Characteristic negative deflections in S1 LFP upon forelimb contact

We next analyzed LFPs from S1 using a custom 32-channel electrode array targeting the forelimb and hindlimb subregions (Figure 1F, G). LFP traces were first aligned to forelimb contacts with the disk surface. On inspection of a single LFP trace, no apparent event-related response could be observed (Figure 3A). However, when each LFP was aligned to forelimb contact with the floor, the average trace showed a clear response after the onset (Figure 3B). Across conditions, we observed a clear negative deflection in LFPs that was most pronounced in the forelimb subregion (Figure 3C). The amplitude of the response was significantly larger in the forelimb subregion than in the hindlimb subregion in all rats, consistent with the topographic specificity of S1 (Figure 3D, P = 1.89×10⁻³, t₁₀ = −2.2, paired t-test, n = 11 rats).

LFP recordings in rat S1 during walking.

(A) A single representative LFP trace aligned to a forelimb contact. (B) An example of an averaged LFP trace from one session, aligned to forelimb contact. (C) The electrode montage and averaged LFP at each electrode. Left: The electrode montage showing all 32 recording sites. Right: Averaged LFP signals aligned to forelimb contacts with the floor, shown for each electrode depicted in the left panel. (D) Comparison of amplitudes between hindlimb and forelimb subregions, aggregated across all 11 rats. P = 1.89×10⁻³, t₁₀ = −2.2, paired t-test, n = 11 rats. (E) Comparison of averaged amplitudes within the same trial for different floor textures and environmental conditions. P > 0.05, one-way analysis of variance (ANOVA) followed by Tukey‒Kramer post hoc test, n = 149, 149, 107 and 107 trials for smooth-light, rough-light, smooth-dark, and rough-dark, respectively. (F) Cumulative probability distributions of mean amplitude from each session, compared across different textures. P = 4.84×10⁻¹ and 8.35×10⁻¹, D = 1.03×10⁻¹ and 7.54×10⁻² for light and dark environments, respectively, two-sample Kolmogorov‒Smirnov test, n = 149 and 107 trials from 11 rats for light and dark, respectively. (G) Cumulative probability distributions of mean amplitude from each session, compared across light and dark environments. P = 1.05×10⁻¹ and 1.83×10⁻¹, D = 0.14 and 0.149 for smooth and rough textures, respectively, two-sample Kolmogorov‒Smirnov test, n = 149 and 107 trials from 11 rats for light and dark, respectively. Abbreviations: ERP, event-related potential; LFP, local field potential; S1, primary somatosensory cortex.

To determine whether the event-related responses were affected by floor texture or environmental condition, the amplitudes were compared. Despite the prominent negative deflection in LFPs at forelimb contact with the floor, analyses showed no substantial differences in signal amplitude across conditions (Figure 3E, P > 0.05, one-way analysis of variance (ANOVA) followed by Tukey‒Kramer post hoc test, n = 149, 149, 107 and 107 trials for smooth-light, rough-light, smooth-dark, and rough-dark, respectively), when comparing rough vs. smooth textures (Figure 3F, P = 0.48 and 0.84, D = 1.03×10⁻¹ and 7.54×10⁻² for light and dark environments, respectively, two-sample Kolmogorov‒Smirnov test, n = 149 and 107 trials from 11 rats for light and dark, respectively), and when comparing light vs. dark environments (Figure 3G, P = 0.11 and 0.18, D = 0.14 and 0.149 for smooth and rough textures, respectively, two-sample Kolmogorov‒Smirnov test, n = 149 and 107 trials from 11 rats for light and dark, respectively). These findings suggest that simple amplitude metrics may fail to capture more subtle or higher-dimensional differences in the underlying neuronal population activity.
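The distributional comparisons in Figure 3F, G correspond to two-sample Kolmogorov‒Smirnov tests, sketched below with illustrative variable names.

```python
# Sketch of the amplitude comparison, assuming `amp_a` and `amp_b` hold
# per-trial mean amplitudes for the two conditions being compared.
from scipy.stats import ks_2samp

def compare_amplitudes(amp_a, amp_b):
    result = ks_2samp(amp_a, amp_b)              # two-sample KS test
    return result.statistic, result.pvalue       # the reported D and P
```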

Model-based prediction of texture and environmental conditions from LFP.

(A) The deep learning model architecture. The LFP input is processed through two parallel pathways for macro- and micro-scale feature extraction, followed by residual blocks that feed into two output heads for floor texture (Smooth vs. Rough) and the environmental condition (Light vs. Dark). (B) Training performance for a single representative rat. The left graph shows accuracy curves for texture (blue) and lighting (yellow), and the right graph shows the loss curves. (C) Testing performance for the same rat. The model exhibits good generalization, as accuracy increases and loss decreases on held-out data. (D) Confusion matrix for texture classification for all rats. Values above chance on the diagonal indicate successful texture prediction. Note that all values in the same row add up to 1. (E) Same as D, but for environmental conditions. (F) Combined confusion matrix for texture and trial predictions. The model performs well on both tasks across all rats. Abbreviations: avgpool, average pooling layer; conv, convolutional layer; maxpool, max pooling layer; LFP, local field potential

Neural representations become more distinct in dark environments than in light.

(A) A 912-dimensional feature vector is extracted from the layer preceding the final output. (B) A scatter plot of these features from one rat shows individual LFP segments (aligned to forelimb contact). Light orange and dark orange correspond to the light conditions (smooth, rough), while light blue and dark blue correspond to the dark conditions (smooth, rough). (C) Silhouette scores across all nine rats, showing that the dark condition yields higher scores and thus more distinct neural representations. P = 4.83×10⁻², t₈ = −2.6, paired t-test, n = 9 rats. (D) A pseudo-color map based on occlusion analysis, illustrating the contribution of each electrode in the forelimb and hindlimb subregions. Hotter regions indicate higher importance for the model's predictions. (E) Forelimb channels exhibit higher occlusion sensitivity than hindlimb channels, highlighting the forelimb's dominant role when the foot contacts the floor. P = 4.53×10⁻⁶, t₁₆ = −5.57, Student's t-test, n = 9 rats. (F) Activation maps generated via integrated gradients highlight key input features responsible for accurate model predictions of texture and environmental conditions. Activation scores show each feature's impact on the model's output relative to a reference baseline: high positive scores denote features that strongly affect the predicted class. The onset of forelimb contact is aligned to time zero. (G) Activation scores averaged over forelimb electrodes for floor texture (left) and the environmental conditions (right). A temporal lag in the dark condition suggests an extended processing window for floor texture when visual cues are absent.

Machine learning–based decoding of tactile and visual information

Given the lack of clear differences in averaged LFP features, we applied a convolutional neural network (CNN) to decode both the type of floor texture (smooth vs. rough) and the presence or absence of visual cues (light vs. dark). The model architecture incorporated parallel convolutional pathways to capture both macro- and micro-scale temporal features, followed by residual blocks, and then split into dual output layers to jointly predict texture and lighting conditions (Figure 4A).

When trained on 80% of the LFP data (segmented around foot strikes), the model exhibited stable learning curves (Figure 4B) and generalized well to held-out test data (Figure 4C). Confusion matrices for both texture and lighting classifications showed above-chance values along the diagonal, demonstrating that the model reliably extracted neural representations of the floor textures from the LFPs (Figure 4D–F).

Neural representations become more distinct in darkness

To understand how absence of the visual cue might refine tactile processing, we performed explainability analyses on the model's learned representations (Figure 5). We extracted a 912-dimensional feature vector from the layer preceding the final outputs, then visualized these high-dimensional embeddings with scatter plots (Figure 5A, B). Clustering analyses using silhouette scores showed that representations of texture were more separated in the dark environment, suggesting that reduced visual input enhances the distinctness of neural codes for tactile stimuli (Figure 5C, P = 4.83×10⁻², t₈ = −2.6, paired t-test, n = 9 rats).
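The separability measure can be sketched with scikit-learn; the variable names and per-condition grouping below are illustrative.

```python
# Sketch of the cluster-separation analysis, assuming `features` is an
# (n_segments, 912) matrix of penultimate-layer embeddings and
# `texture_labels` marks smooth (0) vs. rough (1) segments.
from sklearn.metrics import silhouette_score

def texture_separation(features, texture_labels):
    """Higher score = more distinct texture representations."""
    return silhouette_score(features, texture_labels)

# Computed separately on light- and dark-condition segments for each rat,
# then compared across rats with a paired t-test (Figure 5C).
```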

To elucidate which electrodes carried the most information about the floor textures, we performed occlusion analysis. The results revealed that electrodes in the forelimb subregion contributed more to successful texture and lighting predictions (Figure 5D, E, P = 4.53×10⁻⁶, t₁₆ = −5.57, Student's t-test, n = 9 rats), which aligns with our observation that negative deflections in LFPs were largest in forelimb-targeting channels (Figure 3C, D). Further analysis using feature maps showed spatiotemporal patterns of salient features unique to each experimental condition (Figure 5F). In particular, the average activation in the forelimb subregion displayed a temporal shift in the dark condition, implying an extended processing window for texture information when visual cues are unavailable (Figure 5G).

Collectively, these results indicate that visual deprivation modifies population-level activity in S1, yielding more distinctive representations of tactile stimuli. Such reorganization could provide a neural substrate for enhanced tactile perception under conditions of reduced or absent visual input.

Discussion

This study provides evidence that S1 undergoes rapid and dynamic reorganization of tactile representations when visual input is removed, even over a short period of minutes. By combining high-density LFP recordings with advanced machine learning techniques, we revealed that the neural encoding of tactile stimuli becomes more distinct under conditions of visual deprivation. Notably, when rats walked in the dark condition, texture representations in S1 were more distinguishable than when visual inputs were present. These findings highlight the adaptability of S1 and underscore the brain’s ability to quickly adjust its tactile representations in response to changing sensory contexts.

Interestingly, our data indicated that forelimb-targeting electrodes played a crucial role in decoding both tactile textures and lighting conditions. First, the evoked potential aligned to forelimb contact with the floor was larger at electrodes in the forelimb subregion of S1. This finding is consistent with the topographic specificity of S1, where distinct cortical regions are dedicated to processing sensory information from different parts of the body (Sur et al., 1980; Ewert et al., 2008; Prsa et al., 2019). Moreover, the occlusion analysis revealed that these forelimb regions were particularly critical for distinguishing texture and lighting conditions, suggesting that the forelimb subregion of S1 plays a dominant role in encoding tactile information in this context. Although this result is intuitive given that the LFPs were aligned to forelimb contact onsets, it is, to our knowledge, the first to show that a distinct cortical subregion encodes tactile information from a specific part of the body. Visualization of the key channel and temporal features using integrated gradients revealed results similar to those of the occlusion analysis. Specifically, the forelimb channels were found to be particularly influential in encoding texture. Additionally, the temporal window immediately following forelimb contact proved critical for accurate predictions, and this window was significantly longer in the dark condition than in the light condition.

The observed temporal shift in the dark condition, where texture-specific processing extended over a longer time window, suggests that the absence of visual input may enhance the retention of tactile information in S1. Previous studies have shown that neurons in S1 are involved in the short-term retention of information (Zhou and Fuster, 1996, 1997, 2000), and higher-order sensory areas exhibit prolonged neuronal firing (Leavitt et al., 2017; Esmaeili and Diamond, 2019). This sustained activity may reflect enhanced feedback loops within S1 and also from higher cortical regions. The temporal shift observed in our study could thus indicate sustained neural representations, possibly as a result of visual deprivation leading to a more prolonged representation of tactile stimuli. Whether this shift reflects a compensatory mechanism for the lack of visual input or directly correlates with heightened tactile sensitivity requires further investigation.

The key point of this study is the use of a custom-designed behavioral paradigm and CNN model to decode high-dimensional LFP data. Traditional analyses of neural signals, such as amplitude-based methods or event-related potentials, often overlook subtle, higher-dimensional features that are embedded in the data (Yamins and DiCarlo, 2016a, 2016b; Saxena and Cunningham, 2019). By leveraging machine learning, we were able to extract these fine-grained temporal and spatial patterns, which revealed that texture-specific neural signals in S1 became more distinct when rats walked in the dark. These results highlight the power of advanced computational methods in uncovering previously undetectable shifts in sensory coding, paving the way for similar techniques to be applied to other forms of high-dimensional neural data, such as multi-electrode array recordings in freely behaving animals.

Previous studies on tactile enhancement in short-term visually deprived subjects have shown improvements in tactile acuity and corresponding plastic changes in fMRI BOLD signals (Pascual-Leone and Hamilton, 2001; Kauffman et al., 2002; Facchini and Aglioti, 2003). However, due to the low temporal and spatial resolution of fMRI, these studies could not capture detailed changes in tactile representation within the brain. In contrast, our study successfully captured population-level activity from S1 using LFP recordings. Although future research is needed to assess whether rats can more accurately discriminate between textures in the dark, our findings suggest an underlying mechanism driving tactile perception enhancement under short-term visual deprivation.

From an evolutionary standpoint, the enhanced tactile sensitivity observed in the dark could offer a significant advantage. Many species, including rodents, rely on somatosensation for navigation and foraging in low-visibility environments. The ability to quickly enhance tactile processing in such conditions could aid in efficient navigation, resource acquisition, and predator detection. Notably, changes in arousal levels correlate with tactile sensitivity (Shimaoka et al., 2018; Lee et al., 2020), and environmental lighting strongly influences arousal (Tamayo et al., 2023). This mechanism may thus underlie the computational shifts observed in S1, further emphasizing the dynamic nature of sensory processing. Rather than being a passive receptor of tactile input, S1 appears to adapt its encoding strategies depending on the sensory context, optimizing its function based on available visual or other sensory cues.

In addition to advancing our understanding of cross-modal processing, these findings have potential applications for sensory rehabilitation. Our results suggest that visual deprivation can rapidly enhance the brain’s processing of tactile stimuli, a principle that could be leveraged in individuals with tactile impairments to improve performance. This concept could be explored in the context of training protocols such as those used for Braille reading or haptic feedback devices, where manipulating sensory input might improve tactile sensitivity (Bolognini et al., 2009). Moreover, the use of machine learning techniques to decode neural signals holds promise for advancing brain–machine interfaces and improving assistive technologies for individuals with sensory deficits.

While this study contributes to our understanding of cross-modal processing, several limitations must be considered. Our results are correlational, and future research could incorporate causal manipulations, such as pharmacological interventions or optogenetic techniques, to better elucidate the underlying mechanisms driving these changes. Additionally, while rats are a powerful model system for studying sensory processing, their neural architecture and behaviors may not fully reflect the complexities of human perception. Further studies that explore the short-term effects of visual deprivation on sensory encoding, or extend this paradigm to include behavioral readouts of tactile acuity, would help clarify how changes in neural coding relate to perceptual performance.

In conclusion, this study provides novel insights into the brain’s capacity for cross-modal adaptation, showing that visual deprivation rapidly reorganizes tactile processing in S1. The enhanced distinction of tactile stimuli under conditions of visual deprivation suggests that S1 can flexibly adjust its neural representations in response to environmental changes. By combining behavioral paradigms, LFP recordings, and machine learning techniques, this work highlights the value of advanced computational approaches in understanding the dynamic and adaptive nature of sensory coding. These findings open the door for future research on sensory rehabilitation and multisensory interactions, and they underscore the brain’s remarkable ability to reorganize itself in response to changes in sensory input.

Acknowledgements

This work was supported by JST ERATO (JPMJER1801), AMED-CREST (24wm0625401h0001; 24wm0625502s0501; 24wm0625207s0101; 24gm1510002s0104), the Institute for AI and Beyond of the University of Tokyo, JSPS Grants-in-Aid for Scientific Research (18H05525, 20K15926, 22K21353), KOSÉ Cosmetology Research Foundation, the Public Foundation of Chubu Science and Technology Center, and Konica Minolta Science and Technology Foundation.

Additional information

Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Code availability

The codes that support the findings of this study are available from the corresponding author upon reasonable request.