Behavioral paradigm and limb movement assessment with concurrent LFP recordings.

(A) A diagram of the disk-shaped treadmill used in the experiment. One half of the disk is covered with #80 sandpaper and the other half with #2000 sandpaper. (B) A video frame of a walking rat viewed from the left side. (C) The motivation scheme. The rats were water-deprived prior to the experiments, and a water port was coupled to the movement of the treadmill so that water was dispensed whenever the rat walked. This kept the rats motivated to walk throughout the session. (D) The experimental protocol: each rat walked for 10 minutes in light (50 lx) and then for 10 minutes in darkness (0 lx). (E) An example trajectory of the elbow and wrist joints from one session, plotted with the shoulder joint fixed in the coordinate space. (F) A schematic illustrating the forelimb and hindlimb subregions of S1. (G) The custom 32-channel electrode array used to record LFPs from these subregions.

Comparison of gait parameters across textures and environmental conditions.

(A) Swing phase vs. stance phase, illustrated with video frames (left: swing; right: stance). (B) Normalized swing duration measured for each rat under each texture (smooth vs. rough) and environmental condition (light vs. dark). There were no significant differences among trial conditions. (C–E) Stance duration, stride length, and footstrike speed, respectively, under the same conditions as in B. None of these parameters differed significantly across texture types or lighting conditions.
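As an illustration only: the legend does not specify the tracking pipeline, but gait parameters of this kind can be derived from per-frame paw positions and footstrike/liftoff events along the following lines. All names, the frame rate, and the normalization of swing duration by stride-cycle duration are assumptions, not the paper's actual procedure.

```python
import numpy as np

def gait_parameters(contact_on, contact_off, paw_x, fps=120.0):
    """Per-stride gait parameters from footstrike/liftoff frame indices.

    contact_on / contact_off: frame indices of footstrike and liftoff
    for one paw (equal length, interleaved on < off < next on).
    paw_x: horizontal paw position per video frame (tracking output).
    All names and the fps value are illustrative.
    """
    stance = (contact_off - contact_on) / fps            # stance duration (s)
    swing = (contact_on[1:] - contact_off[:-1]) / fps    # swing duration (s)
    stride_len = paw_x[contact_on[1:]] - paw_x[contact_on[:-1]]
    # footstrike speed: paw velocity over the frame preceding contact
    strike_speed = (paw_x[contact_on[1:]] - paw_x[contact_on[1:] - 1]) * fps
    # one plausible normalization for panel B: swing / full stride cycle
    norm_swing = swing / (stance[:-1] + swing)
    return norm_swing, stance, stride_len, strike_speed
```

Normalizing swing by the stride cycle makes the measure robust to differences in overall walking speed across rats and trials, which is one common motivation for reporting a "normalized" swing duration.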

LFP recordings in rat S1 during walking.

(A) A single representative LFP trace aligned to a forelimb contact. (B) An example averaged LFP trace from one session, aligned to forelimb contact. (C) The electrode montage and the averaged LFP at each electrode. Left: the electrode montage showing all 32 recording sites. Right: averaged LFP signals aligned to forelimb contacts with the floor, shown for each electrode depicted in the left panel. (D) Comparison of amplitudes between hindlimb and forelimb subregions, aggregated across all 11 rats. P = 1.89 × 10⁻³, t₁₀ = −2.2, paired t-test, n = 11 rats. (E) Comparison of averaged amplitudes within the same trial across floor textures and environmental conditions. P > 0.05, one-way analysis of variance (ANOVA) followed by Tukey–Kramer post hoc test; n = 149, 149, 107, and 107 trials for smooth-light, rough-light, smooth-dark, and rough-dark, respectively. (F) Cumulative probability distributions of mean amplitude from each session, compared across textures. P = 4.84 × 10⁻¹ and 8.35 × 10⁻¹, D = 1.03 × 10⁻¹ and 7.54 × 10⁻², for light and dark environments, respectively; two-sample Kolmogorov–Smirnov test; n = 149 and 107 trials from 11 rats for light and dark, respectively. (G) Cumulative probability distributions of mean amplitude from each session, compared across light and dark environments. P = 1.05 × 10⁻¹ and 1.83 × 10⁻¹, D = 0.14 and 0.149, for smooth and rough textures, respectively; two-sample Kolmogorov–Smirnov test; n = 149 and 107 trials from 11 rats for light and dark, respectively. Abbreviations: ERP, event-related potential; LFP, local field potential; S1, primary somatosensory cortex.
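For concreteness, here is a minimal sketch of the event-aligned averaging behind panels B and C, together with SciPy equivalents of the reported tests. The window lengths, sampling rate, and toy data are assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy import stats

def contact_aligned_average(lfp, contacts, fs, pre=0.2, post=0.4):
    """Average LFP snippets around forelimb-contact events.

    lfp: array (n_channels, n_samples); contacts: event indices in samples.
    Window lengths and names are illustrative, not the paper's parameters.
    """
    w0, w1 = int(pre * fs), int(post * fs)
    snips = [lfp[:, c - w0:c + w1] for c in contacts
             if c - w0 >= 0 and c + w1 <= lfp.shape[1]]
    return np.mean(snips, axis=0)  # (n_channels, w0 + w1)

# Toy data standing in for recorded signals
rng = np.random.default_rng(0)
lfp = rng.standard_normal((32, 60_000))          # 32 channels, 60 s at 1 kHz
contacts = rng.integers(1_000, 59_000, size=80)  # contact times (samples)
erp = contact_aligned_average(lfp, contacts, fs=1_000)

# SciPy equivalents of the reported statistics:
# paired t-test (panel D) and two-sample Kolmogorov-Smirnov test (panels F, G)
fore_amp, hind_amp = rng.standard_normal(11), rng.standard_normal(11)
t, p = stats.ttest_rel(hind_amp, fore_amp)
d, p_ks = stats.ks_2samp(rng.standard_normal(149), rng.standard_normal(107))
```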

Model-based prediction of texture and environmental conditions from LFP.

(A) The deep learning model architecture. The LFP input is processed through two parallel pathways for macro- and micro-scale feature extraction, followed by residual blocks that feed into two output heads, one for floor texture (smooth vs. rough) and one for the environmental condition (light vs. dark). (B) Training performance for a single representative rat. The left graph shows accuracy curves for texture (blue) and lighting (yellow); the right graph shows the corresponding loss curves. (C) Testing performance for the same rat. The model generalizes well: accuracy increases and loss decreases on held-out data. (D) Confusion matrix for texture classification across all rats. Above-chance values on the diagonal indicate successful texture prediction. Note that the values in each row sum to 1. (E) Same as D, but for environmental conditions. (F) Combined confusion matrix for the texture and environmental-condition predictions. The model performs well on both tasks across all rats. Abbreviations: avgpool, average pooling layer; conv, convolutional layer; maxpool, max pooling layer; LFP, local field potential.
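The legend specifies only the topology (two parallel convolutional pathways, residual blocks, two classification heads), so the following PyTorch sketch mirrors that layout. Kernel sizes, channel widths, and depths are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """1-D residual block; channel count is illustrative."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch))
    def forward(self, x):
        return torch.relu(x + self.body(x))

class TwoHeadLFPNet(nn.Module):
    """Two parallel conv pathways (macro/micro kernels) -> residual
    blocks -> two classification heads. All hyperparameters assumed."""
    def __init__(self, n_ch=32):
        super().__init__()
        self.macro = nn.Sequential(nn.Conv1d(n_ch, 32, 65, padding=32),
                                   nn.ReLU(), nn.MaxPool1d(4))   # slow features
        self.micro = nn.Sequential(nn.Conv1d(n_ch, 32, 7, padding=3),
                                   nn.ReLU(), nn.MaxPool1d(4))   # fast features
        self.trunk = nn.Sequential(ResBlock(64), ResBlock(64),
                                   nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.texture_head = nn.Linear(64, 2)    # smooth vs. rough
        self.condition_head = nn.Linear(64, 2)  # light vs. dark
    def forward(self, x):                        # x: (batch, 32, time)
        h = torch.cat([self.macro(x), self.micro(x)], dim=1)
        h = self.trunk(h)
        return self.texture_head(h), self.condition_head(h)
```

Training such a model would typically minimize the sum of the two heads' cross-entropy losses, which is consistent with the separate per-task accuracy and loss curves shown in panels B and C.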

Neural representations become more distinct in dark environments than in light.

(A) A 912-dimensional feature vector is extracted from the layer preceding the final output. (B) A scatter plot of these features from one rat shows individual LFP segments (aligned to forelimb contact). Light orange and dark orange correspond to the light conditions (smooth, rough), while light blue and dark blue correspond to the dark conditions (smooth, rough). (C) Silhouette scores across all nine rats, showing that the dark condition yields higher scores and thus more distinct neural representations. P = 4.83 × 10⁻², t₈ = −2.6, paired t-test, n = 9 rats. (D) A pseudo-color map based on occlusion analysis, illustrating the contribution of each electrode in the forelimb and hindlimb subregions. Hotter regions indicate higher importance for the model's predictions. (E) Forelimb channels exhibit higher occlusion sensitivity than hindlimb channels, highlighting the forelimb's dominant role when the foot contacts the floor. P = 4.53 × 10⁻⁶, t₁₆ = −5.57, Student's t-test, n = 9 rats. (F) Activation maps generated via integrated gradients highlight the input features responsible for accurate model predictions of texture and environmental condition. Activation scores quantify each feature's impact on the model's output relative to a reference baseline: high positive scores denote features that strongly support the predicted class. The onset of forelimb contact is aligned to time zero. (G) Activation scores averaged over forelimb electrodes for floor texture (left) and environmental condition (right). A temporal lag in the dark condition suggests an extended processing window for floor texture when visual cues are absent.
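A minimal sketch of how panels C through E could be quantified, assuming a scikit-learn silhouette score over the 912-dimensional features and a simple channel-occlusion loop over the two-head model sketched above; the paper's exact procedure is not given in the legend.

```python
import torch
from sklearn.metrics import silhouette_score

def representation_silhouette(features, labels):
    """Silhouette score over penultimate-layer features (panel C).

    features: (n_segments, 912); labels: condition index per segment.
    Higher scores indicate more distinct condition clusters."""
    return silhouette_score(features, labels)

@torch.no_grad()
def occlusion_sensitivity(model, x, head=0):
    """Channel-wise occlusion (panels D, E): zero one electrode at a
    time and measure the drop in the model's top-class confidence.
    `model` returns (texture_logits, condition_logits), as in the
    TwoHeadLFPNet sketch; this is an approximation, not the paper's code."""
    base = model(x)[head].softmax(-1).max(-1).values
    drops = []
    for ch in range(x.shape[1]):
        x_occ = x.clone()
        x_occ[:, ch] = 0.0  # occlude one electrode
        drops.append(base - model(x_occ)[head].softmax(-1).max(-1).values)
    return torch.stack(drops, dim=1)  # (batch, n_channels)

# e.g., occlusion_sensitivity(TwoHeadLFPNet(), torch.randn(4, 32, 400))
```

The attributions in panels F and G could likewise be computed with an integrated-gradients implementation (for example, Captum's IntegratedGradients), which scores each input sample against a reference baseline.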