Summary plots showing the performance of the motoneuron pool model with respect to the integration of its inputs.

A1 - Shows a subset of 6 of the 20 MNs in the pool given the same excitatory triangular input (the black trace above). The MN pool follows an orderly activation known as Henneman’s size principle, showing that this MN model can transform a “common drive” into different MN activities (see section Common input structure, differences in motoneuron properties). A2 - Shows the model’s ability to favor S-type vs. F-type motoneurons within the pool. The orange [ωstart, ωend] = [1, 2] distribution favors the fast (F-type) motoneurons. These F-type motoneurons (e.g. MN 11, MN 15 and MN 19) are recruited sooner and reach higher maximum firing rates than their [ωstart, ωend] = [1, 1] counterparts. For the slow (S-type) motoneurons, recruitment is unchanged but their maximum firing rates decrease with respect to their [ωstart, ωend] = [1, 1] counterparts (see section Distribution of excitatory input to S vs F motoneurons). B - Shows our model’s ability to trigger PIC-type behavior. For a neuromodulation level of 1.2 (the maximum of the range tested) and no inhibition, the motoneuron firing rates show a fast rise time, then attenuation, followed by sustained firing (see section Neuromodulation and excitatory input). C1 and C2 - Show the model’s ability to modulate the effects of neuromodulation with inhibition. C1 shows inhibition in the “Push-Pull” or reciprocal configuration. C2 shows inhibition in the balanced configuration (see section Neuromodulation and inhibition).

Reverse Engineering Paradigm.

The left side shows a schematic of the input pathway to the MN pool and its subsequent connection to a muscle. The right side shows a schematic of our goal: can we predict the 3 inputs to the MN pool given the firing pattern the MN pool used to generate the muscle output? In the push-pull scheme at upper right, inhibition (red) decreases as excitation (green) increases. In contrast, in the balanced scheme inhibition increases with excitation. In this work we show that, using the “MN Pool Firing Pattern”, one can reverse engineer the inputs to the MN pool.

Schematic of the algorithm used to approximate the excitatory input needed to produce the triangular output from the motoneuron pool. The schematic resembles a classic controller schematic. The algorithm starts at the “Input to MN Pool” box, where the light pink triangular input is used as the initial guess fed to the “Plant System” box. Using this initial guess, the MN pool model calculates its first output, seen in the “System Output” box (light pink trace). This initial output is compared with the “Reference” box and the resulting error e(t) is fed into the “Controller” box. Based on this error, the controller calculates a new input guess and the cycle is repeated until a satisfactory error is achieved (MSE < 1 Hz²). The iterations are shown within the “Input to MN Pool” and “System Output” boxes along a left-to-right diagonal to show the evolution of the traces produced by the algorithm. From iteration 0 to iteration n, the input to the MN pool morphs into a highly non-linear form to produce a linear output. Finally, this algorithm is repeated over a range of neuromodulation, inhibitory bias, inhibitory gain and excitatory synaptic weights, as shown in the “MN parameters” box. Taking all the iterations and combinations together, the model was run a total of 6,300,000 times. See subsection Simulation protocol for details.
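A minimal sketch of this iterative loop, assuming a callable `run_mn_pool` that stands in for the MN pool model and a simple proportional update; the gain `k`, tolerance and iteration cap are illustrative assumptions, not the controller settings used in the paper.

```python
import numpy as np

def estimate_input(reference, run_mn_pool, k=0.05, mse_tol=1.0, max_iter=500):
    """Iteratively adjust the pool input until the pool output tracks `reference`."""
    guess = reference.copy()            # iteration 0: triangular reference as initial guess
    mse = np.inf
    for _ in range(max_iter):
        output = run_mn_pool(guess)     # "Plant System": simulate the MN pool with the current input
        error = reference - output      # e(t): difference from the reference command
        mse = np.mean(error ** 2)
        if mse < mse_tol:               # satisfactory fit, e.g. MSE < 1 Hz^2
            break
        guess = guess + k * error       # "Controller": proportional correction of the input guess
    return guess, mse
```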

Top Row - Shows the mean and standard deviation of the excitatory input to the MN pool required to produce the reference command (Figure 3 - reference input). Each column has a different neuromodulation value, from lowest (0.8) to highest (1.2). For this example, the inhibitory gain is set to zero and the excitatory weight distribution is equal across the pool (111 configuration - see section Excitatory distribution or MN weights). In short, the lower the neuromodulation, the more linear the input command becomes, but the higher the excitatory values become. Bottom Row - Shows the mean and standard deviation of the excitation (solid) and inhibition (dotted) for the two extremes of inhibitory schemes (Push-Pull in green and Balanced in orange). Neuromodulation is set to 1.0 (middle of the range tested) and the excitatory weight distribution is equal across the pool (111 configuration - see section Excitatory distribution or MN weights). The “Push-Pull” inhibitory scheme requires less overall drive to achieve the same output.

Area under the curve for the excitatory inputs calculated by the algorithm (Figure 3 and Methods). Left - Shows the area under the curve of the excitatory input with respect to neuromodulation and inhibitory levels (color scheme). The inhibitory levels are clearly segregated across the neuromodulation range. The most efficient inhibitory scheme is “Push-Pull” (−0.7) and the least efficient is “Balanced” (0.7). Right - Shows the area under the curve of the excitatory input with respect to inhibition for three neuromodulation levels. The curves for each neuromodulation level do not intersect across the range of inhibition. The overall trend resembles the results on the left, where the most efficient scheme is high neuromodulation (1.2) with “Push-Pull” inhibition (−0.7). These curves were calculated using an equally weighted excitatory input.
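As a rough illustration, the area under an excitatory input trace can be computed with a trapezoidal sum; the function name and the sampling step `dt` below are assumptions, not the paper’s implementation.

```python
import numpy as np

def input_auc(excitatory_input, dt):
    """Trapezoidal area under the excitatory input curve sampled every `dt` seconds."""
    y = np.asarray(excitatory_input, dtype=float)
    return np.sum((y[1:] + y[:-1]) * dt / 2.0)
```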

This figure links the firing rate shapes of the MN pool model to the inhibitory and neuromodulatory search space. Heat Map - The central figure is a heat map of the mean ΔF values of the MN pool model, calculated from the firing rate patterns of the motoneurons in the MN pool model (see section Feature Extraction). These ΔF values are organized along the neuromodulation (x-axis) and inhibitory values (y-axis) used in the simulations. Firing Rate Plots - Emanating from the central figure is a subset of the firing rate plots of the MN pool model for a specific Inhibition and Neuromodulation pair (e.g. (0.2, 0.9)). Each plot is linked to a ΔF value with an arrow. For clarity, only a subset of motoneuron firing rates is shown. From visual inspection, the shape of the firing rates differs with the Inhibition and Neuromodulation pair, from more non-linear (e.g. (−0.7, 1.2)) to more linear (e.g. (0.2, 0.9)).

Features extracted from the firing rate patterns of each of the motoneurons in the pool. We extracted the following 7 features: Recruitment Time (trec), De-Recruitment Time (tdrec), Activation Duration (tdur), Recruitment Range (trange), Firing Rate Saturation (αsat), Firing Rate Hysteresis (ΔF) and Brace Height (brace-height). These features are then used to estimate the neuromodulation, inhibition and excitatory weights of the motor pool (see sections Machine learning inference of motor pool characteristics and Regression using motoneuron outputs to predict input organization). See subsection Feature Extraction for details on how these features are extracted.
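A hedged sketch of how a few of these features could be computed from smoothed firing-rate traces sampled on a common time vector; the activity threshold, variable names and the paired control/test convention for ΔF are assumptions, and the paper’s Feature Extraction subsection may use different definitions.

```python
import numpy as np

def recruitment_features(rate, t, thresh=1.0):
    """Recruitment time, de-recruitment time and activation duration (rate in Hz)."""
    active = np.where(rate > thresh)[0]          # samples where the unit is considered firing
    t_rec, t_drec = t[active[0]], t[active[-1]]
    return t_rec, t_drec, t_drec - t_rec

def delta_f(control_rate, test_rate, t, thresh=1.0):
    """One common ΔF convention: change in a lower-threshold control unit's rate
    between recruitment and de-recruitment of a later-recruited test unit."""
    t_rec, t_drec, _ = recruitment_features(test_rate, t, thresh)
    f_on = np.interp(t_rec, t, control_rate)     # control rate when the test unit turns on
    f_off = np.interp(t_drec, t, control_rate)   # control rate when the test unit turns off
    return f_on - f_off
```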

Residual plots showing the goodness of fit for the different predicted values: (A) Inhibition, (B) Neuromodulation and (C) Excitatory Weight Ratio. The summary plots are for the models showing the highest R² results in Table 1. The predicted values are calculated using the features extracted from the firing rates (see Figure 7, sections Machine learning inference of motor pool characteristics and Regression using motoneuron outputs to predict input organization). Diagram (D) shows the multidimensionality of the RE models (see Model fits), which have 7 feature inputs (see Feature Extraction) predicting 3 outputs (Inhibition, Neuromodulation and Weight Ratio).

R² values. Rows are the predicted values.

Columns are the models. The R² values in bold represent the best model fits: Inhibition is best predicted by the RBF SVM model, Neuromodulation is best predicted by the Ridge model, and the Excitation Weight Ratio is best predicted by the RBF SVM model.
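A minimal scikit-learn sketch of this kind of model comparison, fitting a Ridge regression and an RBF-kernel SVM on the 7 extracted features and scoring each with R²; the stand-in data and hyperparameters are placeholders, not the settings used in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# X: n_samples x 7 feature matrix, y: one target (e.g. inhibition level) -- stand-in data
rng = np.random.default_rng(0)
X, y = rng.random((200, 7)), rng.random(200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Ridge": make_pipeline(StandardScaler(), Ridge(alpha=1.0)),
    "RBF SVM": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "R2 =", r2_score(y_te, model.predict(X_te)))
```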

Shows the impact of the features on the models’ ability to predict Inhibition, Neuromodulation and the excitatory Weight Ratios. The plots are split along two columns showing two complementary analyses: F-Statistic (left) and Mutual Information (right). Left - The left column shows the impact of the features on the MSE, with features ranked by their F value. The effect of the features is cumulative along the x-axis, such that the MSE at a given position on the x-axis is calculated using the features up to that point. Each of the colored traces represents one of the models used to predict the 3 outcomes and follows the trends found in Table 1. Right - The right column shows the mutual information of each feature with respect to the outcome: Inhibition, Neuromodulation and excitatory Weight Ratio (top to bottom). The features are ranked from best to worst along the x-axis. This ranking matches the MSE ranking in the left column. See section Ranking features for details on how these techniques were implemented.
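A short sketch of the two ranking analyses named above, using scikit-learn’s univariate F-statistic and mutual-information scores on stand-in data; the feature names follow Figure 7, and the data here are random placeholders.

```python
import numpy as np
from sklearn.feature_selection import f_regression, mutual_info_regression

feature_names = ["t_rec", "t_drec", "t_dur", "t_range",
                 "alpha_sat", "delta_F", "brace_height"]

# Stand-in feature matrix and target (e.g. inhibition level)
rng = np.random.default_rng(0)
X, y = rng.random((500, 7)), rng.random(500)

f_vals, _ = f_regression(X, y)             # F-statistic per feature
mi_vals = mutual_info_regression(X, y)     # mutual information per feature

# Rank features from highest to lowest F value and print both scores
for name, f, mi in sorted(zip(feature_names, f_vals, mi_vals), key=lambda r: -r[1]):
    print(f"{name:>12s}  F = {f:6.2f}  MI = {mi:5.3f}")
```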

Heat maps for the different features extracted from the firing rates of the motoneurons in the pool model. These heat maps are for the equally distributed excitatory weights [ωstart, ωend] = [1, 1]. Each heat map is organized along the Neuromodulation and Inhibitory ranges tested in this work. The values within these heat maps show structure with respect to Neuromodulation and Inhibition, and this structure can be used to reverse engineer the Neuromodulation and Inhibition themselves.

Schematic of the parallel architecture used to solve the MN pool model and to find the solution for the input to that pool. Each combination of neuromodulation, inhibition, excitatory input weight and noise seed was assigned to a compute node via the Slurm batch scheduler. These “hyper-parameters” were then passed to each motoneuron, each of which ran on a dedicated CPU core, using the Message Passing Interface (MPI). This work was done on the Bebop supercomputer at Argonne National Laboratory, Lemont, IL.
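A simplified mpi4py sketch of farming parameter combinations out over MPI ranks; the actual architecture assigned a node per combination via Slurm and a dedicated core per motoneuron, and `run_simulation` plus the parameter ranges below are hypothetical placeholders.

```python
from itertools import product
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

def run_simulation(neuromod, inhib, weights, seed):
    # Hypothetical placeholder for one MN pool run with the given "hyper-parameters".
    print(f"rank {rank}: neuromod={neuromod} inhib={inhib} weights={weights} seed={seed}")

# Illustrative parameter ranges; the full sweep in the paper covers many more combinations.
neuromodulation = [0.8, 0.9, 1.0, 1.1, 1.2]
inhibition = [-0.7, 0.0, 0.7]
weight_pairs = [(1, 1), (1, 2)]
seeds = range(5)

combos = list(product(neuromodulation, inhibition, weight_pairs, seeds))
for combo in combos[rank::size]:   # round-robin assignment of combinations to ranks
    run_simulation(*combo)
```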