(A) We hypothesise that multiple plasticity mechanisms can lead to the same neural network activity, i.e., plasticity mechanisms are degenerate. (B) Adversarial learning of plasticity rules: the empirical data are simulated postsynaptic activities of a rate network with plastic synapses evolving according to Oja’s rule. The generator G is a rate network whose synapses evolve according to a tunable MLP rule. The discriminator D is a flexible network trained to distinguish the empirical data from the generator output. In our framework, the generator and discriminator are trained jointly so that, at convergence, the learned MLP rule makes the generator produce postsynaptic activity traces indistinguishable from the empirical data.
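To make the setup concrete, the sketch below shows the three ingredients described above: a rate neuron simulated with Oja's rule as the "empirical" data, a generator whose synapses follow a tunable local MLP rule, and a discriminator over postsynaptic activity traces. This is a minimal, illustrative sketch, not the paper's implementation; the class and function names, network sizes, trace length, learning rates, and the non-saturating GAN loss are all assumptions.

```python
# Minimal, illustrative sketch of the adversarial setup (PyTorch).
# Sizes, trace length, learning rates and loss are assumptions.
import torch
import torch.nn as nn

def simulate_oja(w0, eta=0.01, T=100):
    """'Empirical' data: rate neuron whose weights follow Oja's rule."""
    w, ys = w0.clone(), []
    for _ in range(T):
        x = torch.rand(w.shape[0])              # presynaptic activity
        y = torch.dot(w, x)                     # postsynaptic activity
        w = w + eta * (y * x - y ** 2 * w)      # Oja's rule
        ys.append(y)
    return torch.stack(ys)                      # activity trace, shape (T,)

class MLPRule(nn.Module):
    """Tunable local rule: delta_w_i = MLP(x_i, y, w_i) (assumed arguments)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y, w):
        inp = torch.stack([x, y.expand_as(x), w], dim=-1)
        return self.net(inp).squeeze(-1)

class Generator(nn.Module):
    """Same rate network, but with synapses evolving under the MLP rule."""
    def __init__(self, n_pre=3, eta=0.01, T=100):
        super().__init__()
        self.rule, self.n_pre, self.eta, self.T = MLPRule(), n_pre, eta, T

    def forward(self, w0):
        w, ys = w0.clone(), []
        for _ in range(self.T):
            x = torch.rand(self.n_pre)
            y = torch.dot(w, x)
            w = w + self.eta * self.rule(x, y, w)
            ys.append(y)
        return torch.stack(ys)

class Discriminator(nn.Module):
    """Classifies a postsynaptic activity trace as empirical vs generated."""
    def __init__(self, T=100, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(T, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, trace):
        return self.net(trace)                  # real/fake logit

# One adversarial update (non-saturating GAN loss), purely illustrative.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

w0 = torch.rand(3)
real, fake = simulate_oja(w0), G(w0)

d_loss = bce(D(real), torch.ones(1)) + bce(D(fake.detach()), torch.zeros(1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(1))            # push generated traces towards "empirical"
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

At convergence of this adversarial game, the MLP inside G plays the role of the learned plasticity rule whose generated activity the discriminator can no longer tell apart from the Oja-generated data.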

Disparate plasticity rules with the same postsynaptic activity.

(A) Investigated plasticity rule. (B) Postsynaptic activity traces of a rate network simulated with Oja’s rule (black) and with the local MLP rule (blue) for different initial synaptic weights (top). Learned-rule activities versus the original Oja’s-rule activities at different time points and for different initial synaptic weights (bottom). (C) Weight trajectories, as measured by ||PC1_ω||: Oja’s rule (top), local MLP (bottom). (D) Synaptic weight updates Δω for a range of presynaptic activities x and postsynaptic activities y, with ω = 0.01: Oja’s rule (top), local MLP (bottom). (E) Vector field of ω versus postsynaptic activity y, with presynaptic activity fixed at x = 0.5 (nullclines in black, fixed points in red): Oja’s rule (top), local MLP (bottom).
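For reference, the two per-synapse rules compared in these panels can be written as below, where η is the learning rate and θ the MLP parameters; the exact arguments of the local MLP (per-synapse presynaptic activity, postsynaptic activity, and weight) are stated here as an assumption consistent with the "local" label.

```latex
\begin{align*}
  \Delta \omega_i &= \eta \, y \, (x_i - y \, \omega_i)               && \text{(Oja's rule)} \\
  \Delta \omega_i &= \eta \, \mathrm{MLP}_{\theta}(x_i,\, y,\, \omega_i) && \text{(local MLP rule)}
\end{align*}
```

As a purely algebraic observation, Δω_i in Oja’s rule vanishes whenever y = 0 or ω_i = x_i / y, which is the kind of nullcline structure panel (E) visualises.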

Learned rules have the same generalisation properties as Oja’s rule.

(A) MLP rule on 3 presynaptic neurons and a noisy postsynaptic neuron: trained with the GAN and tested on the same network (top, light brown), trained with a mean-squared-error loss and tested on the same network (middle, brown), and trained with the GAN and tested on a network with 3 presynaptic neurons and a noiseless postsynaptic neuron (bottom, dark brown). (B) MLP rule on 39 presynaptic neurons and a noiseless postsynaptic neuron: trained with the GAN and tested on the same network (top, pink) and on a network with 3 presynaptic neurons and a noiseless postsynaptic neuron (bottom, red). (I) Trajectories of postsynaptic activity for various synaptic weight initialisations generated with GAN-learned MLP rules are qualitatively similar to those from Oja’s rule. (II) Activities from the GAN-learned rule at different time points match the statistics of Oja’s rule, both for held-out data from the training network and for the test network. (III) Weight trajectories for the learned plasticity rules; Oja’s rule in black.
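A local rule of this form transfers across architectures because it is applied synapse by synapse and only ever sees (x_i, y, ω_i). The sketch below shows how a trained local rule (e.g. the MLPRule sketched earlier) could be evaluated on networks that differ in size and noise; the helper name, noise model and numbers are illustrative assumptions.

```python
# Illustrative generalisation test for a trained local plasticity rule.
import torch

def run_network(rule, n_pre, noise_std=0.0, eta=0.01, T=100, seed=0):
    """Simulate a rate neuron with n_pre plastic inputs under a given local rule."""
    torch.manual_seed(seed)
    w = torch.rand(n_pre)                                  # initial synaptic weights
    ys = []
    for _ in range(T):
        x = torch.rand(n_pre)                              # presynaptic activity
        y = torch.dot(w, x) + noise_std * torch.randn(())  # (optionally noisy) postsynaptic activity
        w = w + eta * rule(x, y, w)                        # per-synapse update: only sees (x_i, y, w_i)
        ys.append(y)
    return torch.stack(ys)

# A rule trained with the GAN on one network can then be probed on others, e.g.:
#   run_network(rule, n_pre=3,  noise_std=0.1)   # 3 inputs, noisy postsynaptic neuron
#   run_network(rule, n_pre=3,  noise_std=0.0)   # 3 inputs, noiseless postsynaptic neuron
#   run_network(rule, n_pre=39, noise_std=0.0)   # 39 inputs, noiseless postsynaptic neuron
# For a mean-squared-error variant, training would instead minimise something like
#   ((generated_trace - empirical_trace) ** 2).mean()  rather than the adversarial loss.
```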

Different learned rules due to modeller bias.

(A) Parametrised plasticity rules: Oja + local MLP (top), semi-global MLP (middle) and global MLP (bottom). (B) Learned-rule activities versus the original Oja’s-rule activities at different time points and for different initial synaptic weights. (C) Weight trajectories, as measured by ||PC1_ω||, for Oja + local MLP (top, purple), semi-global MLP (middle, green) and global MLP (bottom, yellow). (D) Synaptic weight updates Δω for a range of presynaptic activities x and postsynaptic activities y, with ω = 0.01. (E) Vector field of ω versus postsynaptic activity y, with presynaptic activity fixed at x = 0.5.
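The three parametrisations in (A) differ mainly in which variables the learned function is allowed to see. The sketch below illustrates one plausible reading: Oja + local MLP as Oja’s rule plus a learned per-synapse correction, and progressively more global access for the other two. The exact inputs of the semi-global and global variants (the global summary signal, the full-vector mapping) are assumptions for illustration only, not the paper’s definitions.

```python
# Illustrative signatures for the three parametrisations in (A).
import torch
import torch.nn as nn

def mlp(n_in, hidden=16, n_out=1):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.Tanh(), nn.Linear(hidden, n_out))

class OjaPlusLocalMLP(nn.Module):
    """Oja's rule plus a learned per-synapse correction: dw_i = y(x_i - y w_i) + MLP(x_i, y, w_i)."""
    def __init__(self):
        super().__init__()
        self.correction = mlp(3)

    def forward(self, x, y, w):
        oja = y * (x - y * w)
        corr = self.correction(torch.stack([x, y.expand_as(x), w], dim=-1)).squeeze(-1)
        return oja + corr

class SemiGlobalMLP(nn.Module):
    """Per-synapse update that also sees a network-wide summary (assumed here: mean weight)."""
    def __init__(self):
        super().__init__()
        self.net = mlp(4)

    def forward(self, x, y, w):
        g = w.mean().expand_as(x)                # assumed global summary signal
        inp = torch.stack([x, y.expand_as(x), w, g], dim=-1)
        return self.net(inp).squeeze(-1)

class GlobalMLP(nn.Module):
    """Single MLP mapping the full (x, y, w) state to the full weight update."""
    def __init__(self, n_pre=3):
        super().__init__()
        self.net = mlp(2 * n_pre + 1, n_out=n_pre)

    def forward(self, x, y, w):
        inp = torch.cat([x, y.reshape(1), w])
        return self.net(inp)

# All three variants can be dropped into the same weight-update loop, e.g.:
#   w = w + eta * rule(x, y, w)
```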