Receptive-field eligibility separation. (A) The network used was a single linear perceptron layer with a single readout. (B) Each task in this simulation was defined by a random, unit-normal, 100-dimensional input vector and a similarly distributed 20-dimensional target vector. (C) Training was performed sequentially over 80 such tasks; when outputs came within 0.05 units of Euclidean distance of the targets, training proceeded to the next task. (D) Networks computed surprise over inputs, which was used to detect task change-points and to orthogonalize new input plasticity vectors against previous plasticity. (E) The surprise function was a logistic curve over the input cosine angle. (F) The optimal set of weights for the curriculum was computed using the pseudo-inverse of the inputs, for comparison with network outputs. Intuitively, the curriculum solution is the intersection of the individual task solutions, which are themselves rank-1 outer products between the (unit-normal) inputs and targets, shown here as lines intersecting a unit sphere. (G) Error on each task, computed immediately after training on that task. (H) Backward transfer on each task, i.e., task errors at the end of curriculum training. (I) Initial error on new tasks at each point in curriculum learning (forward transfer). The CEM shows negative transfer because it remembers previous inputs; this effect is reduced, but still present, for GD. (J) Layer weight norms in both models over the course of learning. CEM weight norms grew to match the optimal network weight norm (dashed red line), whereas GD's did not: GD struggles to leave a region of weight space proximal to all individual task solutions, but not to their intersection. (K) Distance from the optimal set of weights, indicating that not only does the weight norm of the CEM solution grow properly, but the network also converges to the optimum rather than diverging in an inappropriate direction.
By contrast, GD gets further from the curriculum solution over time.
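The task setup and reference solution described in panels (B), (E), and (F) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes "unit-normal" means inputs normalized to unit length, and the logistic gain `k` and threshold `c` in the surprise function are placeholder values, since the actual parameters are not given in the caption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, d_in, d_out = 80, 100, 20

# Panel (B): each task is a random unit-norm 100-d input vector
# paired with a random 20-d target vector.
X = rng.standard_normal((n_tasks, d_in))
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = rng.standard_normal((n_tasks, d_out))

# Panel (F): curriculum-optimal weights via the pseudo-inverse of the
# stacked inputs. With 80 tasks in 100 dimensions the system is
# underdetermined, so this solution satisfies every task simultaneously.
W_opt = np.linalg.pinv(X) @ Y          # shape (d_in, d_out)
task_errors = np.linalg.norm(X @ W_opt - Y, axis=1)
assert task_errors.max() < 0.05        # within the training criterion

# Panel (E): logistic surprise over input cosine similarity.
# Gain k and threshold c are illustrative assumptions.
def surprise(x_new, x_prev, k=10.0, c=0.5):
    cos = float(x_new @ x_prev)        # both inputs are unit norm
    return 1.0 / (1.0 + np.exp(k * (cos - c)))
```

A familiar input (cosine near 1) yields surprise near 0, while a novel, near-orthogonal input yields surprise near 1, which is what lets the model flag task change-points.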