An internal timeline where stimulus (‘What’) representations change independently of temporal representations (‘When’), and vice versa.

Left: An internal timeline creating a record of the past. We hypothesize that the brain forms a conjunctive representation that places events — a “what” — on an internal timeline — a “when”. Such a conjunctive representation of what happened when should have two properties, illustrated in the middle and right panels. Middle: The ‘What’ representations should behave independently of the elapsed time, or ‘When’. Suppose we see a set of stimuli, x, y, and z, at a delay of τ = 1, 8 or 64 seconds. The representation vectors for stimuli x, y and z on our internal timeline might shrink or expand as we move into the past (as τ gets bigger), but the relationships between them, represented here as the angle between each pair of vectors, should remain the same for all three durations (τ), so that their meaning (‘What’) remains independent of the elapsed time (‘When’). Right: The ‘When’ representations should behave independently of the stimuli that are experienced. We can represent different durations no matter what stimuli are used to define those intervals. For example, if three stimuli are presented in succession (x, y, z) such that the second follows the first by a delay of τ1 and the third follows the second by a delay of τ2, we are aware that the time interval between the first and third stimuli (τ3) is just the sum of the first two delays (τ3 = τ1 + τ2). This would hold true even if the stimuli (‘What’) were presented in a different order (y, z, x): we might label the intervals differently (say τ′1, τ′2 and τ′3), but the relationship between them (‘When’) would be unchanged (τ′3 = τ′1 + τ′2).

Low-dimensional dynamics in neural populations maintaining a stimulus representation exhibit characteristic features, such as stable subspaces and rotational dynamics, while timing information is maintained using two distinct temporal coding schemes.

Left: Stable subspaces coexist with diverse neural dynamics in the prefrontal cortex (PFC) during working memory. While classical models posited stable, persistent firing for working memory, individual PFC neurons exhibit highly heterogeneous and dynamic activity during delay periods (a). However, population-level analysis can reveal stable representations. For instance, in a vibrotactile delayed discrimination task, in which a vibration frequency must be held in memory, Principal Component Analysis (PCA) of PFC neuronal populations uncovers a stimulus-specific subspace (b). Neural trajectories within this subspace maintain a stable, line-shaped representation of the task variable, with temporal variance relegated to an orthogonal subspace (perpendicular to the stimulus subspace); different colored lines correspond to different values of the stimulus frequency. Reproduced from Murray et al. (2017). Middle: Rotational dynamics in motor cortex. In contrast to the stable representations seen in working memory, studies of motor control have revealed different dynamics. Churchland et al. (2012) observed rotational dynamics in neuronal populations of the motor cortex during reaching tasks, where trajectories in an informative subspace show an angular rotation over time (d). Such oscillatory dynamics have been proposed as a fundamental form of neural processing. However, more recent work suggests these rotational dynamics could also be a signature of consistent sequential neuronal activity in the analyzed data (c). Reproduced from Lebedev et al. (2019). Right: Two complementary forms of temporal coding in the prefrontal cortex (PFC). Temporal context cells (e, reproduced from Cao et al., 2024) exhibit persistent activity throughout a delay period, with their firing rate modulated by the passage of time, suggesting that they maintain a stable representation of the current temporal context. Time cells (f, reproduced from Tiganj et al., 2018), on the other hand, show transient bursts of activity at specific, successive moments within a trial, effectively tiling the delay period with their firing.

© 2017, PNAS. Panel b was reproduced from Murray et al., 2017 with permission of PNAS. It is not covered by the CC-BY 4.0 licence and further reproduction of this panel would need permission from the copyright holder.

© 2024, Cao et al. Panel e was reproduced from Cao et al., 2024 (published under a CC BY-ND license). Further reproductions must adhere to the terms of this license.

© 2018, Massachusetts Institute of Technology. Panel f was reproduced from Tiganj et al., 2018 with permission of MIT Press. All rights reserved. It is not covered by the CC-BY 4.0 licence and further reproduction of this panel would need permission from the copyright holder.
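The stable-subspace geometry in the left panel can be sketched in a few lines of NumPy. Everything below (population size, coding directions, the sinusoidal temporal dynamics) is an illustrative assumption, not the analysis of Murray et al. (2017): a stimulus frequency is encoded along one fixed population direction, shared temporal dynamics live in an orthogonal direction, and PCA of the time-averaged activity recovers the stimulus axis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_times = 50, 40
freqs = np.linspace(10.0, 30.0, 5)             # hypothetical stimulus frequencies (Hz)
t = np.linspace(0.0, 1.0, n_times)

u = rng.standard_normal(n_neurons)
u /= np.linalg.norm(u)                          # stimulus-coding direction
v = rng.standard_normal(n_neurons)
v -= (v @ u) * u
v /= np.linalg.norm(v)                          # orthogonal, time-varying direction

# rates[i, j, :] = population activity for stimulus i at time j
rates = (freqs[:, None, None] * u[None, None, :]
         + 5.0 * np.sin(2 * np.pi * t)[None, :, None] * v[None, None, :])

# Stimulus subspace from the covariance of time-averaged activity across stimuli
stim_cov = np.cov(rates.mean(axis=1).T)
evals, evecs = np.linalg.eigh(stim_cov)
stim_pc = evecs[:, -1]                          # top stimulus principal component

# Projections onto the stimulus PC are constant over time and ordered by
# frequency: a line-shaped, stable code, with temporal variance confined
# to the orthogonal axis.
proj = rates @ stim_pc                          # shape (n_stimuli, n_times)
print(proj.std(axis=1).max())                   # ~0: stable across the delay
```

The point of the toy model is only that heterogeneous single-neuron dynamics (the sinusoidal term) are compatible with a perfectly stable population-level stimulus code.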

Conjunctive coding using Laplace Neural Manifolds.

LNM neurons have receptive fields (RFs) that are a product of a stimulus (what) term and a temporal (when) term. What: The stimulus receptive fields tile the stimulus space, which is a ring in the case of ODR (Oculomotor Delayed Response, where an angular location θ has to be remembered) and a line in the case of VDD (Vibrotactile Delayed Discrimination, where a frequency f has to be remembered). For the ODR task, neurons have bell-shaped tuning curves approximating a circular analogue of a normal distribution, preferentially encoding stimuli at different angles θi which evenly tile the ring (a). For the VDD task, the frequency range from 10 to 30 Hz is tiled by exponentially decaying and ramping cells with a spectrum of different ramp rates 1/fi (b). When: Laplace cells (F) decay exponentially, with different decay rates 1/τj, after the stimulus is presented, resembling temporal context cells (c), while inverse Laplace (F̃) cells fire sequentially, peaking at different times τj, resembling time cells (d). The receptive fields are chosen to evenly tile log time (d, e). When a stimulus is presented at a certain angle θ, the activity of the population as a function of the time T after presentation can be modeled on a cylinder (e). Laplace neurons (modeling temporal context cells) encode this history as a moving edge when seen in log time (f), while the inverse Laplace neurons encode it as a bump (g).
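The two temporal bases can be sketched directly. In the snippet below, the time constants, the time grid, and the sharpness parameter k are illustrative choices; the sequentially peaking fields use a Post-style approximation to the inverse Laplace transform, proportional to (t/τ)^k e^(−k(t/τ−1)), which peaks at t = τ:

```python
import numpy as np

taus = np.geomspace(0.5, 64.0, 20)             # time constants tiling log time (s)
t = np.linspace(0.01, 100.0, 2000)[:, None]    # time since the stimulus (s)

# Laplace cells F: exponential decay at rate 1/tau_j (temporal-context-like)
F = np.exp(-t / taus)

# Inverse Laplace cells F~: sequentially peaking fields. The Post-style form
# (t/tau)^k * exp(-k (t/tau - 1)) has its maximum exactly at t = tau;
# k (an assumed value here) controls how sharply each field peaks.
k = 8
F_tilde = (t / taus) ** k * np.exp(-k * (t / taus - 1))

peak_times = t[np.argmax(F_tilde, axis=0), 0]
print(peak_times)   # each cell peaks near its tau_j, tiling the timeline like time cells
```

Because the taus are geometrically spaced, the peaks tile log time evenly, matching the caption's description of panels d and e.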

Neural trajectories from LNMs can show stable coding as well as temporal dynamics.

The figures show neural trajectories for the ODR (left) and VDD (right) tasks, computed with Laplace Neural Manifold populations simulating temporal context cells (Laplace, F; top row, a and c) and time cells (inverse Laplace, F̃; bottom row, h and j), respectively. The neural activity is projected, via eigendecomposition of the covariance matrices, onto the first two stimulus principal components (Stim PC1 and PC2, x and y axes) and the first temporal principal component (Time PC1, z axis), up to a sign factor. Neural trajectories corresponding to different initial stimulus conditions, the angle θ for the ODR task (left, a and h) and the frequency f for the VDD task (right, c and j), are represented with different colors. The overall covariance matrix of the populations can be understood as a tensor product of covariance matrices across different ‘whats’ - stimulus representations for the ODR (d) and VDD (g) tasks - and different ‘whens’ - temporal coding representations corresponding to the Laplace cells F (b) and the inverse Laplace cells F̃ (i). The corresponding primate PFC neural trajectories, reproduced from Murray et al. (2017), are shown for the ODR (e) and VDD (f) tasks (top), along with the stimulus spaces for both tasks (bottom).

© 2017, PNAS. Parts of panel f are reproduced from Murray et al., 2017 with permission of PNAS. This content is not covered by the CC-BY 4.0 licence and further reproduction of this panel would need permission from the copyright holder.
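The caption above notes that the overall covariance can be understood as a tensor product of ‘what’ and ‘when’ covariances. A minimal NumPy sketch of this factorization follows; all population sizes, tuning widths, and time constants are illustrative assumptions. If each neuron's tuning factorizes into a stimulus term times a temporal term, the joint covariance is the Kronecker product of the two factor covariances, and its spectrum consists of all pairwise products of the factor eigenvalues:

```python
import numpy as np

# Toy 'what' covariance: Von-Mises-tuned cells tiling a ring
angles = np.linspace(0, 2 * np.pi, 12, endpoint=False)           # preferred angles
thetas = np.linspace(0, 2 * np.pi, 240, endpoint=False)
R_what = np.exp(4.0 * np.cos(thetas[:, None] - angles[None, :])) # tuning curves
Sigma_what = np.cov(R_what.T)

# Toy 'when' covariance: Laplace cells with log-spaced time constants
taus = np.geomspace(0.5, 32.0, 10)
t = np.linspace(0.01, 50.0, 200)
R_when = np.exp(-t[:, None] / taus[None, :])
Sigma_when = np.cov(R_when.T)

# Tensor (Kronecker) product model of the joint what-x-when covariance
Sigma_joint = np.kron(Sigma_what, Sigma_when)

# Its eigenvalues are all pairwise products of the factors' eigenvalues,
# which is why the joint principal components inherit the structure of
# both the stimulus and the temporal covariances.
ev_joint = np.sort(np.linalg.eigvalsh(Sigma_joint))
ev_pairs = np.sort(np.outer(np.linalg.eigvalsh(Sigma_what),
                            np.linalg.eigvalsh(Sigma_when)).ravel())
print(np.abs(ev_joint - ev_pairs).max())   # ~0: the spectrum factorizes
```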

The cumulative dimensionality of neural trajectories in LNM cells grows logarithmically with elapsed time.

Both Laplace and inverse Laplace cells tile log time evenly. When calculating the covariance from 0 to T, cells with time constants much larger than T either all fire maximally (Laplace cells, a) or all remain silent (inverse Laplace cells, b). For both choices of temporal basis functions, only cells with time constants smaller than T co-vary meaningfully and contribute to the total covariance, and the number of such cells grows as log T. Explicitly calculating the rank of the covariance matrix of simulated Laplace and inverse Laplace cells for different values of T shows this empirically (c, d) and gives a measure of the dimensionality of their neural trajectories. This is consistent with the growth of cumulative dimensionality of actual neural data (e) collected during working memory (WM) tasks, reproduced from Cueva et al. (2020).

© 2020, PNAS. Panel e is reproduced from Cueva et al., 2020 with permission of PNAS. It is not covered by the CC-BY 4.0 licence and further reproduction of this panel would need permission from the copyright holder.
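The log T dimensionality argument can be checked numerically. In the sketch below, the time constants, sample counts, and the rank tolerance are all illustrative choices; the qualitative outcome is that the numerical rank of the covariance over [0, T] grows slowly with T, because cells with τ much larger than T barely vary over the window:

```python
import numpy as np

taus = np.geomspace(0.1, 1000.0, 60)          # time constants evenly tiling log time

def laplace_cov_rank(T, n_samples=500, rel_tol=1e-6):
    """Numerical rank of the covariance of Laplace (exponentially
    decaying) cells computed over the interval [0, T]."""
    t = np.linspace(0.0, T, n_samples)
    rates = np.exp(-t[:, None] / taus[None, :])
    cov = np.cov(rates.T)
    # count singular values above a tolerance relative to the spectral norm
    return np.linalg.matrix_rank(cov, tol=rel_tol * np.linalg.norm(cov, 2))

Ts = [1, 8, 64, 512]
ranks = [laplace_cov_rank(T) for T in Ts]
print(ranks)   # rank increases slowly as T grows over several octaves
```

The numerical rank depends on the chosen tolerance, but for any fixed tolerance the number of meaningfully co-varying cells is set by how many time constants fall below T, which for log-spaced taus grows as log T.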

Continuous Attractor Neural Networks (CANNs) can be used to construct Laplace Neural Manifolds of what × when information.

a. What: A ring attractor maintains a persistent representation of the presented stimulus by sustaining a bump of activation at that angle. Other kinds of stimulus information could be maintained with appropriate circuits. b. When: A specialized set of line attractors can maintain an edge (Laplace) and a bump (inverse Laplace) to implement temporal receptive fields evenly spaced over log time. Recurrent connections maintain a particular shape of activity across the network: an edge for Laplace temporal receptive fields and a bump for inverse Laplace temporal receptive fields. Connections between layers move the edge/bump at an appropriate speed, resulting in temporal receptive fields that are a function of log time. c. What × When: A series of ring attractors, as in a, crossed with lines, as in b, creates a cylinder. The edge/bump attractor for time supplies global inhibition to the ring attractors coding for stimulus identity. d, e. Paired cylinders, with backbones emulating the edge and the bump respectively, can maintain the activity of the Laplace and inverse Laplace representations, which shift as we progress along the remembered timeline τ, akin to Fig. 3, together forming a Laplace Neural Manifold for time.
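A minimal rectify-and-normalize sketch of the ring attractor in panel a is given below. This is not the paper's circuit: the connection width, gains, and cue are hypothetical, and a subtractive threshold stands in for an explicit global-inhibition population. The point is only that local recurrent excitation plus global inhibition can hold a localized bump at the cued angle after the cue is removed:

```python
import numpy as np

n = 64
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Local excitatory connectivity: strong between cells with nearby preferred angles
W = np.exp(8.0 * (np.cos(theta[:, None] - theta[None, :]) - 1))

def step(r):
    u = W @ r                                   # recurrent excitatory drive
    r = np.maximum(u - 0.7 * u.max(), 0.0)      # threshold ~ global inhibition
    return r / r.sum()                          # divisive normalization

r = np.exp(3.0 * (np.cos(theta - np.pi / 2) - 1))   # transient cue at theta = pi/2
r /= r.sum()
for _ in range(200):                            # delay period: no external input
    r = step(r)

print(theta[np.argmax(r)])                      # bump stays centered near pi/2
```

Without the threshold, repeated convolution with the positive kernel would flatten the bump toward uniform activity; the nonlinearity is what makes the localized state self-sustaining.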

Covariance matrices and principal components for Σwhat (top) and Σwhen (bottom), with the variance explained by the components shown on a log scale.

Top: The stimulus covariance matrix (Σwhat) is shown for the ODR (a1, left) and VDD (b1, right) tasks, where the task variables lie on a ring and a line, respectively. The circular task variable is tiled with circular Gaussian (Von Mises) receptive fields with periodic boundary conditions, producing a periodic covariance matrix, which generates two sinusoidal principal components (a2) that explain the most variance (a3). The line variable is tiled with sets of exponentially decaying and ramping cells, which generates distinct quadrants in the covariance matrix (b1), with a smoothly varying first principal component (b2) (with a discontinuity when switching from decaying to ramping cells) which explains the most variance (b3), and a mostly flat second component. Bottom: The time covariance matrix (Σwhen) is shown for the Laplace (temporal context cells, F) and inverse Laplace (time cells, F̃) populations. With exponentially decaying cells, the covariance matrix for Laplace neurons (c1) has a smoothly decaying activation across the diagonal, which is the direction of maximum variance. The first principal component picks this up, showing a monotonically varying trend (c2), and explains the most variance (c3). Time cells, on the other hand, are sequentially firing cells tiling a timeline, which shows up as a diagonally concentrated covariance matrix that decays from bottom right to top left (d1). The largest principal components (d2) of this covariance matrix are also oscillating sinusoids (similar to a2), but with an envelope that decays from left to right (smaller to larger τj), and they explain the most variance (d3). For both of these temporal representations, the variance explained falls off smoothly with the number of principal components (shown on a logarithmic scale), underscoring the high-dimensional nature of the covariance matrices (c3 and d3).
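The claim that the top two ‘what’ principal components for the ring task are sinusoids follows from the circulant structure of Σwhat: when evenly spaced Von Mises receptive fields tile a ring, the covariance depends only on differences of preferred angles, so its eigenvectors are Fourier modes. A NumPy check (cell count and tuning width kappa are illustrative assumptions):

```python
import numpy as np

n_cells = 32
prefs = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)    # preferred angles
thetas = np.linspace(0, 2 * np.pi, 320, endpoint=False)       # multiple of n_cells
kappa = 2.0                                                    # tuning concentration
R = np.exp(kappa * np.cos(thetas[:, None] - prefs[None, :]))  # Von Mises tuning

# Sigma_what entries depend only on preferred-angle differences -> circulant,
# so its eigenvectors are Fourier modes on the ring and the leading
# (equal-variance) pair are the frequency-1 sinusoids.
Sigma_what = np.cov(R.T)
evals, evecs = np.linalg.eigh(Sigma_what)
top2 = evecs[:, -2:]                                           # top two PCs

c = np.cos(prefs) / np.linalg.norm(np.cos(prefs))
s = np.sin(prefs) / np.linalg.norm(np.sin(prefs))
resid_c = c - top2 @ (top2.T @ c)          # residual after projecting onto top-2
resid_s = s - top2 @ (top2.T @ s)
print(np.linalg.norm(resid_c), np.linalg.norm(resid_s))        # both ~0
```

The two leading eigenvalues come in a degenerate pair (cos and sin carry equal variance), which is why the a3 panel shows the first two components explaining equal amounts of variance.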