1. Computational and Systems Biology

A decentralised neural model explaining optimal integration of navigational strategies in insects

  1. Xuelong Sun (corresponding author)
  2. Shigang Yue (corresponding author)
  3. Michael Mangan (corresponding author)
  1. Computational Intelligence Lab & L-CAS, School of Computer Science, University of Lincoln, United Kingdom
  2. Machine Life and Intelligence Research Centre, Guangzhou University, China
  3. Sheffield Robotics, Department of Computer Science, University of Sheffield, United Kingdom
Research Article
Cite this article as: eLife 2020;9:e54026 doi: 10.7554/eLife.54026

Abstract

Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each functions, and through which they are then coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route-following) and to return directly from novel to familiar terrain (homing), using different aspects of frequency-encoded views that are processed in different neural pathways. We also demonstrate how the Central Complex and Mushroom Body regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild.

Introduction

Central-place foraging insects navigate using a ‘toolkit’ of independent guidance systems (Wehner, 2009) of which the most fundamental are path integration (PI), whereby foragers track the distance and direction to their nest by integrating the series of directions and distances travelled (for reviews see Heinze et al., 2018; Collett, 2019), and visual memory (VM), whereby foragers derive a homing signal by comparing the difference between current and stored views (for reviews see Zeil, 2012; Collett et al., 2013). Neurophysiological and computational modelling studies advocate the central complex neuropil (CX) as the PI centre (Heinze and Homberg, 2007; Seelig and Jayaraman, 2015; Stone et al., 2017), whereas the mushroom body neuropils (MB) appear well suited to assessing visual valence as needed for VM (Heisenberg, 2003; Ardin et al., 2016; Müller et al., 2018). Yet, two key gaps in our understanding remain. Firstly, although current VM models based on the MB architecture can replicate route following (RF) behaviours whereby insects visually recognise the direction previously travelled at the same position (Ardin et al., 2016; Müller et al., 2018), they cannot account for visual homing (VH) behaviours whereby insects return directly to their familiar surroundings from novel locations following a displacement (e.g. after being blown off course by a gust of wind) (Wystrach et al., 2012). Secondly, despite increasing neuroanatomical evidence suggesting that premotor regions of the CX coordinate navigation behaviour (Pfeiffer and Homberg, 2014; Heinze and Pfeiffer, 2018; Honkanen et al., 2019), a theoretical hypothesis explaining how this is achieved by the neural circuitry has yet to be developed. 
In this work, we present a unified neural navigation model that extends the core guidance modules from two (PI and VM) to three (PI, RF, and VH), and integrates their outputs optimally using a biologically realistic ring attractor network in the CX to produce realistic homing behaviours.

The foremost challenge in realising this goal is to ensure that the core guidance subsystems provide sufficient directional information across conditions. Contemporary VM models based on the MBs can replicate realistic RF behaviours in complex visual environments (ant environments: Kodzhabashev and Mangan, 2015; Ardin et al., 2016; bee environments: Müller et al., 2018) but do not generalise to visual homing scenarios whereby the animal must return directly to familiar terrain from novel locations (ants: Narendra, 2007; bees: Cartwright and Collett, 1982; wasps: Stürzl et al., 2016). Storing multiple nest-facing views before foraging, inspired by observed learning walks in ants (Müller and Wehner, 2010; Fleischmann et al., 2016) and flights in bees and wasps (Zeil et al., 1996; Zeil and Fleischmann, 2019), provides a potential solution (Graham et al., 2010; Wystrach et al., 2013), but simulation studies have found this approach to be brittle, with high probabilities of aligning with the wrong memory causing catastrophic errors (Dewar et al., 2014). Moreover, ants released perpendicularly to their familiar route do not generally align with their familiar visual direction as predicted by the above algorithms (Wystrach et al., 2012), but instead move directly back towards the route (Fukushi and Wehner, 2004; Kohler and Wehner, 2005; Narendra, 2007; Mangan and Webb, 2012; Wystrach et al., 2012), which would require a multi-stage mental alignment of views under current models. New computational hypotheses are thus required that can guide insects directly back to their route (often moving perpendicularly to the habitual path), but also allow for the route direction to be recovered (now aligned with the habitual path) upon arrival at familiar surroundings (see Figure 1A ‘Zero Vector’).

Overview of the unified navigation model and its homing capabilities.

(A) The homing behaviours to be produced by the model when displaced either from the nest with no remaining PI home vector (zero vector), or from the nest with a full home vector (full vector). Distinct elemental behaviours are distinguished by coloured path segments, and striped bands indicate periods where behavioural data suggest that multiple strategies are combined. Note that this colour coding of behaviour is maintained throughout the remaining figures to help the reader map function to brain region. (B) The proposed conceptual model of the insect navigation toolkit from sensory input to motor output. Three elemental guidance systems are modelled in this paper: path integration (PI), visual homing (VH) and route following (RF). Their outputs must then be coordinated in an optimal manner appropriate to the context before finally outputting a steering command. (C) The unified navigation model maps the elemental guidance systems to distinct processing pathways: RF: OL -> AOTU -> BU -> CX; VH: OL -> MB -> SMP -> CX; PI: OL -> AOTU -> BU -> CX. The outputs are then optimally integrated in the proposed ring attractor networks of the FB in the CX to generate a single motor steering command. Connections are shown only for the left brain hemisphere for ease of visualisation but in practice are mirrored in both hemispheres. Hypothesised or assumed pathways are indicated by dashed lines whereas neuroanatomically supported pathways are shown by solid lines (a convention maintained throughout all figures). OL: optic lobe, AOTU: anterior optic tubercle, CX: central complex, PB: protocerebral bridge, FB: fan-shaped body (or CBU: central body upper), EB: ellipsoid body (or CBL: central body lower), MB: mushroom body, SMP: superior medial protocerebrum, BU: bulb. Images of the brain regions are adapted from the insect brain database https://www.insectbraindb.org.

With the necessary elemental guidance systems defined, a unifying model must then convert the various directional recommendations into a single motor command appropriate to the context (Cruse and Wehner, 2011; Hoinville et al., 2012; Collett et al., 2013; Webb, 2019). Behavioural studies show that when in unfamiliar visual surroundings (‘Off-Route’) insects combine the outputs of their PI and VH systems (Collett, 1996; Bregy et al., 2008; Collett, 2012) relative to their respective certainties, consistent with optimal integration theory (Legge et al., 2014; Wystrach et al., 2015; Figure 1A ‘Full Vector’). Upon encountering their familiar route, insects readily recognise their surroundings, recover their previous bearing and retrace their familiar path home (Harrison et al., 1989; Kohler and Wehner, 2005; Wystrach et al., 2011; Mangan and Webb, 2012). Thus, the navigation coordination model must possess two capabilities: (a) output a directional signal consistent with the optimal integration of PI and VH when Off-Route; and (b) switch from Off-Route (PI and VH) to On-Route (RF) strategies when familiar terrain is encountered. Mathematical models have been developed that reproduce aspects of cue integration in specific scenarios (Cruse and Wehner, 2011; Hoinville and Wehner, 2018), but to date no neurobiologically constrained network revealing how insects might realise these capabilities has been developed.

To address these questions, a functional modelling approach is followed that extends the current base model described by Webb, 2019 to (a) account for the ability of ants to home from novel locations back to the familiar route before retracing their familiar path for the rest of the journey home, and (b) propose a neurally based model of the central complex neuropil that integrates competing cues optimally and generates a simple steering command that can drive behaviour directly. Performance is benchmarked by direct comparison to behavioural data reported by Wystrach et al., 2012 (showing different navigation behaviours on and off the route), Legge et al., 2014 and Wystrach et al., 2015 (demonstrating optimal integration of PI and VM), and through qualitative comparison to extended homing paths where insects switch between strategies according to the context (Narendra, 2007). Biological realism is enforced by constraining models to the known anatomy of specific brain areas, but where no data exist an exploratory approach is taken to investigate the mechanisms that insects may exploit. Figure 1A depicts the adaptive behaviours observed in animals that we wish to replicate, accompanied by a functional overview of our unified model of insect navigation (Figure 1B) mapped to specific neural sites (Figure 1C).

Results

Mushroom bodies as drivers of rotationally invariant visual homing

For ants to return directly to their familiar route after a sideways displacement (Figure 1A 'Zero Vector') without continuous mental or physical realignment, they require access to rotationally invariant visual cues. Stone et al., 2018 recently demonstrated that binary images of panoramic skylines converted into their frequency components can provide such a rotationally invariant encoding of scenes in a compact form (see Image processing for an introduction to frequency transformations of images). Moreover, they demonstrated that the difference between the rotationally invariant features (the amplitudes of the frequency coefficients) at two locations increases monotonically with distance, producing an error surface reminiscent of the image difference surfaces reported by Zeil et al., 2003 which can guide an agent back to familiar terrain. Here we investigate whether the MB neuropils, shown capable of assessing the visual valence of learned rotationally varying panoramic skylines for RF (Ardin et al., 2016; Müller et al., 2018), might instead assess the visual valence of rotationally invariant properties of views sampled along a familiar route, supporting visual homing.
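
The property relied upon here can be illustrated with a toy example (a sketch only; the actual pipeline operates on binary panoramic skyline images, and the array size is arbitrary). Rotating the viewer circularly shifts the azimuthal samples of a panoramic view, which alters only the phases of its Fourier coefficients while the amplitudes stay fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 1-D panoramic skyline: brightness sampled at 64 azimuthal positions.
view = rng.random(64)

# Rotating the viewer by 17 azimuthal steps circularly shifts the samples.
rotated = np.roll(view, 17)

amp_original = np.abs(np.fft.fft(view))
amp_rotated = np.abs(np.fft.fft(rotated))

# Rotation alters only the phases of the coefficients, so the amplitudes
# form a rotation-invariant signature of the viewing location.
print(np.allclose(amp_original, amp_rotated))  # True
```

The phases, by contrast, shift in proportion to the rotation, which is the rotationally varying signal exploited later for route following.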

To this end, the intensity-sensitive input neurons of Ardin et al., 2016’s MB model are replaced with input neurons encoding rotationally invariant amplitudes (Figure 2A left, blue panel). The network is trained along an 11 m curved route in a simulated world that mimics the training regime of ants in Wystrach et al., 2012 (see Materials and methods and Reproduce visual navigation behaviour for details on the simulated world, image processing, model architecture and training and test regimes). After training, the firing rate of the MB output neuron (MBON) when placed at locations across the environment at random orientations reveals a gradient that increases monotonically with distance from the familiar route area, providing a homing signal sufficient for VH independent of the animal’s orientation (Figure 2D).
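
A minimal sketch of an MB-style novelty detector in the spirit of Ardin et al., 2016 (layer sizes, connection probability and learning rule here are illustrative assumptions, not the model's actual parameters): a fixed random projection produces a sparse Kenyon-cell code, and training depresses the KC-to-MBON synapses of active cells, so familiar inputs drive the MBON weakly:

```python
import numpy as np

rng = np.random.default_rng(1)

N_INPUT, N_KC, SPARSITY = 81, 2000, 0.05  # illustrative sizes

# Fixed random binary projection from amplitude inputs to Kenyon cells.
W_in = (rng.random((N_KC, N_INPUT)) < 0.1).astype(float)

def kc_activity(x):
    """Sparse KC code: only the most strongly driven 5% of cells fire."""
    drive = W_in @ x
    k = int(N_KC * SPARSITY)
    active = np.zeros(N_KC)
    active[np.argsort(drive)[-k:]] = 1.0
    return active

# KC->MBON weights start at 1; training depresses synapses of active KCs,
# so a familiar input later drives the MBON weakly (low novelty signal).
w_out = np.ones(N_KC)

def novelty(x):
    return float(kc_activity(x) @ w_out)

def train(x):
    global w_out
    w_out = w_out * (1.0 - kc_activity(x))  # anti-Hebbian depression

route_view = rng.random(N_INPUT)
novel_view = rng.random(N_INPUT)
train(route_view)
print(novelty(route_view), novelty(novel_view))  # familiar -> low, novel -> high
```

Feeding this detector rotationally invariant amplitudes (rather than raw views) is what makes its output orientation-independent, as the model requires.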

Visual homing in the insect brain.

(A) Neural model of visual homing. Rotationally invariant amplitudes are input to the MB calyx and projected to the Kenyon cells (KCs) before converging onto the MB output neuron (MBON), which seeks to memorise the presented data via reinforcement-learning-based plasticity (for more details see Visual homing) (MB circuit: left panels). SMP neurons measure positive increases in visual novelty (through input from the MBON), which causes a shift between the current heading (green cells) and desired headings (red cells) in the rings of the CX (SMP pathway between MB and CX: centre panel; CX circuit: right panels). The CX-based steering circuit then computes the relevant turning angle. Example activity profiles are shown for an increase in visual novelty, causing a shift in desired heading and a command to change direction. Each model component in all figures is labelled with a shaded star to indicate which aspects are new versus those incorporated from previous models (see legend in upper left). (B) Schematic of the steering circuit function. First, the summed differences between the impact of 45° left and right turns on the desired heading and the current heading are computed. Comparing the resultant activity profiles then allows an appropriate steering command to be generated. (C) Schematic of the visual homing model. When visual novelty drops (t-2 to t-1) the desired heading is an unshifted copy of the current heading, so the current path is maintained, but when the visual novelty increases (t-1 to t) the desired heading is shifted from the current heading. (D) The firing rate of the MBON sampled across locations at random orientations is depicted by the heat-map, showing a clear gradient leading back to the route. The grey curve shows the habitual route along which ants were trained.
RP (release point) indicates the position where real ants in Wystrach et al., 2012 were released after capture at the nest (thus zero-vector) and from which simulations were started. The ability of the VH model to generate realistic homing data is shown by the initial paths of simulated ants, which closely match those of real ants (see the inserted polar plot showing the mean direction and 95% confidence interval), and also by the extended example path shown (red line). Note that once the agent arrives in the vicinity of the route, it appears to meander due to the flattening of the visual novelty gradient and the lack of directional information.

Motor output is then generated by connecting the MBON to a steering network recently located in the fan-shaped body (FB/CBU) of the CX that functions by minimising the difference between the animal’s current and desired headings (Stone et al., 2017). Stone et al., 2017’s key insight was that the anatomically observed shifts of activity in the columnar neurons that encode the desired heading in essence simulate 45° turns left and right, and thus, by comparing the summed differences between the activity profiles of these predicted headings and the current heading, the appropriate turning command can be computed (see Figure 2B). We adopt this circuit as the basis for computing steering commands for all strategies, as suggested by Honkanen et al., 2019.
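
The comparison the steering circuit performs can be sketched as follows (an abstraction of the circuit's function, not its neuron-level implementation; the bump shape and sign convention are assumptions):

```python
import numpy as np

N = 8  # compass columns, one per 45-degree sector
dirs = np.arange(N) * 2 * np.pi / N

def bump(theta):
    """Sinusoidal activity profile peaked at heading theta (rates kept positive)."""
    return np.cos(dirs - theta) + 1.0

def steering(current, desired):
    """Compare the current heading against copies of the desired heading
    pre-shifted by one column, as if simulating 45-degree left/right turns."""
    left = np.roll(bump(desired), 1)    # desired profile after a simulated left turn
    right = np.roll(bump(desired), -1)  # ... after a simulated right turn
    cur = bump(current)
    # Whichever simulated turn registers better with the current heading wins;
    # for cosine bumps the result is proportional to sin(current - desired).
    return float(np.sum(cur * left) - np.sum(cur * right))

print(steering(np.pi / 4, 0.0) > 0, abs(steering(1.0, 1.0)) < 1e-9)  # True True
```

The command vanishes when the two headings agree and its sign selects the shorter turn direction, which is the behaviour the model requires of the CPU1-based circuit.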

In the proposed VH model the current heading input to the steering circuit uses the same celestial global compass used in Stone et al., 2017’s PI model. Insects track their orientation through head-direction cells (Seelig and Jayaraman, 2015) whose concurrent firing pattern forms a single bump of activity that shifts around the ring as the animal turns (measured through local visual (Green et al., 2017; Turner-Evans et al., 2017), global visual (Heinze and Homberg, 2007) and proprioceptive (Seelig and Jayaraman, 2015) cues). Neuroanatomical data (Kim et al., 2017; Turner-Evans et al., 2019; Pisokas et al., 2019) support theoretical predictions (Cope et al., 2017; Kakaria and de Bivort, 2017) that the head-direction system of insects follows a ring attractor (RA) connectivity pattern characterised by local excitatory interconnections between direction-selective neurons and global inhibition. In this work, the global compass RA network is not modelled directly; rather, we simulate its sinusoidal activity profile in a ring of I-TB1 (locusts; Δ7 in flies) neurons found in the protocerebral bridge (PCB/PB) (Figure 2A green ring) (see Current headings).

A desired heading is then generated by copying the current activity pattern of the global compass neurons to a new neural ring, which we speculate could reside in either a distinct subset of I-TB1 neurons (Beetz et al., 2015) or in the FB. Crucially, the copied activity profile also undergoes a leftward shift proportional to any increase in visual novelty (a similar shifting mechanism has been proposed for the head-direction system (Green et al., 2017; Turner-Evans et al., 2017)), which we propose is measured by neurons in the superior medial protocerebrum (SMP) (Aso et al., 2014; Plath et al., 2017) (see Figure 2A centre and activity of red rings). The result is a mechanism that recommends changing direction when the agent moves away from familiar terrain (visual novelty increases) but recommends little change to the current heading when the visual novelty is decreasing (see Figure 2C for a schematic of the VH mechanism). We note that there is a distinction between a ring network, which describes a group of neurons whose pattern of activity forms a circular representation regardless of actual physical arrangement, and RA networks, which follow a specific connectivity pattern (all modelled RAs are labelled in figures). Taken together, the model iteratively refines its orientation to descend the visual novelty gradient and thus recover familiar terrain (see Figure 2A for the full model).
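
The shift rule above can be sketched directly (the shift size of one column per step is an illustrative assumption; the model scales the shift with the novelty increase):

```python
import numpy as np

N = 8
SHIFT = 1  # one column = 45 degrees; fixed size is an assumption

def desired_heading(compass_bump, novelty_now, novelty_prev):
    """Copy of the current-heading bump, shifted left only when novelty rises."""
    if novelty_now > novelty_prev:
        return np.roll(compass_bump, SHIFT)  # recommend changing direction
    return compass_bump.copy()               # familiarity improving: hold course

# Compass activity peaked at column 0 (i.e. the current heading).
bump = np.exp(np.cos(np.arange(N) * 2 * np.pi / N))
held = desired_heading(bump, novelty_now=0.2, novelty_prev=0.5)
turn = desired_heading(bump, novelty_now=0.9, novelty_prev=0.5)
print(np.argmax(held), np.argmax(turn))  # 0 1
```

Paired with the steering circuit, a shifted desired heading produces a turn, while an unshifted copy produces none, which is exactly the gradient-descent scheme of Figure 2C.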

Figure 2D demonstrates that the proposed network accurately replicates both the directed initial paths as in Wystrach et al., 2012 (see the inserted black arrow), and extended homing paths as in Narendra, 2007 observed in ants displaced to novel locations perpendicular to their familiar routes. We note that upon encountering the route the model is unable to distinguish the direction in which to travel and thus meanders back and forth along the familiarity valley, unlike real ants, demonstrating the need for additional route recognition and recovery capabilities.

Optimally integrating visual homing and path integration

We have demonstrated how ants could use visual cues to return to the route in the absence of PI, but in most natural scenarios (e.g. displacement by a gust of wind) ants will retain a home vector readout offering an alternative, and often conflicting, guidance cue to that provided by VH. In such scenarios, desert ants strike a compromise by integrating their PI and VH outputs in a manner consistent with optimal integration theory, weighting VH relative to the familiarity of the current view (Legge et al., 2014) and PI relative to the home vector length (a proxy for directional certainty) (Wystrach et al., 2015).

Various ring-like structures of the CX represent directional cues as bumps of activity, with the peak defining the specific target direction and the spread providing a mechanism to encode cue certainty as required for optimal integration (for an example, see the increased spread of head-direction cell activity when only proprioceptive cues are present (Seelig and Jayaraman, 2015)). Besides their excellent properties for encoding the animal’s heading, ring attractors also provide a biologically realistic means to optimally weight cues represented in this format (Touretzky, 2005; Mangan and Yue, 2018) without the need for dedicated memory circuits to store the means and uncertainties of each cue.

Thus we introduce a pair of integrating ring-attractor networks to the CX model (Figure 3A grey neural rings: RA_L and RA_R) that take as input the desired headings from the above proposed VH model (red neural rings: VH_L and VH_R) and Stone et al., 2017’s PI model (orange neural rings: PI_L and PI_R) and output combined Off-Route desired heading signals that are sent to the steering circuits (blue neural rings: CPU_L and CPU_R). Stone et al., 2017 mapped the home vector computation to a population of neurons (CPU4) owing to their dual inputs from direction-selective compass neurons (I-TB1) and motion-sensitive speed neurons (TN2), as well as their recurrent connectivity patterns facilitating accumulation of activity as the animal moves in a given direction. Wystrach et al., 2015 showed that the certainty of PI automatically scales with the home-vector length owing to the accumulating effect of the memory neurons, which correlates with directional uncertainty, and thus the output of the PI network is directly input to the ring attractor circuits. In our implementation the VH input has a fixed height and width profile and influences the integration through tuning neurons (TUN) (see the plotted activation function in Figure 3B and Optimal cue integration) that we suggest reside in the SMP and modulate the PI input to the integration network. Altering the weighting in this manner, rather than by scaling the VH input independently, allows VH to dominate the integrated output at sites with high visual familiarity even in the presence of a large home vector, without requiring large stored activity. We note, however, that both approaches remain feasible and further neuroanatomical data are required to clarify which, if either, mechanism is employed by insects.
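
The encoding property that makes ring attractors suitable integrators can be checked without simulating the attractor dynamics: summing two sinusoidal bumps whose amplitudes stand in for cue certainties yields a bump peaked at the certainty-weighted vector sum of the cue directions (a sketch of the encoding only; bump amplitudes, directions and the idealised negative lobes are illustrative):

```python
import numpy as np

N = 16
dirs = np.arange(N) * 2 * np.pi / N

def bump(theta, certainty):
    """Cue encoded as a cosine bump; amplitude plays the role of certainty."""
    return certainty * np.cos(dirs - theta)

theta_pi, w_pi = np.deg2rad(270), 3.0  # strong home-vector (PI) cue
theta_vh, w_vh = np.deg2rad(135), 1.0  # weaker visual homing (VH) cue

combined = bump(theta_pi, w_pi) + bump(theta_vh, w_vh)

# The sum of two cosines is another cosine, peaked at the certainty-weighted
# vector sum of the two cue directions (the optimal combined estimate here).
vec = w_pi * np.exp(1j * theta_pi) + w_vh * np.exp(1j * theta_vh)
expected = np.angle(vec) % (2 * np.pi)
decoded = np.angle(np.sum(combined * np.exp(1j * dirs))) % (2 * np.pi)
print(np.isclose(decoded, expected))  # True
```

As the home-vector length (here `w_pi`) recedes, the combined peak slides towards the VH direction, mirroring the shift shown in Figure 3B.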

Figure 3 with 1 supplement see all
Optimal cue integration in the CX.

(A) Proposed model for optimally integrating the PI and VH guidance systems. In each hemisphere, ring attractors (RAs) (grey neural rings) (speculatively located in the FB/CBU) receive the corresponding inputs from PI (orange neural rings) and VH (red neural rings), with the outputs sent to the corresponding steering circuits (blue neural rings). Integration is weighted by the visual-novelty-tracking tuning neuron (TUN) whose activation function is shown in the leftmost panel. (B) Examples of optimal integration of PI and VH headings for two PI states, with the peak stable state (grey dotted activity profile in the integration neurons) shifting towards VH as the home vector length recedes. (C) Replication of the optimal integration studies of Wystrach et al., 2015 and Legge et al., 2014. Simulated ants are captured at various points (0.1 m, 1 m, 3 m and 7 m) along their familiar route (grey curve) and released at release point 1 (RP1), thus with the same visual certainty but with different PI certainties, as in Wystrach et al., 2015 (see thick orange arrow). The left polar plot shows that the initial headings of simulated ants increasingly weight their PI system (270°) over their VH system (135°) as the home vector length increases and PI directional uncertainty drops. Simulated ants are also transferred from a single point 1 m along their familiar route to ever more distant release points (RP1, RP2, RP3), thus with the same PI certainty but increasing visual uncertainty, as in Legge et al., 2014 (see thick red arrow). The right polar plot shows that the initial headings of simulated ants increasingly weight PI (270°) over VH (135°) as visual certainty drops (see Reproduce the optimal cue integration behaviour for details). (D) Example homing paths of the independent and combined guidance systems displaced from the familiar route (grey) to a fictive release point (RP).

Figure 3C shows the initial headings produced by the model, which replicate the trends reported in cue-conflict experiments by Legge et al., 2014 and Wystrach et al., 2015 when the uncertainty of the PI and VH cues was altered independently. Example extended paths of the independent PI and VH models and the ring-attractor-based combined PI and VH model are plotted in Figure 3D, with the combined model showing the most ant-like behaviour (Kohler and Wehner, 2005; Mangan and Webb, 2012) by initially following predominantly the home-vector direction before switching to visual homing as the home-vector length drops, leading the simulated ant back to familiar terrain. Note that the PI-only and PI+VH models are drawn back towards the fictive nest sites indicated by their home vectors, which if left to run would likely result in emergent search-like patterns, as in Stone et al., 2017. Moreover, upon encountering the route the VH-based models (VH-only and PI+VH) are unable to distinguish the direction in which to travel and hence again meander back and forth along the valley of familiarity (see Figure 2D and Figure 3D), further demonstrating the need for a route recovery mechanism.

Route following in the insect brain

The model described above can guide insects back to their familiar route area, but lacks the means to recover the route direction upon arrival, as observed in homing insects. This is not surprising, as VH relies upon translationally varying but rotationally invariant information whereas RF requires rotationally varying cues. Thus we introduce a new elemental guidance system that makes use of the rotationally varying phase coefficients of the frequency information derived from the panoramic skyline, which track the orientation of specific features of the visual surroundings (see Materials and methods). Here, we ask whether, by associating the rotationally invariant amplitudes (shown useful for place recognition) with the rotationally varying phases experienced at those locations, insects might recover the familiar route direction.

Neuroanatomical data with which to constrain a model remain sparse, and therefore a standard artificial neural network (ANN) architecture is used to investigate the utility of phase-based route recovery, with biological plausibility discussed in more detail below. A three-layer ANN was trained to associate the same 81 rotationally invariant amplitudes as used in the VH model with the rotationally varying phase value of a single frequency coefficient experienced when travelling along the habitual route, which we encode in an eight-neuron ring (see Figure 4A and Route following for a detailed model description). Thus, when the route is revisited the network should output the orientation that the phase converged upon when at the same location previously, which we note is not necessarily aligned with the actual heading of the animal (e.g. it may track the orientation to a vertical bar (Seelig and Jayaraman, 2015)). Realignment is possible using the same steering mechanism as described above, but here it seeks to reduce the offset between the current phase readout (e.g. a local compass locked onto visual features of the animal’s surroundings) and the recalled phase readout from the ANN.
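
The associative function the ANN performs can be sketched with a nearest-neighbour stand-in (the trained three-layer network is replaced here by a lookup for brevity, and all data are synthetic): amplitude vectors index stored ring patterns that population-encode the phase recalled for that location:

```python
import numpy as np

rng = np.random.default_rng(2)
N_RING = 8
ring_dirs = np.arange(N_RING) * 2 * np.pi / N_RING

def encode(phase):
    """Population-encode a phase as a cosine bump on the 8-neuron output ring."""
    return np.cos(ring_dirs - phase)

def decode(ring):
    """Population-vector readout of the encoded phase."""
    return np.angle(np.sum(ring * np.exp(1j * ring_dirs)))

# Stand-in for the trained ANN: recall the ring pattern stored for the most
# similar amplitude vector seen along the route (a nearest-neighbour lookup).
route_amps = rng.random((50, 81))             # 50 route locations x 81 amplitudes
route_phases = rng.uniform(0, 2 * np.pi, 50)  # phase coefficient at each location
memory = np.array([encode(p) for p in route_phases])

def recall(amps):
    nearest = np.argmin(np.linalg.norm(route_amps - amps, axis=1))
    return memory[nearest]

# Revisiting a route location (with mild sensor noise) recovers its stored phase.
probe = route_amps[20] + rng.normal(0.0, 0.01, 81)
err = np.angle(np.exp(1j * (decode(recall(probe)) - route_phases[20])))
print(abs(err) < 1e-6)  # True
```

The circular difference `err` plays the role of the offset that the steering circuit reduces when realigning with the route.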

Phase-based route following.

(A) Neural model. The visual pathway from the optic lobe via the AOTU and bulb to the EB of the CX is modelled by a fully connected artificial neural network (ANN) with one hidden layer. The input layer receives the amplitudes of the frequency-encoded views (as for the MB network) and the output layer is an 8-neuron ring whose population encoding represents the desired heading to which the agent should align. (B) Behaviours. Blue and red arrows in the inserted polar plot (top left) display the mean directions and 95% confidence intervals of the initial headings of real (Wystrach et al., 2012) and simulated ants released at the start of the route (-7,-7), respectively. Dark blue curves show the routes followed by the model when released at five locations close to the start of the learned path. The overlaid fan-plots indicate the circular statistics (the mean direction and 95% confidence interval) of the homing directions recommended by the model when sampled across heading directions (20 samples at 18° intervals). Data for entire rotations are shown on the right for specific locations, with the upper plot, sampled at (1.5,-3), demonstrating accurate phase-based tracking of orientation, whereas the lower plot, sampled at (-2.5,-3.5), shows poor tracking performance and hence produces a wide fan-plot.

We speculate that the most likely neural pathway for the new desired and current headings is from the optic lobe via the anterior optic tubercle (AOTU) and bulb (BU) to the EB (CBL) of the CX (Homberg et al., 2003; Omoto et al., 2017) (see Figure 4A), with the desired heading terminating in the EB, whereas the current heading continues to the PB, forming a local compass that sits beside the global compass used by the PI and VH systems. This hypothesis is further supported by the recently identified parallel pathways from the OL via the AOTU to the CX in Drosophila (Timaeus et al., 2020). That is to say, firstly, there are two parallel pathways forming two compass systems: the global (here based on celestial cues) and the local (based on terrestrial cues) compasses, modelled by the activation of I-TB1 and II-TB1 neurons, respectively. Four classes of CL1 neurons (or E-PG and P-EG neurons) (Heinze and Homberg, 2009; Xu et al., 2020) and three classes of independent TB1 neurons (Beetz et al., 2015) have been identified that provide potential sites for the parallel recurrent loops encoding independent local and global compasses. Secondly, the desired heading, which is the recalled phase of a specific view, is generated through neural plasticity from the AOTU to the BU and from the BU to the EB, which is in line with recent evidence of associative learning between the R-neurons transmitting visual information from the BU to the EB and the compass neurons (CL1a or E-PG neurons) that receive input from the EB (Kim et al., 2019; Fisher et al., 2019). This kind of learning endows the animal with the ability to flexibly adapt its local compass, and hence its desired navigational orientation, according to the changing visual surroundings. Hanesch et al., 1989 reported a direct pathway from EB to FB neurons, which we model to allow comparison of the local compass activity (II-TB1) with the desired heading.
However, we note that this connectivity has not been replicated in more recent studies (Heinze and Homberg, 2008) and thus further investigation of potential pathways is required.

The RF model accurately recovers the initial route heading in a similar manner to real ants returned to the start of their familiar route (Wystrach et al., 2012; Figure 4B, insert), and then follows the remaining route in its entirety back to the nest, again reflecting ant data (Kohler and Wehner, 2005; Mangan and Webb, 2012; Figure 4B). The fan-plots displayed in the background of Figure 4B show the preferred homing direction output by the ANN when rotated on the spot across locations in the environment. The noise in the results is due to errors in tracking performance (see examples in Figure 4B right), yet as these errors are largely confined to the magnitude, the steering circuit still drives the ant along the route. We note that this effect is primarily a function of the specific frequency transformation algorithm used, which we borrow from computer graphics to investigate the utility of frequency encoding of visual information. The biological realism of such transforms and their potential implementation in the insect visual system are addressed in the Discussion. The displaced routes also highlight the danger of employing RF alone, which often shadows rather than converges with the route when displaced sideways, further demonstrating the necessity for integration with the Off-Route strategies that promote route convergence.

Route recovery through context-dependent modulation of guidance systems

Homing insects readily recognise familiar route surroundings, recover their bearing, and retrace their habitual path home, irrespective of the status of other guidance systems such as PI. Replicating such context-dependent behavioural switching under realistic conditions is the final task for the proposed model. The visual novelty measured by the MBON provides an ideal signal for context switching, with low output close to the route, when RF should dominate, versus high output further away, when PI and VH should be engaged (see Figure 2D). Also, the fact that Off-Route strategies (PI and VH) compute their turning angles with reference to the global compass, whereas the On-Route RF strategy is driven with reference to a local compass, provides a means to modulate their inputs to the steering circuit independently. This is realised through a non-linear weighting of the On- and Off-Route strategies, which we propose acts through the same SMP pathway as the VH model (see the SN1 and SN2 neurons in Figure 5A) (see Context-dependent switch for neuron details and Figure 6 for a force-directed graph representation of the final unified model).
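
The switching logic can be sketched as a threshold on the MBON novelty signal (the threshold value and unit gains are illustrative assumptions; in the model the switch point is set by the SN1/SN2 activation functions):

```python
def context_switch(novelty, threshold=0.5):
    """SN1/SN2-style gating: exactly one of On- and Off-Route is enabled.

    Returns (on_route_gain, off_route_gain); the threshold is illustrative.
    """
    sn2 = 1.0 if novelty > threshold else 0.0  # high visual novelty -> Off-Route
    sn1 = 1.0 - sn2                            # SN1 is always the inverse of SN2
    return sn1, sn2

def steering_command(rf_cmd, pi_vh_cmd, novelty):
    """Gate the On-Route (RF) and Off-Route (PI+VH) steering recommendations."""
    on_gain, off_gain = context_switch(novelty)
    return on_gain * rf_cmd + off_gain * pi_vh_cmd

# Far from the route, novelty is high and PI+VH steer; near it, RF takes over.
print(steering_command(rf_cmd=10.0, pi_vh_cmd=-3.0, novelty=0.9))  # -3.0
print(steering_command(rf_cmd=10.0, pi_vh_cmd=-3.0, novelty=0.1))  # 10.0
```

Because the two gated pathways reference different compasses (global for PI/VH, local for RF), switching the gains is sufficient to hand control between them without mixing their reference frames.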

Unified model realising the full array of coordinated navigational behaviours.

(A) Context-dependent switching is realised using two switching neurons (SN1, SN2) with mutually exclusive firing states (one active while the other is inactive), allowing coordination between On- and Off-route strategies driven by the instantaneous visual novelty output by the MB. Connectivity and activation functions of the SMP neurons are shown on the left side of the panel. (B) Activation history of the SN1, SN2 and TUN neurons (the latter demonstrating the instantaneous visual-novelty readout of the MB) during the simulated displacement trials. (C) Paths generated by the unified model under control of the context-dependent switch circuit during simulated FV (solid line) and ZV (dashed line) displacement trials.

The detailed neural connections of the proposed model.

(A): The detailed neural connections of the navigation coordination system. (B): The neural connections of the route following network. The input layer is fully connected to the hidden layer, as is the hidden layer to the output layer. (C): The network generating the visual homing memory. (D): The detailed neural connections of the ring attractor network for optimal cue integration.

The activity of the proposed switching circuit and the paths that it generates in simulated zero-vector (ZV) and full-vector (FV) displacement trials are shown in Figure 5B and C respectively. In the full-vector trial (Figure 5B (upper), Figure 5C (solid line)), as visual novelty is initially high (see the high TUN activity until step 78), SN2 is activated, which enables the Off-route strategies (PI and VH), while SN1 (always the inverse of SN2) is deactivated, which disables the On-route strategies. Note that it is the integration of PI and VH that generates the direct path back to the route area in the FV trial: PI recommends moving at a 45° bearing but VH prevents the ascent of the visual novelty gradient that this would cause, with the compromise being a bearing closer to 90°, that is, toward the route. As the route is approached the visual novelty decreases (again see TUN activity), until at step 78 SN2 falls below threshold and deactivates the Off-route strategies while conversely SN1 activates and engages the On-route strategies. After some initial flip-flopping while the agent converges on the route (steps 78–85), RF becomes dominant and drives the agent back to the nest via the familiar path. In the zero-vector trial (Figure 5B (lower), Figure 5C (dashed line)), the Off-route strategies (here only VH) largely dominate (with some false-positive route recognition, e.g. step 60) until the route is recovered (step 93), at which point the same flip-flopping during route convergence occurs (steps 93–96), followed by RF alone, which returns the agent to the nest via the familiar path.
It should be noted that the data presented utilised different parameterisations of the TUN neuron that weights PI against VH (see Table 1 for parameter settings across trials and the Discussion for insights into model limitations and potential extensions), yet the results presented nevertheless provide a proof-of-principle demonstration that the proposed unified navigation model can fulfil all of the criteria defined for replication of the key adaptive behaviours observed in insects (Figure 1A).

Table 1
The detailed parameters settings for the simulations.
| Para. | Visual homing | Optimal integration tuning PI | Optimal integration tuning VH | Route following | Whole model ZV | Whole model FV |
|---|---|---|---|---|---|---|
| ThrKC (Equation 14) | 0.04 | 0.04 | 0.04 | 0.04 | 0.04 | 0.04 |
| ηKC2MBON (Equation 16) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| kVH (Equation 19) | 2.0 | 2.0 | 2.0 | / | 0.5 | 0.5 |
| kTUN (Equation 28) | / | 0.1 | 0.1 | / | 0.025 | 0.0125 |
| ThrSN2 (Equation 32) | / | / | / | / | 2.0 | 3.0 |
| kmotor (Equation 35) | 0.125 | 0.125 | 0.125 | 0.125 | 0.375 | 0.375 |
| SL (cm/step) (Equation 39) | 4 | 4 | 4 | 4 | 8 | 8 |
| initial heading (deg) | 0~360 | 0~360 | 0~360 | 0 / 180 | 90 | 0 |

Discussion

This work addresses two gaps in the current understanding of insect navigation: what are the core visual guidance systems required by the insect navigational toolkit? And how are they coordinated by the insect brain?

We propose that the insect navigation toolkit (Wehner, 2009; Webb, 2019) should be extended to include independent visual homing (VH) and route following (RF) systems (see Figure 1B for the updated Insect Navigation Toolkit). We show how VH and RF can be realised using frequency encoding of panoramic skylines to separate information into rotationally invariant amplitudes for VH and rotationally varying phases for RF. The current model utilises a frequency-encoding schema from computer graphics, but behavioural studies support the use of spatial frequency by bees (Horridge, 1997; Lehrer, 1999), with neurons in the lobula of dragonflies (O'Carroll, 1993) and locusts (James and Osorio, 1996) found to have receptive fields akin to basis functions, providing a mechanism by which to extract the frequency information necessary for the local compass system. Our model allows for this information extraction process to happen at multiple stages ahead of its usage in central learning sites such as the MBs, opening the possibility for its application in either the optic lobes or subsequent pathways through regions such as the AOTU. Further neurophysiological data are required to pinpoint both the mechanisms and sites of this processing in insects. Similarly, following Stone et al., 2017, the global compass signal directly mimics the firing pattern of compass neurons in the CX without reference to sensory input, but Gkanias et al., 2019 recently presented a plausible neural model of the celestial compass processing pipeline that could be easily integrated into the current model to fill this gap. Follow-on neuroanatomically constrained modelling of the optic lobes presents the most obvious extension of this work, allowing the neural pathway from sensory input to motor output signal to be mapped in detail.
Conversely, modelling the conversion of direction signals into behaviour via motor-generating mechanisms such as central pattern generators (see Steinbeck et al., 2020) will then allow closure of the sensory-motor loop.

Visual homing is modelled on neural circuits found along the OL-MB-SMP pathway (Ehmer and Gronenberg, 2002; Gronenberg and López-Riquelme, 2004) before terminating in the CX steering circuit (Stone et al., 2017), and is shown to be capable of producing realistic homing paths. In this schema, the MBs do not measure the rotationally varying sensory valence recently used to replicate RF (Ardin et al., 2016; Müller et al., 2018), but rather the spatially varying (yet rotationally invariant) sensory valence more suited to gradient descent strategies such as visual homing (Zeil et al., 2003; Stone et al., 2018) and other taxis behaviours (Wystrach et al., 2016). This is in line with the hypothesis forwarded by Collett and Collett, 2018, which suggests that the MBs output 'whether' the current sensory stimulus is positive or negative and the CX then adapts the animal's heading, the 'whither', accordingly.

Route following is shown to be possible through learned associations between the amplitudes (i.e. the place) and the phase (the orientation) experienced along a route, allowing realignment when later at a proximal location. This kind of neural plasticity-based correlation between the visual surroundings and orientation fits with data recently observed in fruit flies (Kim et al., 2019; Fisher et al., 2019). These studies provide a neural explanation for the animal's ability to make flexible use of visual information to navigate, while the proposed model gives a detailed implementation of such an ability in the context of the insect's route following schema. Neurophysiological evidence suggests that the layered visual pathway from the OL via the AOTU and BU to the EB of the CX (Barth and Heisenberg, 1997; Homberg et al., 2003; Omoto et al., 2017), with its suggested neural plasticity properties (Barth and Heisenberg, 1997; Yilmaz et al., 2019), provides a possible neural substrate, but further analysis is needed to identify the circuit structures that might underpin the generation of the RF desired heading. In addition to the desired heading, the current heading of RF is derived from the local compass system anchored to the animal's immediate visual surroundings. This independent compass system may be realised in parallel to the global compass system in a similar but independent circuit (Heinze and Homberg, 2009; Beetz et al., 2015; Xu et al., 2020). Our model therefore hypothesises that insects possess different compass systems based on varied sensory information, and further that insects possess the capability (via CX-based RAs) to coordinate their influence optimally according to the current context. Since the global compass, the local compass and the desired heading of RF share the same visual pathway (OL->AOTU->BU->CX), distinct input and output patterns along this pathway may be found by future neuroanatomical studies.
In addition, in the proposed model the activations of the current and desired headings of RF overlap in the EB, and therefore the separation of activation profiles representing each output (e.g. following methods in Seelig and Jayaraman, 2015) presents another meaningful topic for future neurophysiological research.

Closed-loop behavioural studies in which the spatial frequency information of views is altered (similar to Paulk et al., 2015), coincident with imaging of key brain areas (Seelig and Jayaraman, 2013), offer a means to investigate which neural structures make use of which visual information. Complementary behavioural experiments could verify the distinct VH and RF systems by selectively blocking the proposed neural pathways, with impacts on behaviour predicted by Figure 2C and Figure 4B, respectively. Ofstad et al., 2011 report that visual homing abilities are lost in fruit flies with a blocked EB of the CX but not a blocked MB, which is predicted by our model if animals have learned target-facing views to which they can later align using their RF guidance system. Analysis of the animal's orientation during learning is thus vital to unpacking precisely how the above results arise.

With the elemental guidance strategies defined, we propose that their outputs are coordinated through the combined action of the MBs and CX. Specifically, we demonstrate that a pair of ring attractor networks with connectivity patterns similar to the CX-based head-direction system (Kim et al., 2017; Turner-Evans et al., 2019; Pisokas et al., 2019) is sufficient for optimally weighting multiple directional cues from the same frame of reference (e.g. VH and PI). The use of a pair of integrating RAs is inspired by the column structure of the FB, which has 16 neural columns divided into two groups of 8 that each represent the entire 360° space. The optimal integration of PI and VH using a ring attractor closely matches the networks theorised to govern optimal directional integration in mammals (Jeffery et al., 2016) and supports the hypothesis of their conserved use across animals (Mangan and Yue, 2018). Optimality is secured either through adapting the shape of the activity profile of the input, as is the case for PI, which naturally scales with distance, or by using a standardised input activity profile with cross-inhibition of competing cues, as is the case for VH in the model. The latter schema avoids the need for ever-increasing neural activity to maintain relevance.

To replicate the suite of navigational behaviours described in Figure 1, our network includes three independent ring attractor networks: the global compass head-direction system (Pisokas et al., 2019); the local compass head-direction system (Seelig and Jayaraman, 2015; Kim et al., 2017; Turner-Evans et al., 2019); and an Off-route integration system (modelled here). We speculate that central-place foraging insects likely also possess a similar integration network for On-route cues (not modelled here), bringing the total number of RAs to four. The utility of RAs for head-direction tracking arises from their ability to converge activity to a single bump that is easily shifted by sensory input and maintained in the absence of stimulation. In addition, RAs possess the beneficial property that they spontaneously weight competing sensory information, stored as bumps of activity, in an optimal manner. Thus, there are excellent computational reasons for insects to invest in such neural structures. Yet, it should be clear that the model proposed here represents a proof-of-concept demonstrating that the underlying network architectures already mapped to the CX (directional cues encoded as bumps of activity (Seelig and Jayaraman, 2015; Heinze and Homberg, 2007); various lateral shifting mechanisms (Stone et al., 2017; Green et al., 2017; Turner-Evans et al., 2017); RAs (Kim et al., 2017; Turner-Evans et al., 2019; Pisokas et al., 2019)) are sufficient to generate adaptive navigation, but further studies are required to critique and refine the biological realism of this hypothesis.

While this assemblage recreates optimal integration of strategies that share a compass system, it does not easily extend to the integration of directional cues from other frames of reference (e.g. VH and PI reference the global compass versus RF, which references a local compass). Indeed, as the CX steering network seeks to minimise the difference between a current and a desired heading, calibrating input signals from different frames of reference would require a similar calibration of their respective compass systems. Rather, the proposed model incorporates a context-dependent non-linear switching mechanism driven by the output of the MB that alternates between strategies: global compass-based PI and VH are triggered when the surroundings are unfamiliar, whereas local compass-based RF is engaged in familiar surroundings. In summary, the adaptive behaviour demonstrated is the result of distinct guidance systems that converge in the CX, with their relative weighting defined by the output of the MB. This distributed architecture is reminiscent of mechanisms found in the visual learning of honeybees (Plath et al., 2017), and supports the hypothesis that the CX is the navigation coordinator of insects (Heinze, 2017; Honkanen et al., 2019) but shows how the MB acts as a mediator allowing the CX to generate optimal behaviour according to the context.

The resultant unified model of insect navigation (Figure 1B and C) represents a proof-of-principle framework for how insects might coordinate core navigational behaviours (PI, VH and RF) under standard field manipulations (Figure 1A). Neuroanatomical data have been drawn from across insect classes (see Table 2) to ensure neural realism where possible, with performance compared to ant navigation behaviour in a single simulated desert ant habitat. The framework can be easily extended to new navigation behaviours observed in other insects, from idiothetic PI (Kim and Dickinson, 2017) to straight-line following (El Jundi et al., 2016) to migrations (Reppert et al., 2016), as well as more nuanced strategies that flexibly use directional cues from different sensory modalities (Wystrach et al., 2013; Schwarz et al., 2017; Dacke et al., 2019). A priority for future work should be the investigation of the differences and commonalities in the sensory systems, neural structures and ecology of different insect navigators and how they impact behaviour, allowing for extension and refinement of the framework for different animals. Complementary stress-testing of models across different environments in both simulation and robotic studies is also required to ensure that model performance generalises across species and habitats and to provide guidance to researchers seeking the sensory, processing and learning circuits underpinning these abilities.

Table 2
The details of the main neurons used in the proposed model.
| Name | Function | Num | Network | Brain region | Neuron in species (e.g.) | Reference |
|---|---|---|---|---|---|---|
| I-TB1 | Global compass current heading | 8 | Ring attractor | CX | TB1 in Schistocerca gregaria and Megalopta genalis | Heinze and Homberg, 2008; Stone et al., 2017 |
| II-TB1 | Local compass current heading | 8 | Ring attractor | CX | Δ7 in Drosophila | Franconville et al., 2018 |
| S I-TB1 | Copy of shifted global heading | 8 | Ring | CX | No data | / |
| VH-L | VH desired heading left | 8 | Ring | CX | No data | / |
| VH-R | VH desired heading right | 8 | Ring | CX | No data | / |
| PI-L | PI desired heading left | 8 | Ring | CX | CPU4 in Schistocerca gregaria and Megalopta genalis | Heinze and Homberg, 2008; Stone et al., 2017 |
| PI-R | PI desired heading right | 8 | Ring | CX | P-F3N2v in Drosophila | Franconville et al., 2018 |
| RF-L | RF desired heading left | 8 | Ring | CX | No data | / |
| RF-R | RF desired heading right | 8 | Ring | CX | No data | / |
| RA-L | Cue integration left | 8 | Ring attractor | CX | No data | / |
| RA-R | Cue integration right | 8 | Ring attractor | CX | No data | / |
| CPU1 | Comparing the current and desired heading | 16 | Steering circuit | CX | CPU1 in Schistocerca gregaria and Megalopta genalis; PF-LCre in Drosophila | Heinze and Homberg, 2008; Stone et al., 2017; Franconville et al., 2018 |
| vPN | Visual projection | 81 | Associative learning | MB | MB neurons in Drosophila | Aso et al., 2014 |
| KCs | Kenyon cells | 4000 | Associative learning | MB | Camponotus | Ehmer and Gronenberg, 2004 |
| MBON | Visual novelty | 1 | Associative learning | MB | Apis mellifera | Rybak and Menzel, 1993 |
| TUN | Tuning weights from PI to RA | 1 | / | SMP | No data | / |
| SN1 | Turn on/off the RF output to CPU1 | 1 | Switch circuit | SMP | No data | / |
| SN2 | Turn on/off the RA output to CPU1 | 1 | Switch circuit | SMP | No data | / |

Materials and methods

All source code related to this publication is available for download at https://github.com/XuelongSun/InsectNavigationToolkitModelling (Sun et al., 2020; copy archived at https://github.com/elifesciences-publications/InsectNavigationToolkitModelling). All simulations and network models are implemented in Python 3.5 and make use of the external libraries numpy, matplotlib, scipy, PIL and cv2.

Simulated 3D world

The environment used in this study is that provided by Stone et al., 2018, which is itself adapted from Baddeley et al., 2012 (see Figure 7C). It is a virtual ant-like world consisting of randomly generated bushes, trees and tussocks built from triangular patches (for more details see Baddeley et al., 2012). The data of this simulated world are therefore stored in a matrix of size NP×3×3, defining the three-dimensional coordinates (x, y, z) of the three vertices of each of the NP (number of patches) triangular patches. Agent movement was constrained to a 20 m × 20 m training and test area, allowing free movement without requiring an additional obstacle-avoidance mechanism.

Image reconstruction

The agent's visual input at location (x, y) with heading direction θh is simulated from a point 1 cm above the ground plane with a field of view 360° wide by 90° high (centred on the horizon). This panoramic image (300×104 pixels) is then wrapped onto a sky-centred disk of size 208 (104×2) × 208, as required by the Zernike Moments transformation algorithm used for image processing (see Figure 7D, upper).
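A minimal sketch of such a wrapping, using nearest-neighbour sampling; the function name and sampling scheme are illustrative assumptions, not the published code:

```python
import numpy as np

def wrap_to_disk(pano, size=208):
    """Wrap an equirectangular panorama (rows = elevation, zenith first)
    onto a sky-centred disk: radius maps to elevation, polar angle to azimuth."""
    H, W = pano.shape
    ys, xs = np.mgrid[0:size, 0:size]
    c = (size - 1) / 2.0
    dx, dy = xs - c, ys - c
    rho = np.sqrt(dx ** 2 + dy ** 2) / (size / 2.0)  # 0 = zenith, 1 = horizon
    theta = np.arctan2(dy, dx) % (2 * np.pi)
    valid = rho <= 1.0
    row = np.clip((rho * (H - 1)).astype(int), 0, H - 1)
    col = np.clip((theta / (2 * np.pi) * (W - 1)).astype(int), 0, W - 1)
    disk = np.zeros((size, size), dtype=pano.dtype)
    disk[valid] = pano[row[valid], col[valid]]
    return disk

disk = wrap_to_disk(np.ones((104, 300)))
```

Pixels outside the unit disc (the corners of the square output) are left empty and excluded from the moment computation.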

Image processing

Frequency encoding conceptual overview

Image compression algorithms such as JPEG encoding (Hudson et al., 2018) have long utilised the fact that a complex signal can be decomposed into a series of trigonometric functions that oscillate at different frequencies. The original signal can then be reconstructed by summing all (for perfect reconstruction) or some (for approximate reconstruction) of the base trigonometric functions. Thus, compression algorithms seek a balance between using the fewest trigonometric functions to encode the scene (for example, by omitting high frequencies that humans struggle to perceive) and the accuracy of the reconstructed signal (often given as an option when converting to JPEG format). Figure 7A provides a cartoon of the frequency decomposition process for a panoramic view.

When such transforms are applied to fully panoramic images, or skylines, benefits beyond compression arise. Specifically, the discrete transformation algorithms used to extract the frequency information generate a series of information triplets describing the original function: frequency coefficients describe the frequency of the trigonometric function, with associated amplitude and phase values defining the vertical height about the mean and the lateral position of the waveform respectively (Figure 7A). For panoramic views, regardless of the rotational angle of the image-capturing device (eye or camera), the entire signal will always be visible and hence the amplitudes of the frequency coefficients do not alter with rotation (Figure 7B). This information has been used for successful place recognition in a series of robot studies (Pajdla and Hlaváč, 1999; Menegatti et al., 2004; Stone et al., 2016). Most recently, Stone et al., 2018 demonstrated that the difference between the amplitudes of the frequency coefficients recorded at two locations increases monotonically with distance, producing an error surface suitable for visual homing. This feature of the frequency encoding underlies the visual homing results described in 'Mushroom bodies as drivers of rotational invariant visual homing'.

In addition, as the phase of each coefficient describes how to align the signal, it will naturally track any rotation of the panoramic view (Figure 7B), providing a means to realign with previous headings. The phase components of panoramic images have been utilised previously to derive the home direction in a visual homing task (Stürzl and Mallot, 2006). This feature of the frequency encoding underlies the route following results described in 'Route following in the insect brain'.
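These two rotation properties can be illustrated with an ordinary 1-D Fourier transform of a panoramic signal, a stand-in for the Zernike transform actually used in the model; the 360-sample random skyline is invented purely for demonstration:

```python
import numpy as np

rng = np.random.default_rng(1)
skyline = rng.random(360)          # panoramic skyline, one sample per degree
rotated = np.roll(skyline, 90)     # same place viewed after a 90 deg turn

F, Fr = np.fft.rfft(skyline), np.fft.rfft(rotated)

# amplitudes are unchanged by rotation -> a place signature
amp_match = np.allclose(np.abs(F), np.abs(Fr))

# the first-harmonic phase shifts linearly with rotation -> a compass signal
dphi = np.angle(Fr[1]) - np.angle(F[1])
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi    # wrap to (-pi, pi]
recovered_deg = -np.degrees(dphi)              # recovers the 90 deg rotation
```

The amplitude spectrum identifies the place regardless of body orientation, while the phase difference between a stored and a current view yields the turn needed to realign.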

The image processing field has created an array of algorithms for deriving the frequency content of continuous signals (Jiang et al., 1996; Gonzalez et al., 2004). To allow exploration of the usefulness of frequency information, and of how it could be used by the known neural structures, we adopt the same Zernike Moment algorithm used by Stone et al., 2018, but the reader should be clear that there are many alternative and more biologically plausible processes by which insects could derive similar information. It is beyond the scope of this proof-of-concept study to define precisely how this process might happen in insects, but future research possibilities are outlined in the Discussion.

Zernike Moments encoding

Zernike Moments (ZM) are defined as the projection of a function onto orthogonal basis polynomials called Zernike polynomials (Teague, 1980; Khotanzad and Hong, 1990). This set of functions is defined on the unit circle with polar coordinates (ρ, θ):

(1) $V_{nm}(\rho,\theta) = R_{nm}(\rho)\, e^{jm\theta}$

where $n \in \mathbb{N}^{+}$ is the order and $m$ is the repetition, meeting the conditions $m \in \mathbb{N}$, $|m| \le n$ and $n-|m|$ even to ensure the rotational invariance property. $R_{nm}(\rho)$ is the radial polynomial defined as:

(2) $R_{nm}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^{s}\,(n-s)!}{s!\left(\frac{n+|m|}{2}-s\right)!\left(\frac{n-|m|}{2}-s\right)!}\, \rho^{n-2s}$

For a continuous image function f(x,y), the ZM coefficient can be calculated by:

(3) $Z_{nm} = \frac{n+1}{\pi} \iint_{x^{2}+y^{2} \le 1} f(x,y)\, V_{nm}^{*}(\rho,\theta)\, dx\, dy$

For a digital image, summations can replace the integrals to give the ZM:

(4) $Z_{nm} = \frac{n+1}{\pi} \sum_{x}\sum_{y} f(x,y)\, V_{nm}^{*}(\rho,\theta), \quad x^{2}+y^{2} \le 1$

ZM are extracted from the simulated insect views in wrapped format (Figure 7D), whose centre is taken as the origin of the polar coordinates such that all valid pixels lie within the unit circle. For a given image $I$ (P1 in Figure 7D) and a rotated version of this image $I^{\theta_r}$ (P2 in Figure 7D), the amplitudes $A = |Z|$ and phases $\Phi = \angle Z$ of the ZM coefficients of the two images satisfy:

(5) $|Z_{nm}^{\theta_r}| = |Z_{nm}\, e^{-jm\theta_r}| = |Z_{nm}|, \;\text{i.e.,}\; A_{nm}^{\theta_r} = A_{nm}, \qquad \Phi_{nm}^{\theta_r} = \Phi_{nm} - m\theta_r$

From which we can see that the amplitude of the ZM coefficient remains the same while the phase of ZM carries the information regarding the rotation (see Figure 7A and D). This property is the cornerstone of the visual navigation model where the amplitudes encode the features of the view while the phase defines the orientation.

Amplitudes for ZM orders ranging from $n=0$ to $n=16$ were selected as they appeared to cover the majority of the information within the image. From Equation 1 we know that $V_{n,-m} = V_{n,m}^{*}$, so we limited $m \in \mathbb{N}^{+}$ to reduce the computational cost, which sets the total number of ZM coefficients to $N_{ZM} = (16/2+1)^{2} = 81$, input to the visual navigation networks. For training the ANN network for RF, setting $m=1$ in Equation 5 gives $\Phi_{n,1}^{\theta_r} = \Phi_{n,1} - \theta_r$, meaning that all such ZM coefficients provide the same information when the image is rotated. Further, the difference between the phase of a ZM coefficient of the current view and that of the memorised view inherently provides the angle with which to turn to realign oneself, that is:

(6) $\Phi_{7,1}^{\,\mathrm{current}} - \Phi_{7,1}^{\,\mathrm{memory}} = \theta_h - \theta_m$

where the order $n=7$ of this ZM was selected manually by comparing performance across different orders in this specific virtual environment; $\theta_h$ is the current heading of the agent while $\theta_m$ is the memorised (desired) heading direction.
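The rotation properties of Equation 5 can be checked numerically by computing a single Zernike moment on a polar grid, where rotation is an exact roll along the angular axis. The toy 'view', the grid sizes and the dropped constant prefactor are all illustrative simplifications:

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Radial polynomial R_nm (Equation 2)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

# polar sampling of the unit disc: a rotation is an integer roll along theta
Nr, Nt = 64, 360
rho = np.linspace(1.0 / Nr, 1.0, Nr)[:, None] * np.ones((1, Nt))
theta = np.ones((Nr, 1)) * np.linspace(0, 2 * np.pi, Nt, endpoint=False)[None, :]

def zernike_moment(img, n, m):
    # discrete version of Equation 4 (constant prefactor omitted; the extra
    # rho factor is the polar-coordinate area element)
    V = radial_poly(n, m, rho) * np.exp(1j * m * theta)
    return np.sum(img * np.conj(V) * rho)

view = np.sin(theta) * rho + np.cos(3 * theta) * rho ** 2   # toy 'view'
rot = np.roll(view, 30, axis=1)           # the same view rotated by 30 deg

Z, Zr = zernike_moment(view, 1, 1), zernike_moment(rot, 1, 1)
dphi = (np.angle(Zr) - np.angle(Z) + np.pi) % (2 * np.pi) - np.pi
```

The amplitude is identical for the two views (place signature) while the phase shifts by $-m\theta_r$ (here $-\pi/6$ for $m=1$ and a 30° rotation), exactly as Equation 5 states.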

Neural networks

We use a simple firing-rate model for the neurons in the proposed networks, where the output firing rate C is a sigmoid function of the input I unless otherwise noted. In the following descriptions and formulas, a subscript represents the layer or name of a neuron while a superscript represents the value at a specific time or with a specific index.

Current headings

In the proposed model, there are two independent compass systems based on global and local cues respectively, named the global and local compass accordingly. These two compass systems share similar neural pathways from the OL via the AOTU and BU to the CX, but end in distinct groupings of TB1 neurons in the PB: I-TB1 and II-TB1.

Global compass

The global compass neural network applied in this study is the same as that of Stone et al., 2017, which has three layers of neurons: TL neurons, CL1 neurons and I-TB1 neurons. The 16 TL neurons respond to the simulated polarised light input and are directly modelled as:

(7) $I_{TL} = \cos(\theta_{TL} - \theta_h)$

where $\theta_{TL} \in \{0, \pi/4, \pi/2, 3\pi/4, \pi, 5\pi/4, 3\pi/2, 7\pi/4\}$ is the set of angular preferences of the 16 TL neurons. The 16 CL1 neurons are inhibited by TL-neuron activity, which inverts the polarisation response:

(8) $I_{CL1} = 1.0 - C_{TL}$

The 8 I-TB1 neurons act as a ring attractor creating a sinusoidal encoding of the current heading. Each I-TB1 neuron receives excitation from the CL1 neuron sharing the same directional preference and inhibition from other I-TB1 neurons via mutual connections:

(9) $W_{I\text{-}TB1}^{ij} = \frac{\cos(\theta_{I\text{-}TB1}^{i} - \theta_{I\text{-}TB1}^{j}) - 1}{2}$
(10) $I_{I\text{-}TB1}^{t,j} = (1-c)\, C_{CL1}^{t,j} + c \sum_{i=1}^{8} W_{I\text{-}TB1}^{ij}\, C_{I\text{-}TB1}^{t-1,i}$

where c is a balance factor modifying the relative strength of the inhibition and the CL1 excitation. Finally, the population coding $C_{I\text{-}TB1}^{t,j},\ j = 0,1,\dots,7$ represents the global-compass heading of the agent at time t.
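A minimal sketch of Equations 7–10 with the default sigmoid outputs stated above; the balance factor c = 0.25 and the number of settling steps are illustrative choices, not the paper's settings:

```python
import numpy as np

prefs = np.arange(8) * np.pi / 4                          # angular preferences
W = (np.cos(prefs[:, None] - prefs[None, :]) - 1) / 2     # Equation 9
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def compass_step(C_prev, theta_h, c=0.25):
    C_TL = sig(np.cos(prefs - theta_h))                   # Equation 7 + sigmoid
    C_CL1 = sig(1.0 - C_TL)                               # Equation 8 (inversion)
    I = (1 - c) * C_CL1 + c * (W @ C_prev)                # Equation 10
    return sig(I)

C = np.zeros(8)
for _ in range(20):                                       # settle the ring
    C = compass_step(C, theta_h=0.0)
```

In this sketch the stable activity bump sits at the preference opposite the heading (index 4, i.e. π, for a heading of 0), a consequence of the CL1 inversion of the polarisation response.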

Local compass

The local compass is derived from terrestrial cues through a similar visual pathway to the global compass and also ends in a ring attractor network. As with the global compass, the local compass heading is directly modelled by the population encoding of the II-TB1 neurons:

(11) $C_{II\text{-}TB1}^{i} = \cos(\Phi_{7,1} - \theta_{II\text{-}TB1}^{i}), \quad i = 0,1,\dots,7$

where $\theta_{II\text{-}TB1}$ is the angular preference of the II-TB1 neurons and $\Phi_{7,1}$ is the phase of the ZM. The firing rates $C_{II\text{-}TB1}$ thus encode the heading of the local compass.

Visual homing

The neural network of visual homing is an associative network constrained by the anatomical structure of the insect mushroom body (MB). In contrast to Ardin et al., 2016, where a spiking neural network was implemented to model the MB, we apply a simplified version in which the average firing rates of neurons are used.

The visual projection neurons (vPNs) directly receive the amplitudes of the ZM coefficients as their firing rates:

(12) $C_{vPN}^{i} = A^{i}, \quad i = 0,1,\dots,N_{vPN}$

where $N_{vPN}$ is the number of vPN neurons, which equals the total number of ZM amplitudes applied; in this study $N_{vPN} = N_{ZM} = 81$. $A^{i}$ denotes the $i$th ZM amplitude.

The vPNs project onto the Kenyon cells (KCs) through randomly generated binary connections $W_{vPN2KC}$, such that each KC receives the activation of 10 randomly selected vPNs:

(13) $I_{KC}^{j} = \sum_{i=0}^{N_{vPN}} W_{vPN2KC}^{ji}\, C_{vPN}^{i}$

where $I_{KC}^{j}$ denotes the total input current of the $j$th KC from the vPNs. The KCs are modelled as binary neurons with a common threshold $Thr_{KC}$:

(14) $C_{KC} = \begin{cases} 0 & \text{if } I_{KC} \le Thr_{KC} \\ 1 & \text{if } I_{KC} > Thr_{KC} \end{cases}$

The MBON neuron sums the activation of all Kenyon cells via plastic connections $W_{KC2MBON}$:

(15) $C_{MBON} = \sum_{i=0}^{N_{KC}} W_{KC2MBON}^{i}\, C_{KC}^{i}$

An anti-Hebbian learning rule is applied for the plasticity of WKC2MBON in a simple way:

(16) $W_{KC2MBON}^{t,i} = W_{KC2MBON}^{t-1,i} - \eta_{KC2MBON}, \quad \text{if } C_{KC}^{i} = 1$

where $\eta_{KC2MBON}$ is the learning rate. The learning process occurs only when the reward signal is turned on. The activation of the MBON, $C_{MBON}$, represents the familiarity of the current view, and its change is defined as:

(17) $\Delta C_{MBON} = C_{MBON}^{t} - C_{MBON}^{t-1}$

$\Delta C_{MBON}$ is used to track the gradient of familiarity, guiding the agent to more familiar locations by shifting the I-TB1 neurons' activation $C_{I\text{-}TB1}$:

(18) $C_{VH}^{i} = C_{I\text{-}TB1}^{j}, \quad j = \begin{cases} i + \mathrm{offset} & \text{if } i + \mathrm{offset} \le 7 \\ i + \mathrm{offset} - 8 & \text{otherwise} \end{cases} \quad i = 0,1,\dots,7$

The relationship between $\Delta C_{MBON}$ and the offset is given by:

(19) $\mathrm{offset} = \begin{cases} 0 & \text{if } \Delta C_{MBON} < 0 \\ \min(k_{VH}\, \Delta C_{MBON},\, 4) & \text{otherwise} \end{cases}$
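The vPN→KC→MBON pipeline of Equations 12–16 can be sketched as follows. The amplitude vectors and the KC threshold are invented for the demonstration (the paper's $Thr_{KC} = 0.04$ matches its own ZM amplitude scale, not these toy inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
N_VPN, N_KC, ETA = 81, 4000, 0.1

# each KC receives 10 randomly selected vPNs (binary weights, Equation 13)
W_in = np.zeros((N_KC, N_VPN))
for j in range(N_KC):
    W_in[j, rng.choice(N_VPN, 10, replace=False)] = 1.0
W_out = np.ones(N_KC)                    # plastic KC->MBON weights

def kc(amps, thr=5.0):                   # binary KCs (Equation 14)
    return (W_in @ amps > thr).astype(float)

def mbon(amps):                          # summed MBON output (Equation 15)
    return float(W_out @ kc(amps))

home_view = rng.random(N_VPN)            # stand-in for ZM amplitudes
novel_view = rng.random(N_VPN)

before = mbon(home_view)
for _ in range(12):                      # anti-Hebbian updates (Equation 16)
    active = kc(home_view) > 0
    W_out[active] = np.maximum(W_out[active] - ETA, 0.0)
```

After training, the MBON response to the learned view collapses while novel views still drive it, providing the familiarity signal whose gradient steers visual homing.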

Path integration

The PI model implemented is that published by Stone et al., 2017. The core functionality arises from the CPU4 neurons, which integrate the activation of TN2 neurons encoding the speed of the agent and the inverted activation of the direction-sensitive I-TB1 neurons. The result is that the population of CPU4 neurons iteratively tracks the distance and orientation to the nest (a home vector) in a format akin to a series of directionally locked odometers.

The firing rates of the CPU4 neurons are updated by:

(20) $I_{CPU4}^{t} = I_{CPU4}^{t-1} + r\,(C_{TN2}^{t} - C_{I\text{-}TB1}^{t} - k)$

where the rate of memory accumulation is $r = 0.0025$, the memory loss is $k = 0.1$ and the initial memory charge of the CPU4 neurons is $I_{CPU4}^{0} = 0.1$.

The input of the TN2 neurons encoding the speed is calculated by:

(21) $\begin{cases} I_{TN2}^{L} = [\sin(\theta_h + \theta_{TN2}),\ \cos(\theta_h + \theta_{TN2})] \cdot \boldsymbol{v} \\ I_{TN2}^{R} = [\sin(\theta_h - \theta_{TN2}),\ \cos(\theta_h - \theta_{TN2})] \cdot \boldsymbol{v} \end{cases}$

where $\boldsymbol{v}$ is the velocity (see Equation 39) of the agent and $\theta_{TN2}$ is the preference angle of the TN2 neurons; in this study $\theta_{TN2} = \pi/4$. The activation function applied to the TN2 neurons is the rectified linear function:

(22) $C_{TN2} = \max(0,\, 2 I_{TN2})$

As CPU4 neurons integrate the speed and direction of the agent, the desired heading of PI can be represented by the population encoding of these neurons, thus:

(23) $C_{PI} = C_{CPU4}$
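The home-vector bookkeeping behind Equation 20 can be illustrated with a deliberately simplified accumulator: the TN2 lateralisation, the leak term k and the initial charge are omitted, and each CPU4-like cell simply integrates the movement component along its preferred direction. The route (50 steps east, then 50 steps north) is invented for the demonstration:

```python
import numpy as np

prefs = np.arange(8) * np.pi / 4      # preferred directions of 8 cells
r = 0.0025                            # accumulation rate (as in the text)
memory = np.zeros(8)

def pi_step(memory, theta_h, speed=1.0):
    # simplified accumulator in the spirit of Equation 20
    return memory + r * speed * np.cos(prefs - theta_h)

for _ in range(50):                   # 50 steps heading east (0 rad)...
    memory = pi_step(memory, 0.0)
for _ in range(50):                   # ...then 50 steps heading north
    memory = pi_step(memory, np.pi / 2)

# population-vector decode: the home bearing is opposite the stored vector
vec = np.sum(memory * np.exp(1j * prefs))
home = np.angle(-vec)                 # south-west, i.e. -3*pi/4
```

The population of directionally locked odometers thus implicitly stores both the distance and the direction of the outbound journey, and reading it out in reverse yields the home vector.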

Route following

The route following model is based on a simple artificial neural network (ANN) with a single hidden layer. The input layer directly takes the amplitudes of the ZM coefficients as its activation, in the same way as the visual projection neurons of the MB network. The network is fully connected with sigmoid activation functions, so forward propagation is governed by:

(24) $Z_{l}^{i} = \sum_{j} W^{ji}\, Y_{l-1}^{j}, \qquad Y_{l}^{i} = \mathrm{sigmoid}(Z_{l}^{i}) = \frac{1}{1 + e^{-Z_{l}^{i}}}, \quad i = 0,1,\dots,7 \;\text{and}\; l = 0,1,2$

where $Z_{l}^{i}$ and $Y_{l}^{i}$ denote the input and output of the $i$th neuron in the $l$th layer; the input is the same as for the MB network, $Z_{0}^{i} = A^{i},\ i = 0,1,\dots,N_{ZM}$, and the output of the ANN is consequently the population coding of the RF desired heading, that is:

(25) $C_{RF}^{i} = Y_{2}^{i}, \quad i = 0,1,\dots,7$

For a fast and efficient implementation, the learning method applied here is back-propagation with gradient descent. Training data are derived from the amplitudes and the population-encoded phases of the ZM coefficients of the images reconstructed along a habitual route. As shown in Equation 11, the II-TB1 neurons encode the local compass heading; therefore, the training pairs for the RF network can be defined as $\{A, C_{II\text{-}TB1}\}$. After training, this network will correlate the desired ZM phase with the specific ZM amplitudes, and when RF is running, the output $C_{RF}$ will represent the desired heading with respect to the current local-compass heading represented by the population encoding of the II-TB1 neurons.
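A self-contained sketch of this training scheme on toy data; the dimensions, learning rate and the four invented 'views' are illustrative, not the paper's settings (which use 81 ZM amplitudes as input):

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# toy route: 4 'views' (amplitude vectors) with 4 stored headings
N_IN, N_HID, N_OUT, LR = 16, 30, 8, 0.2
X = rng.random((4, N_IN))
prefs = np.arange(N_OUT) * np.pi / 4
headings = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
T = (np.cos(headings[:, None] - prefs[None, :]) + 1) / 2   # population code

W1 = rng.normal(0, 0.5, (N_IN, N_HID)); b1 = np.zeros(N_HID)
W2 = rng.normal(0, 0.5, (N_HID, N_OUT)); b2 = np.zeros(N_OUT)

for _ in range(10000):       # plain backprop with gradient descent (MSE)
    H = sig(X @ W1 + b1)
    Y = sig(H @ W2 + b2)
    dY = (Y - T) * Y * (1 - Y)               # output delta
    dH = (dY @ W2.T) * H * (1 - H)           # hidden delta
    W2 -= LR * H.T @ dY; b2 -= LR * dY.sum(0)
    W1 -= LR * X.T @ dH; b1 -= LR * dH.sum(0)

recall = sig(sig(X @ W1 + b1) @ W2 + b2)
```

After training, presenting a route view recalls the population-coded heading stored for that place, which the steering circuit can then compare against the local-compass current heading.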

Coordination of elemental guidance strategies


The coordination of the three main navigation strategies (PI, VH and RF) is realised in distinct stages. Firstly, the off-route strategies (PI and VH) are optimally integrated by weighting each according to its certainty, before a context-dependent switch activates either the on-route (RF) or off-route strategies depending on the current visual novelty.

Optimal cue integration


A ring attractor neural network is used to integrate the cues from the VH and PI guidance systems. As reported in Hoinville and Wehner, 2018, summation of directional cues represented in vector format leads to optimal angular cue integration, matching the behaviour observed in real insects. Mangan and Yue, 2018 proposed a biologically plausible way to perform this computation based on a simple ring attractor neural network. There are two populations of neurons in this network. The first is the integration neurons (IN), which form the output population of the network. Constrained by the number of columns in each hemisphere of the insect CX, we set the number of IN to 8, and their firing rates are updated by:

(26) $$\tau \frac{dC_{IN}^i}{dt} = -C_{IN}^i + g\left(\sum_{j=1}^{n} W_{E2E}^{ji} C_{IN}^j + X_1^i + X_2^i + W_{I2E} C_{UI}\right), \qquad i = 0,1,\dots,7$$

where $W_{E2E}^{ji}$ is the recurrent connection weight from the $j$th to the $i$th neuron, and $g(x)$ is the activation function that provides the non-linear property of the neuron:

(27) $$g(c) = \max(0,\ \rho + c)$$

where ρ denotes the offset of the function.

In Equation 26, $X_1$ and $X_2$ denote the cues to be integrated; in this study they represent the desired headings of path integration ($C_{PI}$) and visual homing ($C_{VH}$). The desired heading of PI is additionally tuned by the tuning neuron (TUN) in the SMP, which is stimulated by the MBON of the MB (see Figure 3A) and whose activation is a saturating rectified linear function, that is:

(28) $$C_{TUN} = \min(k_{TUN} C_{EN},\ 1)$$

where kTUN is the scaling factor.

Thus, $X_1$ and $X_2$ for this ring attractor network are calculated by:

(29) $$X_1^i = C_{TUN} C_{PI}^i, \qquad X_2^i = C_{VH}^i, \qquad i = 0,1,\dots,7$$

The second population of the ring attractor comprises the uniform inhibition (UI) neurons, modelled by:

(30) $$\tau \frac{dC_{UI}}{dt} = -C_{UI} + g\left(W_{I2I} C_{UI} + W_{E2I} \sum_{k=1}^{n} C_{IN}^k\right)$$

After the network arrives at a stable state, the firing rates of the integration neurons in this ring attractor network provide the population encoding of the optimally integrated output $C_{OI}$:

(31) $$C_{OI} = C_{IN}$$
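The dynamics of Equations 26–31 can be sketched as follows. The gain parameters, the cosine recurrent kernel, and the cue-generation helper are illustrative assumptions on our part, not the fitted values of the model; the sketch only demonstrates that two population-coded cues of equal strength settle to the intermediate direction.

```python
import numpy as np

N = 8
prefs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def g(c, rho=0.1):
    return np.maximum(0.0, rho + c)        # Eq. 27

def ring_attractor(x1, x2, steps=2000, tau=10.0):
    """Eqs. 26 and 30: Euler-integrate the integration (IN) and uniform
    inhibition (UI) populations until they settle."""
    w_e2e = np.cos(prefs[:, None] - prefs[None, :]) / N   # recurrent excitation
    w_i2e, w_e2i, w_i2i = -2.0, 1.0, -0.5                 # illustrative gains
    c_in, c_ui = np.zeros(N), 0.0
    for _ in range(steps):
        c_in = c_in + (-c_in + g(w_e2e @ c_in + x1 + x2 + w_i2e * c_ui)) / tau
        c_ui = c_ui + (-c_ui + g(w_i2i * c_ui + w_e2i * c_in.sum())) / tau
    return c_in                                           # Eq. 31: C_OI

# Two equally strong cues pointing at 0 and pi/4 integrate to pi/8.
cue = lambda ang, s: s * np.maximum(0.0, np.cos(prefs - ang))
c_oi = ring_attractor(cue(0.0, 1.0), cue(np.pi / 4, 1.0))
est = np.angle(np.sum(c_oi * np.exp(1j * prefs)))
```

Scaling one cue's strength down shifts the settled bump toward the stronger cue, which is the weighting-by-certainty behaviour exploited in the model.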

Context-dependent switch

The model generates two current/desired heading pairs: the current heading of the global compass encoded by the I-TB1 neurons, paired with the desired heading optimally integrated by the integration neurons of the ring attractor network ($C_{OI}$); and the current heading of the local compass encoded by the II-TB1 neurons, paired with the desired heading decoded from the output of the RF network ($C_{RF}$). Both pairs of signals are connected to the steering circuit (see Figure 5A and Steering circuit) but are turned on/off by two switching neurons (SN1 and SN2) in the SMP (Figure 5A). The SN2 neuron receives activation from the MBON and is modelled as:

(32) $$C_{SN2} = \begin{cases} 0 & \text{if } C_{MBON} < Thr_{SN2} \\ 1 & \text{otherwise} \end{cases}$$

while SN1 always fires unless SN2 fires:

(33) $$C_{SN1} = \begin{cases} 0 & \text{if } C_{SN2} = 1 \\ 1 & \text{otherwise} \end{cases}$$

Therefore, the context-dependent switch is achieved according to the current visual novelty represented by the activation of the MBON.
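Equations 32–33 amount to a two-state switch on visual novelty; a minimal sketch follows (the threshold value is an arbitrary placeholder, not a fitted parameter):

```python
def context_switch(c_mbon, thr_sn2=0.5):
    """Eqs. 32-33: SN2 fires when MBON activity (visual novelty) reaches
    threshold, and SN1 fires exactly when SN2 is silent."""
    c_sn2 = 0 if c_mbon < thr_sn2 else 1
    c_sn1 = 0 if c_sn2 == 1 else 1
    return c_sn1, c_sn2

# Familiar view (low MBON novelty): SN1 fires, gating RF on.
# Unfamiliar view (high novelty): SN2 fires, gating the integrated
# off-route heading on instead.
```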

Steering circuit


The steering neurons, that is, the CPU1 neurons ($C_{CPU1}^i,\ i = 0,1,\dots,15$), receive excitatory input from the desired heading ($C_{DH}^i,\ i = 0,1,\dots,15$) and inhibitory input from the current heading ($C_{CH}^i,\ i = 0,1,\dots,15$) to generate the turning signal:

(34) $$C_{ST}^i = C_{DH}^i - C_{CH}^i, \qquad i = 0,1,\dots,15$$

The turning angle is determined by the difference between the summed activations of the left ($i = 0,1,\dots,7$) and right ($i = 8,9,\dots,15$) sets of CPU1 neurons:

(35) $$\theta_M = k_{motor} \left(\sum_{i=0}^{7} C_{CPU1}^i - \sum_{i=8}^{15} C_{CPU1}^i\right)$$

which corresponds to the difference in length of the subtracted left and right vectors in Figure 2A. In addition, as illustrated in Figure 2A, another key part of the steering circuit is the left/right-shifted desired heading; in this paper, this is achieved by the offset connectivity patterns ($W_{DH2CPU1}^{L}$ and $W_{DH2CPU1}^{R}$) from the desired heading to the steering neurons (Heinze and Homberg, 2008; Stone et al., 2017):

(36) $$C_{DH}^{0\text{-}7} = C_{SN1} C_{RF} W_{DH2CPU1}^{L} + C_{SN2} C_{OI} W_{DH2CPU1}^{L}, \qquad C_{DH}^{8\text{-}15} = C_{SN1} C_{RF} W_{DH2CPU1}^{R} + C_{SN2} C_{OI} W_{DH2CPU1}^{R}$$

where $W_{DH2CPU1}^{L}$ and $W_{DH2CPU1}^{R}$ are:

(37) $$W_{DH2CPU1}^{L} = \begin{bmatrix} 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\\ 1&0&0&0&0&0&0&0 \end{bmatrix}, \qquad W_{DH2CPU1}^{R} = \begin{bmatrix} 0&0&0&0&0&0&0&1\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1 \end{bmatrix}$$

which defines the connection pattern realising the left/right shifting of the desired headings used throughout our model (Figure 2A, Figure 3A, Figure 4A, Figure 5A and Figure 6A).

Information provided by frequency encoding in cartoon and simulated ant environments.

(A): A cartoon depiction of a panoramic skyline, its decomposition into trigonometric functions, and reconstruction through the summation of low-frequency coefficients, reflecting standard image compression techniques. (B): Following a 90° rotation there is no change in the amplitudes of the frequency coefficients, but their phases track the change in orientation, providing a rotationally invariant signal useful for visual homing and a rotationally varying signal useful for route following, respectively. (C): The simulated 3D world used for all experiments. The pink area (size: 20 m × 20 m) is the training and testing zone for models allowing obstacle-free movement. (D): The frequency encoding (Zernike Moment amplitudes and phases) of views sampled from the same location but with different headings (P1 and P2 in (C), with a 90° heading difference) in the simulated world. The first 81 amplitudes are identical while the phases differ by about 90°.

The current heading input to the steering circuit is also switched between global and local compass input via the SN1 and SN2 neurons:

(38) $$C_{CH}^{0\text{-}7} = C_{SN1} C_{II\text{-}TB1} + C_{SN2} C_{I\text{-}TB1}, \qquad C_{CH}^{8\text{-}15} = C_{SN1} C_{II\text{-}TB1} + C_{SN2} C_{I\text{-}TB1}$$
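A sketch of the steering computation (Equations 34–38), with NumPy's cyclic `np.roll` standing in for the shift matrices of Equation 37. We additionally rectify the steering-neuron outputs (firing rates are non-negative) so that the left/right sums do not cancel; that rectification, the shift directions, and the gain are our assumptions.

```python
import numpy as np

N = 8
prefs = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
encode = lambda ang: np.cos(prefs - ang)   # population code of a heading

def turning_angle(desired, current, k_motor=1.0):
    """Eqs. 34-35: compare left/right-shifted copies of the desired
    heading (Eqs. 36-37) against the current heading."""
    dh_left = np.roll(desired, 1)          # desired heading shifted one column
    dh_right = np.roll(desired, -1)        # ... shifted the other way
    st_left = np.maximum(0.0, dh_left - current)    # rectified Eq. 34
    st_right = np.maximum(0.0, dh_right - current)
    return k_motor * (st_left.sum() - st_right.sum())   # Eq. 35

# A desired heading counterclockwise of the current one yields a positive
# (counterclockwise) turn, and vice versa; aligned headings yield no turn.
turn_ccw = turning_angle(encode(np.pi / 4), encode(0.0))
turn_cw = turning_angle(encode(-np.pi / 4), encode(0.0))
```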

Detailed neural connectivity of unified model


Figure 6A shows a complete picture of the proposed model. Specifically, it highlights the final coordination system, showing the CX computing the optimal navigation output under modulation from the MB and SMP. In addition, the offset connectivity pattern from the desired heading to the steering circuit that underpins the left/right shifting is clearly shown. Figure 6B and C show the networks generating the desired headings of RF and VH, respectively.

In addition, Table 2 provides details of all modelled neural circuits with their functions and naming conventions, with links to biological evidence for these neural circuits where it exists and the animal in which they were observed.

Simulations

Equation 35 gives the turning angle of the agent; thus, the instantaneous "velocity" ($\boldsymbol{v}$) at every step can be computed by:

(39) $$\boldsymbol{v}_t = SL\,[\cos\theta_M^t,\ \sin\theta_M^t]$$

where $SL$ is the step length in centimetres. Note that we have not defined the time elapsed per simulation step, so the unit of velocity in this implementation is cm/step rather than cm/s. The position of the agent $\boldsymbol{P}_{t+1}$ in Cartesian coordinates is then updated by:

(40) $$\boldsymbol{P}_{t+1} = \boldsymbol{P}_t + \boldsymbol{v}_t$$
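Equations 39–40 in code form (a trivial sketch; the value of SL and the starting position are arbitrary choices):

```python
import numpy as np

SL = 1.0   # step length (cm); one simulation step, so velocity is in cm/step

def step(p, theta_m):
    """Eq. 39 computes the step velocity from the turning output theta_M;
    Eq. 40 adds it to the current position."""
    v = SL * np.array([np.cos(theta_m), np.sin(theta_m)])
    return p + v

p = step(np.array([0.0, 0.0]), 0.0)   # one step along the x-axis
```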

The main parameter settings for all the simulations in this paper can be found in Table 1.

Reproduce visual navigation behaviour


Inspired by the benchmark study of real ants in Wystrach et al., 2012, we test our VH and RF models by reproducing the homing behaviours of that study. This is achieved by constructing a habitual route with a similar shape (an arc or banana shape) in our simulated 3D world. The positions $\boldsymbol{P}_{R\text{-}Arc}$ and headings $\theta_{R\text{-}Arc}$ along that route are generated by:

(41) $$\theta_{R\text{-}Arc}^i = \frac{\pi}{2} - \frac{i\pi}{2N_M}, \qquad \boldsymbol{P}_{R\text{-}Arc}^i = \left[-R\sin\theta_{R\text{-}Arc}^i,\ -7 + R\cos\theta_{R\text{-}Arc}^i\right], \qquad i = 0,1,\dots,N_M$$

where $R = 7$ m is the radius of the arc and $N_M = 20$ is the number of sampling points at which view images are reconstructed along the route. The reconstructed views are then wrapped and decomposed by ZM into amplitudes and phases, which are used to train the ANN network of RF and the MB network of VH.
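The route construction of Equation 41 can be sketched as below. The sign conventions are our reconstruction (minus signs are easily lost in the published markup), chosen so that the arc runs from the route start near $[-7, -7]$ to the nest at the origin:

```python
import numpy as np

R, N_M = 7.0, 20
i = np.arange(N_M + 1)
theta = np.pi / 2 - i * np.pi / (2 * N_M)           # headings along the arc
route = np.stack([-R * np.sin(theta),               # Eq. 41, reconstructed signs
                  -7.0 + R * np.cos(theta)], axis=1)
```

Every sampled point lies on a circle of radius 7 m centred on $[0, -7]$, giving the banana-shaped habitual route.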

Visual homing


After training, 12 agents with different initial headings evenly distributed in [0°, 360°) were released at the sideways release point ($\boldsymbol{P} = [0, -7]$) for the simulation of VH (Figure 2D). The headings of the agents at a radius of 2.5 m from the release point (manually selected to ensure that all the agents have completed any large initial loop) are taken as the initial headings.

Route following

After training, two agents with initial headings of 0° and 180° are released at different release points ($\boldsymbol{P} = [-9,-7], [-8,-7], [-7,-7], [-6,-7], [-5,-7]$) for the simulation of RF (see Figure 4B) to generate the homing paths. We then release 12 agents on the route ($\boldsymbol{P} = [-7,-7]$) with different initial headings evenly distributed in [0°, 360°) to compare the results with the real ant data in Wystrach et al., 2012. The heading of each agent at the position 0.6 m from the release point is taken as its initial heading.

Reproduce the optimal cue integration behaviour


We evaluated the cue integration model by reproducing the results of Wystrach et al., 2015 and Legge et al., 2014. The ants' outbound routes in Wystrach et al., 2015 were bounded by a corridor, so here we simulate the velocity of the agent by:

(42) $$\boldsymbol{v}_{out}^t = \left[\mathrm{rand}(0,\ 2V_0) - V_0,\ V_0\right], \qquad t = 0,1,\dots,T_{out}$$

where the function rand(0, x) generates a random value from the uniform distribution on [0, x]; thus the x-axis speed lies in $[-V_0, V_0]$ and cancels out on average during foraging, while the y-axis speed is constant, so it accumulates and is recorded by the PI model. $V_0 = 1$ cm/step is the basic speed of the agent and $T_{out}$ is the total duration of the outbound phase, determining the length of the outbound route. For the simulated homing route, we duplicate the outbound route with $T_{out} = 300$ but with an inverted heading direction. The visual navigation network was then trained with images sampled along the simulated route (grey curve in Figure 3B).
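The outbound-route generation of Equation 42 in sketch form (the random seed and array layout are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
V0, T_out = 1.0, 300

# Eq. 42: lateral speed uniform in [-V0, V0], forward speed constant V0.
v_out = np.stack([rng.uniform(0.0, 2.0 * V0, T_out) - V0,
                  np.full(T_out, V0)], axis=1)
outbound = np.cumsum(v_out, axis=0)   # positions accumulated step by step
```

The lateral jitter largely cancels while the constant forward component accumulates exactly $V_0 T_{out}$, so the PI home vector points back along the corridor.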

Tuning PI uncertainty


The agent in this simulation was allowed to forage to different distances of 0.1 m, 1 m, 3 m or 7 m from the nest to accrue different PI states and directional certainties before being translated to a never-before-experienced test site 1.5 m from the nest (RP1 in Figure 3B). For each trial, we release 20 agents with different initial headings evenly distributed in [0°, 360°). The heading of each agent at the position 0.6 m from the start point is taken as its initial heading, and the mean direction and the 95% confidence intervals are calculated. As in the biological experiment, the angle between the directions recommended by the PI and visual navigation systems differed by approximately 130°.

As the length of the home vector increases (0.1 m → 7 m), the activation of the PI memory becomes higher (Figure 3B) and increasingly determines the output of the ring attractor integration. Since the length of the home vector is also encoded in the activation of the PI memory neurons, the ring attractor can extract this information as the strength of the cue. As visual familiarity is nearly the same in the vicinity of the release point, the strength of the visual homing signal remains constant and has more influence as the PI vector length drops.

Tuning visual uncertainty


The agent in this simulation was allowed to forage up to 1 m from the nest to accrue its PI state and directional certainty before being translated to three different release points (RP1, RP2 and RP3 in Figure 3B). As the distance from the nest increases (RP1 → RP2 → RP3), so does the visual uncertainty. For each trial, we release 12 agents with different initial headings evenly distributed in [0°, 360°). The heading of each agent at the position 0.3 m from the start point is taken as its initial heading, and the mean direction and the 95% confidence intervals are calculated.

Whole model


The simulated habitual route remains the same as in the simulation of visual navigation (Reproduce visual navigation behaviour), as does the learning procedure. The zero- and full-vector agents are both released at [-2, -7] with headings of 0° and 90°, respectively. The full-vector agent's PI memory is generated by letting the agent forage along the route from nest to feeder.

References

  1. Collett T (1996) Insect navigation en route to the goal: multiple strategies for the use of landmarks. The Journal of Experimental Biology 199:227–235.
  2. Gonzalez RC, Woods RE, Eddins SL (2004) Digital Image Processing Using MATLAB. Pearson Education India.
  3. Hoinville T, Wehner R, Cruse H (2012) Learning and retrieval of memory elements in a navigation task. Conference on Biomimetic and Biohybrid Systems, pp. 120–131.
  4. Horridge GA (1997) Pattern discrimination by the honeybee: disruption as a cue. Journal of Comparative Physiology A: Sensory, Neural, and Behavioral Physiology 181:267–277. https://doi.org/10.1007/s003590050113
  5. Khotanzad A, Hong YH (1990) Invariant image recognition by Zernike moments. IEEE Transactions on Pattern Analysis and Machine Intelligence 12:489–497. https://doi.org/10.1109/34.55109
  6. Kodzhabashev A, Mangan M (2015) Route following without scanning. Conference on Biomimetic and Biohybrid Systems, pp. 199–210.
  7. Mangan MX, Yue S (2018) An analysis of a ring attractor model for cue integration. Conference on Biomimetic and Biohybrid Systems, pp. 459–470.
  8. Pajdla T, Hlaváč V (1999) Zero phase representation of panoramic images for image based localization. International Conference on Computer Analysis of Images and Patterns, pp. 550–557.
  9. Touretzky DS (2005) Attractor network models of head direction cells. In: Wiener SI, Taube JS, editors. Head Direction Cells and the Neural Mechanisms of Spatial Orientation. MIT Press, pp. 411–432.
  10. Webb B (2019) The internal maps of insects. The Journal of Experimental Biology 222:jeb188094. https://doi.org/10.1242/jeb.188094
  11. Wehner R (2009) The architecture of the desert ant's navigational toolkit. Myrmecological News 12:85–96.
  12. Wystrach A, Mangan M, Webb B (2015) Optimal cue integration in ants. Proceedings of the Royal Society B: Biological Sciences 282:20151484. https://doi.org/10.1098/rspb.2015.1484
  13. Zeil J, Kelber A, Voss R (1996) Structure and function of learning flights in ground-nesting bees and wasps. The Journal of Experimental Biology 199:245–252.

Decision letter

  1. Mani Ramaswami
    Reviewing Editor; Trinity College Dublin, Ireland
  2. Michael B Eisen
    Senior Editor; University of California, Berkeley, United States
  3. Stanley Heinze
    Reviewer; Lund University, Sweden

In the interests of transparency, eLife publishes the most substantive revision requests and the accompanying author responses.

Acceptance summary:

This work builds upon prior models to propose an integrated model of the computational strategies used by insects to perform their remarkable navigational feats. This new model accounts for several capabilities and flexibilities that accomplished insect navigators display, such as visual homing, which were not as well accounted for earlier. The integrated model is not only an important addition to the literature on the insect central complex, but is also particularly valuable because it pins specific computational functions on specific anatomical structures, making predictions that are potentially testable in the near-to-medium term.

Decision letter after peer review:

Thank you for submitting your article "A Decentralised Neural Model Explaining Optimal Integration Of Navigational Strategies in Insects" for consideration by eLife. Your article has been reviewed by three peer reviewers, and the evaluation has been overseen by Mani Ramaswami as Reviewing Editor and Michael Eisen as the Senior Editor. The following individual involved in review of your submission has agreed to reveal their identity: Stanley Heinze (Reviewer #2).

The reviewers have discussed the reviews with one another and the Reviewing Editor has drafted this decision to help you prepare a revised submission.

Summary:

This is an original, focussed and timely study on a topic of considerable interest: computational strategies used by insects to perform their remarkable navigational feats. The authors identify shortcomings in existing models, specifically that they do not account for the entire range of capabilities and the flexibility that the most accomplished insect navigators display, such as visual homing, that is, the ability of the ant to return to a familiar region from novel locations. They then integrate and build upon prior models to successfully fill these gaps. The integrated model is particularly valuable because it pins specific computational functions on specific anatomical structures, most notably the central complex and the mushroom body. It is an important addition both to the literature on the insect central complex and to more theoretical navigational work, in particular as many predictions can be made based on the presented models, making it, in principle, testable in the near-to-medium term. The figures are well made and the writing is compact. Nevertheless, several points need to be addressed before publication.

Essential revisions:

1) Accessibility to a broad readership. While the general text is written very well and the content is highly interesting for a life science (in particular insect neuroscience) audience, the Materials and methods section and some aspects of the reasoning behind the model are very technical. Even the insect neurobiologists among the reviewers struggled to follow large parts of the methods and had never heard of Zernike moments, for instance. The text should be revised to include some more intuitive and broadly accessible language that would allow a biologist to grasp at least the key principles of what is done by those initial analyses of the visual information in the model. A schematic illustration of what Zernike moments are, maybe combined with some simple examples, might help a lot. This is important as the paper is not only directed towards computational biologists, but is highly relevant also for physiologists, anatomists and behavioralists, most of whom would probably fail to grasp the essence of the new principles presented. In a similar vein, the authors should ask a mammalian researcher to read the article and provide them with feedback on how accessible they found it. Simple terminology/concepts/structure names in the Abstract/Introduction should not be used until they have been introduced properly, e.g., 'route following', 'visual homing', 'anterior optic tubercle'.

2) On a similar note, the article builds on a lot of prior modeling literature in the insect navigation field, particularly the work from Barbara Webb and colleagues. Important concepts/algorithmic strategies need to be more fully explained here (with appropriate citations) rather than just being referred to in the prior literature. The Materials and methods section does a good job of this, but the Results section could benefit from more explanation to guide unfamiliar readers.

3) It is entirely reasonable that the authors combine experimental and modeling work from a range of different insect species to build different pieces of their own model. By and large they are careful to state which is which. However, they could make it clearer which assumptions are based on experimental data and which are based on prior models (i.e., not actual data). As an example, although the mushroom body has been suggested by numerous modeling studies and conceptually driven reviews to be involved in visual navigation, the experimental evidence for this is lacking, and their precise role is far from well-established.

4) It is excellent that the authors integrate useful components from prior models to construct their integrated model. Although the figures go some way towards clarifying how the different pieces might fit together, it would be useful to make even clearer what is entirely novel here and what is derived/integrated from previous work. In addition, although the authors make a testable case for the involvement of the fan-shaped body in a series of different navigational computations, controlled by the mushroom body, the figures are still somewhat complex and confusing. These should be clarified for the broader readership.

5) Neuroanatomical correspondence of model details: The paper claims that the model is in most parts biologically constrained and that most elements can be mapped onto known neurons. Where this was not possible (route following) the authors speculated about the possible implementations. While on the level of neuropil groups this is all quite true, the details, especially in the central complex, are less clear and many of the proposed circuits have no known counterpart in any insect brain to date. This is not to say that those parts of the model are not realistic or interesting, but the claim that they correspond to existing neurons in the central complex is slightly misleading. Below is a series of obvious mix-ups of cell types, which need to be corrected (5.1); additionally, it should be clearly stated where the model does not (yet) have a solid grounding in biology (see point 5.2). Finally, the speculative route following implementation seems at odds with neurophysiological data from various species, and alternative pathways and implementations seem more likely (point 5.3).

5.1) Subsection “Mushroom Bodies As Drivers of Rotational Invariant Visual Homing”: CPU3 neurons are supposed to be a mirrored TB1 ring attractor network? Is this really what the authors want to say? CPU3 neurons are known in locusts (Heinze and Homberg, 2008), but connect the PB with the FB as columnar cells. If the authors mean CPU4 cells, these neurons are also not forming a ring-network (even though they could receive shifted compass information from TB1 cells by some means). Most simply, would not a parallel set of TB1 cells be optimally suited for this task? There are four TB1 cells for each column in the PB, potentially enough for four parallel ring attractors. These cells are neurochemically distinct and could function independently (see Beetz et al., 2015).

– There is no known direct connection between the EB and the FB (proposed in Figure 4).

– There is no direct connection from the OL to the CX (indicated in the legend of Figure 1 as underlying PI).

– Subsection “Celestial current heading”: CL2 neurons should be CL1 (CL2 correspond to fly P-EN neurons, not E-PG)

– In the PI section of the Materials and methods, sometimes TN cells are referred to as TN2 cells or just as TN cells. TN2 is one of two types of TN cells (tangential noduli neurons) and was the one primarily used for the standard model of Stone et al., 2017. Please be consistent. Also, the tuning cells of the visual homing circuit are called TN cells. This is very confusing and should be changed.

5.2) There are no known ring attractors in the FB. The only ring attractor shown experimentally is the one in the EB/PB, which employs recurrent feedback loops with the PB (E-PG/P-EN/P-EG cells; equal to CL1a, CL2, and CL1b) and inhibitory neurons in the PB (TB1 or delta7 cells). While a similar recurrent connection pattern is thinkable in the FB as well, using unknown types of columnar cells, there is no experimental support for that. Pontine cells might also form local connections that could result in a RA, but that is even more speculative. Please clearly state that the numerous RAs required by the model are hypothetical and have not yet any biological correspondence in the form of identified cell types. Also, I suppose not all the neuron rings drawn in the figures are ring attractors. I suggest to make that distinction more clear (the many abbreviations for the different neuron rings do not make this easier to follow either).

5.3) The authors assume a second compass system in the PB that is fed directly from the OL via the posterior optical tract. There is no evidence for this beyond a single cell type from locusts that connects the accessory medulla (circadian clock) to the POTU, which is also innervated by TB1 neurons. However, there is no connection to the visual part of the OL, and no physiological data exists on the AME->POTU connection. In contrast, the anterior optic tract via the AOTU has been shown in Drosophila to contain many neurons that respond to visual features and they converge on the head direction cells in the EB via a recently resolved mechanism. It seems odd to ignore this known compass pathway and propose another one for which no evidence exists. That said, the authors use the anterior pathway to construct a desired heading via an ANN residing in the AOTU/BU pathway, information that is then used to feed into an EB ring attractor that then connects to additional attractors in the FB. Whereas the EB attractor (in conjunction with the PB) exists, there is no evidence for FB based ring attractors and there is no known direct connection between the EB and the FB. While this all results in a really nice figure, it unfortunately is misleading and based on not enough evidence to show it so prominently (readers might easily take it for factual).

It is useful to point out that there is an alternative solution for at least the compass problem: There are four individual CL1 cells in each column of the EB in locusts as well as in flies (EPG/PEG cells). While they are identical in their projection patterns, some connect the PB to the EB and others connect the EB to the PB, so that there are in theory enough cells to form two parallel recurrent loops (needed to maintain a head direction signal). One of them could be driven by landmarks, while the other could be driven by global compass cues. Whereas the current idea is that both inputs converge on a single head direction signal (celestial and local cue based), this might not be true, given that local cues have been tested in Drosophila and global cues in locusts and some other species. These neurons are neurochemically distinct and most likely play different functional roles.

Finally, with respect to the desired heading, a short-term-plasticity-based associative mechanism linking the phase of the head direction signal and the local environment was recently demonstrated in Drosophila (Fisher et al., 2019 and Kim et al., 2019). The authors state that several of these phases can be stored and retrieved in each respective environment. This sounds very close to what the authors of the current study suggest for routes in ants. The authors should consider these points and revise the proposed circuit identity accordingly.

6) The overall layout of the model could be further clarified. The authors present many (nicely illustrated) parts of the model, but it is difficult to reconcile some of the partial models with one another and there is no immediate way of seeing how many neurons there are overall, or what their complete connectivity patterns might be. This may be obvious from the code itself, but behavioural biologists, neuroanatomists and physiologists need to be provided more direct intuition for the circuits. The absence of this information hinders independent interpretation and finding alternative solutions for mapping the model onto anatomical neural circuits once newly discovered neurons become available in the future. One possibility is to include (at least in the supplements) a full graphical depiction of the model with all existing neurons and their connections. Maybe using a force-directed graph diagram like the one used by the authors of Stone et al., 2017 for their path integration model would result in a model illustration that is intuitively understandable for researchers who think more in terms of anatomy. But even if it turns out to be somewhat messy, it would still be helpful.

7) The authors could derive more constraints from the fly physiology literature than they do. As examples, Fisher et al., 2019 and Kim et al., 2019 have relevant findings relating to plasticity in mapping visual stimuli onto a compass representation. Turner-Evans et al., 2017 has a data-driven ring attractor model that is relevant, and Turner-Evans, 2019 features data demonstrating that the fly compass for current heading relies on visual input from the anterior optic tubercle, contrary to the authors' assumption deriving from an anatomical pathway from the posterior optic tubercle to the protocerebral bridge. On a somewhat related note, the fly heading system does not necessarily show 'bar following' in open loop: the experiments cited (Seelig and Jayaraman, 2015) were performed in closed loop, with the animal controlling bar position.

8) The authors should also release and include the properly commented code that they used for modelling in the final submission.

9) Why is the velocity of the simulated ant (V0 = 1 cm/s) so much slower than that of the real one (about 50 cm/s)? This point must be discussed. Is there any fundamental reason?

10) What would happen to the simulated ant if an obstacle was placed on the familiar route? What is the robustness of the Zernike-moment-based algorithm to the unpredicted presence of an obstacle that could appear during homing? Additional simulations addressing this issue could show the robustness of the proposed navigation model. These new simulations could be in line with the well-known experiments of Wehner and Wehner (R. Wehner and S. Wehner, "Insect navigation: use of maps or Ariadne's thread?").

11) Subsection “ANN network and Route Following”: would it be possible to plot CRF with respect to the angular orientation of the simulated ant in various places (in 10° steps, for example)?

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Thank you for submitting your article "A Decentralised Neural Model Explaining Optimal Integration of Navigational Strategies in Insects" for consideration by eLife. Your article has been reviewed by two peer reviewers, and the evaluation has been overseen by Mani Ramaswami as Reviewing Editor and Michael Eisen as the Senior Editor. The following individuals involved in review of your submission have agreed to reveal their identity: Stanley Heinze (Reviewer #2).

The reviewers have discussed the reviews with one another, expressed enthusiasm and appreciation for the revisions you made to the original submission, but still have some suggestions you should consider for further improvement of this article. The Reviewing Editor has drafted this decision to help you prepare a revised submission.

We would like to draw your attention to changes in our revision policy that we have made in response to COVID-19 (https://elifesciences.org/articles/57162). Specifically, we are asking editors to accept without delay manuscripts, like yours, that they judge can stand as eLife papers without additional data, even if they feel that they would make the manuscript stronger. Thus the revisions requested below only address clarity and presentation.

Summary:

This is an original, focussed and timely study on a topic of considerable interest: computational strategies used by insects to perform their remarkable navigational feats. The authors identify shortcomings in existing models: specifically, that they do not account for the entire range of capabilities and the flexibility that the most accomplished insect navigators display, such as visual homing, i.e., the ability of an ant to return to a familiar region from novel locations. They then integrate and build upon prior models to successfully fill these gaps. The integrated model is particularly valuable because it pins specific computational functions on specific anatomical structures, most notably the central complex and the mushroom body. It is an important addition both to the literature on the insect central complex and to more theoretical navigational work, in particular as many predictions can be made based on the presented models, making it, in principle, testable in the near-to-medium term. The figures are well made and the writing is compact; the revisions are extensive and carefully done, and, after minor edits, the paper should be ready for publication.

Larger points for consideration (Optional revisions at the authors’ discretion):

1) For increased accessibility and readability, please consider doing a little more with the earliest schematics to properly orient the broader readership. As an example, Figure 2A is fairly complicated for a reader who isn't familiar with the insect brain, and the panels at right will not be easy for most people to digest without Stone et al., 2017, open nearby (although the Stone et al. study is a must-read for anyone interested in insect navigation, this may not be the ideal way to get people to read the paper!). Could 'Shifted I-TB1' be unpacked a little by showing how the anatomy of the PB-FB columnar neurons might naturally facilitate the shift (as highlighted in the Stone et al. paper)? Perhaps this could be a little breakout box at right. Note that the TB neurons should, in any case, be shown in the PB, not the FB.

2) Similarly, although the vector subtraction plots below are helpful to the informed reader, it is not clear that they would be sufficiently explanatory to a newer reader. Again, it is the authors' decision whether or not to do more here.

3) A bigger, conceptual point: it is not obvious that one needs as many additional near-independent ring attractors as are invoked in this model, leaving aside the issue that they seem unlikely from an anatomical perspective. It is not clear or convincing that multiple ring attractors are needed to implement the authors' ideas. This potential opportunity for parsimony deserves some exploration, but the authors can decide whether that's something they want to do as part of this paper or not. The one thing that would be good is to make clear where the additional ring attractors reside in the authors' model: if they are speculatively placed in the FB, that should be made clearer in the early schematics (and also made clear that it is speculation at this stage).

https://doi.org/10.7554/eLife.54026.sa1

Author response

Essential revisions:

1) Accessibility to a broad readership. While the general text is written very well and the content is highly interesting for a life science (in particular insect neuroscience) audience, the Materials and methods section and some aspects of the reasoning behind the model are very technical. Even the insect neurobiologists among the reviewers struggled to follow large parts of the Materials and methods and had never heard of Zernike moments, for instance. The text should be revised to include more intuitive and broadly accessible language that would allow a biologist to grasp at least the key principles of the initial analyses of the visual information in the model. A schematic illustration of what Zernike moments are, perhaps combined with some simple examples, might help a lot. This is important as the paper is not only directed towards computational biologists, but is highly relevant also for physiologists, anatomists and behavioralists, most of whom would probably fail to grasp the essence of the new principles presented.

We have added a smoother introduction to the results arising from frequency-encoded views, specifically in the sections titled Visual Homing and Route Following. In addition, we have added a completely new section to the Materials and methods, titled "Frequency Encoding Conceptual Overview", which provides an intuitive description of frequency encoding and prefaces the mathematical description of the Zernike Moment method. This section is accompanied by an updated Figure 7, which includes both a cartoon depicting frequency encoding and the real Zernike Moment encodings of skylines from the simulated world. Finally, we have added a paragraph to the Discussion which considers how the model could be improved, including a discussion of more biologically plausible methods for frequency encoding. We believe that this part of our contribution should now be much clearer to the non-expert reader.
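For intuition, the key property that frequency encoding provides can be conveyed with a 1-D analogue (an illustrative sketch only, not the Zernike Moment implementation used in the model): a rotation of the agent circularly shifts its panoramic skyline, which changes only the phases of the skyline's Fourier coefficients, so the amplitude spectrum is a rotation-invariant descriptor of place.

```python
import numpy as np

def amplitude_spectrum(skyline):
    """Rotation-invariant descriptor: amplitudes of the Fourier
    coefficients of a 1-D panoramic skyline (one elevation sample
    per azimuth). Rotating the viewer circularly shifts the skyline,
    which changes only the phases, not the amplitudes."""
    return np.abs(np.fft.rfft(skyline))

# Toy panoramic skyline: 360 azimuth samples of terrain elevation.
rng = np.random.default_rng(0)
skyline = rng.random(360)

# A 90-degree rotation of the agent is a 90-sample circular shift.
rotated = np.roll(skyline, 90)

# The amplitude spectra match, so the descriptor is identical at any
# body orientation -- the property needed for rotation-invariant homing.
print(np.allclose(amplitude_spectrum(skyline), amplitude_spectrum(rotated)))  # True
```

The Zernike Moment magnitudes used in the model provide the analogous invariance for 2-D images.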

In a similar vein, the authors should ask a mammalian researcher to read the article and provide them with feedback on how accessible they found it.

We have discussed the presentation of our data with other researchers with experience in both insect and mammalian navigation, both personally and via presentations (e.g. a student presentation at the recent NeuroMatch conference). These discussions have helped us immensely in revising our presentation of challenging concepts; see the new introduction to frequency encoding (section "Frequency Encoding Conceptual Overview") as an example.

Simple terminology/concepts/structure names in the Abstract/Introduction should not be used until they have been introduced properly, e.g., 'route following', 'visual homing', 'anterior optic tubercle'.

The Abstract has been updated with definitions of route following and visual homing, and we have made it clearer that we map these behaviours to specific neural circuits in the insect brain, e.g. the Mushroom Bodies and Anterior Optic Tubercle. Upon re-reading the text, we believe that this issue largely arose from pointing readers to Figure 1 before definitions were given in the Results. Hence, we have updated Figure 1 and its legend to provide a more intuitive introduction to our model and the associated brain areas, with clearer labelling and definitions. Similar edits have been made in the Introduction, which taken together we believe addresses this issue.

2) On a similar note, the article builds on a lot of prior modeling literature in the insect navigation field, particularly the work from Barbara Webb and colleagues. Important concepts/algorithmic strategies need to be more fully explained here (with appropriate citations) rather than just being referred to in the prior literature. The Materials and methods section does a good job of this, but the Results section could benefit from more explanation to guide unfamiliar readers.

As suggested, we have added a detailed description of the functioning of the steering circuit as outlined by Stone et al., 2017, where it is first used (see section titled "Mushroom Bodies as Drivers of Rotational Invariant Visual Homing"). This is accompanied by an additional panel in Figure 2 depicting the steering circuit function in vector format. We have also added a more detailed explanation of the neurophysiological and functional advances made in Stone et al., 2017, regarding PI (see section titled "Optimally Integrating Visual Homing and Path Integration").
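The essence of that steering comparison can be conveyed in a few lines (a hypothetical minimal sketch, not the published implementation): the desired-heading activity pattern is compared against left- and right-shifted copies of the current-heading pattern, and the difference of the two overlaps yields a turning signal proportional to the sine of the heading error.

```python
import numpy as np

def steering_signal(current, desired):
    """Minimal rate-based sketch of a steering comparison: sinusoidal
    heading encodings across eight columns (as in the protocerebral
    bridge); the desired pattern is correlated with the current
    pattern shifted one column each way, and the difference signals
    which way to turn (positive = turn one way, in this toy convention)."""
    n = 8
    cols = np.arange(n) * 2 * np.pi / n
    cur = np.cos(cols - current)       # current-heading activity bump
    des = np.cos(cols - desired)       # desired-heading activity bump
    left = np.dot(des, np.roll(cur, 1))    # current heading shifted +45 deg
    right = np.dot(des, np.roll(cur, -1))  # current heading shifted -45 deg
    return left - right                # proportional to sin(desired - current)

# Opposite heading errors produce opposite-signed steering commands.
print(steering_signal(0.0, np.pi / 4) * steering_signal(0.0, -np.pi / 4) < 0)  # True
```

Working through the trigonometry, `left - right` reduces to a constant times `sin(desired - current)`, which is the smooth turn-toward-goal signal the model relies on.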

3) It is entirely reasonable that the authors combine experimental and modeling work from a range of different insect species to build different pieces of their own model. By and large they are careful to state which is which. However, they could make it clearer which assumptions are based on experimental data and which are based on prior models (i.e., not actual data). As an example, although the mushroom body has been suggested by numerous modeling studies and conceptually driven reviews to be involved in visual navigation, the experimental evidence for this is lacking, and their precise role is far from well-established.

We have added clarification to the text describing our MB model, which we believe is the only section based solely on prior models. Further, we have clarified which neural pathways are known and which we speculate, through the use of dashed (speculated) and solid (known) connections throughout our figures. In addition, we have added Table 2, which details the neurophysiological studies on which we base our models, making it clear which elements are biologically known and which are hypothesised.

4) It is excellent that the authors integrate useful components from prior models to construct their integrated model. Although the figures go some way towards clarifying how the different pieces might fit together, it would be useful to make even clearer what is entirely novel here and what is derived/integrated from previous work.

As part of our update to all figures, we introduced star labelling of circuits to indicate which elements were derived from previous works, which are completely novel, and which are a mixture (e.g. a previous circuit used in a novel way, adapted, or integrated with other systems). See Figure 2 for its first usage.

In addition, although the authors make a testable case for the involvement of the fan-shaped body in a series of different navigational computations, controlled by the mushroom body, the figures are still somewhat complex and confusing. These should be clarified for the broader readership.

Each of the main figures and their legends has been revised, and we believe they should be much clearer now. In addition, the newly added Figure 6 should help show the neural connections in the fan-shaped body.

5) Neuroanatomical correspondence of model details: The paper claims that the model is in most parts biologically constrained and that most elements can be mapped onto known neurons. Where this was not possible (route following) the authors speculated about possible implementations. While at the level of neuropil groups this is all quite true, the details, especially in the central complex, are less clear and many of the proposed circuits have no known counterpart in any insect brain to date. This is not to say that those parts of the model are not realistic or interesting, but the claim that they correspond to existing neurons in the central complex is slightly misleading. Below is a series of obvious mix-ups of cell types, which need to be corrected (5.1); additionally, it should be clearly stated where the model does not (yet) have a solid grounding in biology (see point 5.2). Finally, the speculative route following implementation seems at odds with neurophysiological data from various species, and alternative pathways and implementations seem more likely (point 5.3).

This feedback is very helpful; thank you very much. See below the changes made accordingly.

5.1) Subsection “Mushroom Bodies As Drivers of Rotational Invariant Visual Homing”: CPU3 neurons are supposed to be a mirrored TB1 ring attractor network? Is this really what the authors want to say? CPU3 neurons are known in locusts (Heinze and Homberg, 2008), but connect the PB with the FB as columnar cells. If the authors mean CPU4 cells, these neurons are also not forming a ring-network (even though they could receive shifted compass information from TB1 cells by some means). Most simply, would not a parallel set of TB1 cells be optimally suited for this task? There are four TB1 cells for each column in the PB, potentially enough for four parallel ring attractors. These cells are neurochemically distinct and could function independently (see Beetz et al., 2015).

Thanks for the feedback. We have changed the text (subsection “Mushroom bodies as drivers of rotational invariant visual homing”, fourth paragraph). See also the response to point 5.3.

– There is no known direct connection between the EB and the FB (proposed in Figure 4)

We have amended both the text and figure to show that this connection is included in our hypothesised pathways but also cite Hanesch et al., 1989, who show evidence for such a connection.

– There is no direct connection from the OL to the CX (indicated in the legend of Figure 1 as underlying PI).

We have amended the model pathway accordingly to 'OL->AOTU->LAL->CX'.

– Subsection “Celestial current heading”: CL2 neurons should be CL1 (CL2 correspond to fly P-EN neurons, not E-PG)

Changed labels to 'CL1' as suggested.

– In the PI section of the Materials and methods, sometimes TN cells are referred to as TN2 cells or just as TN cells. TN2 is one of two types of TN cells (tangential noduli neurons) and was the one primarily used for the standard model of Stone et al., 2017. Please be consistent. Also, the tuning cells of the visual homing circuit are called TN cells. This is very confusing and should be changed.

We have now changed all TN to be TN2.

5.2) There are no known ring attractors in the FB. The only ring attractor shown experimentally is the one in the EB/PB, which employs recurrent feedback loops with the PB (E-PG/P-EN/P-EG cells; equal to CL1a, CL2, and CL1b) and inhibitory neurons in the PB (TB1 or delta7 cells). While a similar recurrent connection pattern is conceivable in the FB as well, using unknown types of columnar cells, there is no experimental support for that. Pontine cells might also form local connections that could result in a RA, but that is even more speculative. Please clearly state that the numerous RAs required by the model are hypothetical and do not yet have any biological correspondence in the form of identified cell types. Also, I suppose not all the neuron rings drawn in the figures are ring attractors. I suggest making that distinction more clear (the many abbreviations for the different neuron rings do not make this easier to follow either).

Thanks for this feedback. We only propose one new ring attractor in our model, which is used to combine the PI and VH signals, but on re-reading our text we see that this was not clear. We have added a sentence to the manuscript, where you suggested, clarifying the difference between ring networks and ring attractor networks. In addition, we have added labels to the figures to clearly indicate where ring attractors are used. Finally, the new Table 2 provides a description of each circuit element (e.g. ring network vs. ring attractor network) and its biological support.
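To illustrate the distinction we draw between a plain ring of neurons and a ring attractor network, the following generic sketch (an illustration only, not the integration circuit of our model) shows the defining property: local excitation plus longer-range inhibition lets a briefly cued bump of activity sustain itself after the cue is removed.

```python
import numpy as np

def simulate_ring(n=16, steps=400, cue_idx=4, dt=0.1):
    """Generic firing-rate ring attractor sketch: neighbouring units
    excite each other, distant units inhibit each other. A brief cue
    at one position leaves a self-sustaining bump of activity there."""
    idx = np.arange(n)
    # Circular distance between every pair of units on the ring.
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    w = 3.0 * (np.exp(-dist ** 2 / 2.0) - 0.5)  # local excitation, surround inhibition
    r = np.zeros(n)
    for t in range(steps):
        cue = np.zeros(n)
        if t < 50:                               # transient cue, then removed
            cue[cue_idx] = 1.0
        drive = w @ r + cue
        r += dt * (-r + np.clip(drive, 0.0, 1.0))  # saturating rate dynamics
    return r

rates = simulate_ring()
# The bump persists at the cued position long after the cue is gone.
print(int(np.argmax(rates)))  # 4
```

A plain ring network, by contrast, simply relays a pattern of activity around a circle of neurons with no such persistent, self-stabilising dynamics.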

5.3) The authors assume a second compass system in the PB that is fed directly from the OL via the posterior optical tract. There is no evidence for this beyond a single cell type from locusts that connects the accessory medulla (circadian clock) to the POTU, which is also innervated by TB1 neurons. However, there is no connection to the visual part of the OL, and no physiological data exists on the AME->POTU connection. In contrast, the anterior optic tract via the AOTU has been shown in Drosophila to contain many neurons that respond to visual features and they converge on the head direction cells in the EB via a recently resolved mechanism. It seems odd to ignore this known compass pathway and propose another one for which no evidence exists. That said, the authors use the anterior pathway to construct a desired heading via an ANN residing in the AOTU/BU pathway, information that is then used to feed into an EB ring attractor that then connects to additional attractors in the FB. Whereas the EB attractor (in conjunction with the PB) exists, there is no evidence for FB based ring attractors and there is no known direct connection between the EB and the FB. While this all results in a really nice figure, it unfortunately is misleading and based on not enough evidence to show it so prominently (readers might easily take it for factual).

It is useful to point out that there is an alternative solution for at least the compass problem: There are four individual CL1 cells in each column of the EB in locusts as well as in flies (EPG/PEG cells). While they are identical in their projection patterns, some connect the PB to the EB and others connect the EB to the PB, so that there are in theory enough cells to form two parallel recurrent loops (needed to maintain a head direction signal). One of them could be driven by landmarks, while the other could be driven by global compass cues. Whereas the current idea is that both inputs converge on a single head direction signal (celestial and local cue based), this might not be true, given that local cues have been tested in Drosophila and global cues in locusts and some other species. These neurons are neurochemically distinct and most likely play different functional roles.

This is a very interesting and helpful piece of feedback; thank you. We have amended the terrestrial heading (local compass) pathway of our model to OL->AOTU->BULB->EB->PB to be consistent with the neural data. We have updated our model accordingly, as can be seen in Figure 1C, Figure 4A and Figure 6A. Specifically, there are four individual TB1/Δ7 cells in each column, which can be used to represent different current headings. We apply one pathway to generate the global compass (celestial heading for VH and PI; I-TB1 neurons) and another to generate the local compass (terrestrial heading for RF; II-TB1 neurons). We have also updated the text in the description of the models and in the Discussion.

Finally with respect to the desired heading, a short term plasticity based, associative mechanism linking the phase of the head direction signal and the local environment was recently demonstrated in Drosophila (Fisher et al., 2019 and Kim et al., 2019). The authors state that several of these phases can be stored and retrieved in each respective environment. This sounds very close to what the authors of the current study suggest for routes in ants. The authors should consider these points and revise the proposed circuit identity accordingly.

Thank you for this feedback. We have now added these references to our discussion of the possible biological evidence and pathways for this type of local compass information (see section "Route Following in the Insect Brain"), and adapted our model accordingly. These works helped us formulate our discussion of insects possessing multiple compass systems and of their ability to associate views with orientations. Finally, we have added some text to the Discussion, inspired by these works, regarding which groups of neurons in the CX might encode current vs. desired headings.

6) The overall layout of the model could be further clarified. The authors present many (nicely illustrated) parts of the model, but it is difficult to reconcile some of the partial models with one another and there is no immediate way of seeing how many neurons there are overall, or what their complete connectivity patterns might be. This may be obvious from the code itself, but behavioural biologists, neuroanatomists and physiologists need to be given a more direct intuition for the circuits. The absence of this information hinders independent interpretation and finding alternative solutions for mapping the model onto anatomical neural circuits once newly discovered neurons become available in the future. One possibility is to include (at least in the supplements) a full graphical depiction of the model with all existing neurons and their connections. Perhaps a force-directed graph diagram, like that used by the authors of Stone et al., 2017, for their path integration model, would result in a model illustration that is intuitively understandable to researchers who think more in terms of anatomy. But even if it turns out to be somewhat messy, it would still be helpful.

That’s a really good suggestion, thanks. We have added both a new force-directed graph (Figure 6, Materials and methods section) and a table (Table 2, Materials and methods section) that clarify the type and function of all neurons, and the connections, that comprise the model.

7) The authors could derive more constraints from the fly physiology literature than they do. As examples, Fisher et al., 2019 and Kim et al., 2019 have relevant findings relating to plasticity in mapping visual stimuli onto a compass representation. Turner-Evans et al., 2017, has a data-driven ring attractor model that is relevant, and Turner-Evans, 2019, features data demonstrating that the fly compass for current heading relies on visual input from the anterior optic tubercle, contrary to the authors' assumption deriving from an anatomical pathway from the posterior optic tubercle to the protocerebral bridge (lines 175-176). On a somewhat related note, the fly heading system does not necessarily show 'bar following' in open loop: the experiments cited (Seelig and Jayaraman, 2015) were performed in closed loop, with the animal controlling bar position.

Thank you for this feedback. We have changed the neural pathway to generate the terrestrial heading (local compass) as suggested. See response to point 5.3.

8) The authors should also release, with the final submission, the properly commented code that they used for the modelling.

Additional comments have been added to the source code; the code, the simulation implementations and a GUI have been uploaded with the full submission and also to a GitHub repository: https://github.com/XuelongSun/InsectNavigationToolkitModelling.

9) Why is the velocity of the simulated ant (Vo = 1cm/s) so much slower than that of the real one (about 50cm/s)? This point must be discussed. Is there any fundamental reason?

Apologies, there was a typo in our description of the model velocity, which stated 1 cm/s; this should be 1 cm/step. There is no link between the speed in the simulation and the computation. Rather, the simulated ant moves in 1 cm steps: it computes the direction to move, takes a step, and repeats. We have now corrected this in the text (Materials and methods section).
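To make the point concrete, here is a hypothetical minimal sketch (not code from the model) of step-based path integration: the home vector is simply the negated sum of fixed-length per-step displacements, so nothing in the computation depends on how much real time one step represents.

```python
import numpy as np

# The simulated ant advances in fixed 1 cm steps; real-world speed is
# irrelevant to the computation, only the sequence of headings matters.
STEP_CM = 1.0

def outbound(headings_rad):
    """Accumulate per-step displacement for a sequence of headings."""
    steps = STEP_CM * np.stack([np.cos(headings_rad), np.sin(headings_rad)], axis=1)
    return steps.sum(axis=0)          # net displacement from the nest

# A 3-step outbound path: east, east, north.
pos = outbound(np.array([0.0, 0.0, np.pi / 2]))
home_vector = -pos                    # points from the ant back to the nest
print(home_vector)                    # [-2. -1.]
```

Whether one step stands for 0.02 s or 1 s of real time, the accumulated home vector is identical.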

10) What would happen to the simulated ant if an obstacle was placed on the familiar route? How robust is the Zernike-moment-based algorithm to the unpredicted presence of an obstacle that could appear during homing? Additional simulations to address this issue could show the robustness of the proposed navigation model. These new simulations could be in line with the well-known experiments proposed by Wehner and Wehner (R. Wehner and S. Wehner, "Insect navigation: use of maps or Ariadne's thread?").

This is a lovely idea and we have started to simulate such scenarios, but we believe this is beyond the initial proof-of-concept nature of this study and better suited to a comparative study of this model against others across environments, with an investigation of the parameters that are likely to have a large effect on performance. We have added a sentence to the Discussion raising exactly this idea as a logical next step in evaluating the model.

11) Subsection “ANN network and Route Following”: would it be possible to plot Crf with respect to the angular orientation of the simulated ant in various places (every 10° steps, for example)?

We investigated plotting Crf as the reviewer suggested, but as the preferred orientation changes with location this was very hard to visualise across locations. We found that this data was more intuitively presented using the quiver plots in the background of Figure 4B.

[Editors' note: further revisions were suggested prior to acceptance, as described below.]

Larger points for consideration (Optional revisions at the authors’ discretion):

1) For increased accessibility and readability, please consider doing a little more with the earliest schematics to properly orient the broader readership. As an example, Figure 2A is fairly complicated for a reader who isn't familiar with the insect brain, and the panels at right will not be easy for most people to digest without Stone et al., 2017, open nearby (although the Stone et al. study is a must-read for anyone interested in insect navigation, this may not be the ideal way to get people to read the paper!). Could 'Shifted I-TB1' be unpacked a little by showing how the anatomy of the PB-FB columnar neurons might naturally facilitate the shift (as highlighted in the Stone et al. paper)? Perhaps this could be a little breakout box at right. Note that the TB neurons should, in any case, be shown in the PB, not the FB.

Thank you for the feedback. We have now revised Figure 2 to address the key issues. Specifically, we have replaced the previous vector diagram and VH schematic with a combined schematic that we hope makes it easier to understand conceptually (a) how the steering circuit functions and (b) how VH functions through a simple shifted heading input to the steering circuit. On reflection, we think that if readers understand these points then much of what follows should be clear, and these schematics should allow that.

Also, as requested, we have relabelled what we previously called "shifted I-TB1" neurons as "VH" neurons, as we can only speculate at this stage which neurons would store this signal.

Regarding adding detail to the shifting circuit: we considered this feedback and agree that it deserves addressing, but we prefer to add a discussion of possible shifting mechanisms to the text, where we have space for details and speculation.

2) Similarly, although the vector subtraction plots below are helpful to the informed reader, it is not clear that they would be sufficiently explanatory to a newer reader. Again, it is the authors' decision whether or not to do more here.

We have removed the vector plot entirely, as it has been superseded by the new Figure 2B and C.

3) A bigger, conceptual point: it is not obvious that one needs as many additional near-independent ring attractors as are invoked in this model, leaving aside the issue that they seem unlikely from an anatomical perspective. It is not clear or convincing that multiple ring attractors are needed to implement the authors' ideas. This potential opportunity for parsimony deserves some exploration, but the authors can decide whether that's something they want to do as part of this paper or not. The one thing that would be good is to make clear where the additional ring attractors reside in the authors' model: if they are speculatively placed in the FB, that should be made clearer in the early schematics (and also made clear that it is speculation at this stage).

We have added two paragraphs to the Discussion to address these points directly. Specifically, we clarify the benefits of the proposed RAs, which may make them more parsimonious than alternative optimal integration networks. Also, we have clarified how many RAs we propose, which we feel was unclear in previous edits. Finally, we add a call for future analysis of biological realism, indicating that simpler mechanisms cannot be ruled out. We have included a label to the left of Figure 3A indicating that we propose the new integrating ring attractors reside in the FB. We have also added this clarification to the figure legend and to the general discussion in the main text.

https://doi.org/10.7554/eLife.54026.sa2

Article and author information

Author details

  1. Xuelong Sun

    Computational Intelligence Lab & L-CAS, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
    Contribution
    Conceptualization, Data curation, Software, Formal analysis, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    For correspondence
    xsun@lincoln.ac.uk
    Competing interests
    No competing interests declared
    ORCID iD: 0000-0001-9035-5523
  2. Shigang Yue

    1. Computational Intelligence Lab & L-CAS, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
    2. Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China
    Contribution
    Supervision, Funding acquisition, Project administration, Writing - review and editing
    For correspondence
    syue@lincoln.ac.uk
    Competing interests
    No competing interests declared
  3. Michael Mangan

    Sheffield Robotics, Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
    Contribution
    Conceptualization, Resources, Data curation, Supervision, Validation, Investigation, Visualization, Methodology, Writing - original draft, Writing - review and editing
    For correspondence
    m.mangan@sheffield.ac.uk
    Competing interests
    No competing interests declared

Funding

Horizon 2020 Framework Programme (ULTRACEPT 778062)

  • Xuelong Sun
  • Shigang Yue

Horizon 2020 Framework Programme (STEP2DYNA 691154)

  • Xuelong Sun
  • Shigang Yue

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication.

Acknowledgements

This research has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 778062, ULTRACEPT and No 691154, STEP2DYNA.

Thanks to Barbara Webb and the Insect Robotics Group at the University of Edinburgh, Hadi Maboudi, Alex Cope and Andrew Philippides for comments on early drafts, and to Antoine Wystrach for providing data from previous works. Thanks to proofreaders Anne and Mike Mangan (Snr). Finally, thanks to our editor and reviewers, who helped improve the model and manuscript through their excellent feedback.

Senior Editor

  1. Michael B Eisen, University of California, Berkeley, United States

Reviewing Editor

  1. Mani Ramaswami, Trinity College Dublin, Ireland

Reviewer

  1. Stanley Heinze, Lund University, Sweden

Publication history

  1. Received: November 28, 2019
  2. Accepted: June 26, 2020
  3. Accepted Manuscript published: June 26, 2020 (version 1)
  4. Version of Record published: July 16, 2020 (version 2)

Copyright

© 2020, Sun et al.

This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited.


