Learning a stimulus-response strategy for turbulent navigation.

(A) Representation of the search problem with turbulent odor cues obtained from Direct Numerical Simulations of fluid turbulence (grey scale, odor snapshot from the simulations). The discrete position s is hidden; the odor concentration zT = {z(s(t′), t′) | t − T ≤ t′ ≤ t} is observed along the trajectory s(t), where T is the sensing memory. (B) Odor traces from direct numerical simulations at different (fixed) points within the plume. Odor is noisy and sparse; information about the source is hidden in the temporal dynamics. (C) Contour maps of olfactory states with nearly infinite memory (T = 2598): on average, olfactory states map to different locations within the plume, and the void state lies outside the plume. Intermittency is discretized into three bins defined by two thresholds, 66% (red line) and 33% (blue line). Intensity is discretized into five bins (dark red shade to white shade) defined by four thresholds (percentiles 99%, 80%, 50%, 25%). (D) Performance of stimulus-response strategies obtained during training, averaged over 500 episodes. We train using realistic turbulent data with memory T = 20 and backtracking recovery.
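To make the discretization concrete, here is a minimal Python sketch (not the authors' code) of how a sensing window could be mapped to one of the 15 non-void olfactory states or to the void state; the choice of summary statistics (mean detected concentration for intensity, fraction of detections for intermittency) and the function name are illustrative assumptions.

```python
import numpy as np

INTERMITTENCY_THRESHOLDS = np.array([0.33, 0.66])  # two thresholds -> three bins

def olfactory_state(window, intensity_thresholds):
    """Map an odor window z_T to (intensity_bin, intermittency_bin); None is the void.

    intensity_thresholds: four concentration values, assumed precomputed as the
    25th, 50th, 80th, and 99th percentiles of detected odor in the plume data.
    """
    detected = window[window > 0]                 # samples with a detection
    if detected.size == 0:
        return None                               # void state: no odor within memory T
    intermittency = detected.size / window.size   # fraction of time odor is detected
    intensity = detected.mean()                   # mean detected concentration
    i_bin = int(np.digitize(intensity, intensity_thresholds))          # 0..4, five bins
    j_bin = int(np.digitize(intermittency, INTERMITTENCY_THRESHOLDS))  # 0..2, three bins
    return (i_bin, j_bin)                         # one of the 15 non-void states
```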

The optimal memory T*.

(A) Four measures of performance as a function of memory with backtracking recovery (solid line) show that the optimal memory T* = 20 maximizes average performance and minimizes standard deviation, except for the normalized time. Top: averages computed over 10 realizations of test trajectories starting from 43000 initial positions (dash: results with adaptive memory). Bottom: standard deviation of the mean performance metrics for each initial condition (see Materials and Methods). (B) Average number of times agents encounter the void state along their path, 〈N〉, as a function of memory (top); the cumulative average reward 〈G〉 is inversely correlated with 〈N〉 (bottom), hence the optimal memory minimizes encounters with the void. (C) Colormaps: probability that agents at different spatial locations are in the void state at any point in time, starting the search from anywhere in the plume, with a representative trajectory of a successful searcher (green solid line), for memory T = 1, T = 20, T = 50 (left to right). At the optimal memory, agents in the void state are concentrated near the edge of the plume. Agents with shorter memories encounter voids throughout the plume; agents with longer memories encounter more voids outside of the plume as they delay recovery. In all panels, shades are ± standard deviation.
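The inverse correlation between 〈G〉 and 〈N〉 suggests counting void entries directly along each test trajectory. A minimal sketch, assuming the void is encoded as None (as in the sketch above) and that an "encounter" means a non-void-to-void transition:

```python
def count_void_encounters(states):
    """Count entries into the void along one trajectory.

    states: iterable of olfactory states, with None encoding the void;
    an encounter is counted at each non-void -> void transition.
    """
    encounters, was_void = 0, False
    for s in states:
        is_void = s is None
        if is_void and not was_void:
            encounters += 1
        was_void = is_void
    return encounters
```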

The role of temporal memory with the Brownian recovery strategy (same as main Figure 2A).

(A) Total cumulative reward (top left) and standard deviation (top right) as a function of memory, showing an optimal memory T* = 3 for the Brownian agent. Other measures of performance with their standard deviations show the same optimal memory (bottom). The trade-off between long and short memories discussed in the main text holds, but here exiting the plume is much more detrimental because regaining a position within the plume by Brownian motion takes much longer. (B) As for the backtracking agent, the optimal memory minimizes the number of times the agent encounters the void state.
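For concreteness, the two recoveries can be contrasted in a few lines; the grid-action encoding and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng()

# Unit moves on the grid; the encoding (+/-x along wind, +/-y crosswind) is assumed.
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def brownian_recovery(_path):
    """Memoryless recovery: a uniformly random step at every void time step."""
    return ACTIONS[rng.integers(len(ACTIONS))]

def backtracking_recovery(path):
    """Retrace the stored path: undo the most recent step, step by step."""
    dx, dy = path.pop()    # last step the agent took
    return (-dx, -dy)      # reverse it
```

Because the Brownian walker only diffuses (typical displacement grows like √t), regaining the plume after an exit takes far longer than retracing the stored path, consistent with the much shorter optimal memory T* = 3.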

The adaptive memory approximates the duration of the blank dictated by physics and is an efficient heuristic, especially when coupled with a learned recovery strategy.

(A) Top to bottom: colormaps of the Eulerian average blank time τb; average sensing memory T; standard deviation of the Eulerian blank time and of the sensing memory. The sensing memory statistics are computed over all agents located at each discrete cell, at any point in time. (B) Probability distribution of τb across all spatial locations and times (black) and of T across all agents at all times (gray). (C) Performance with the adaptive memory approaches the performance of the optimal fixed memory, here shown for backtracking; similar results apply to the Brownian recovery (Figure 3-figure supplement 2). (D) Comparison of five recovery strategies with adaptive memory: the learned recovery with adaptive memory outperforms all fixed and adaptive memory agents. In (C) and (D), dark squares mark the mean and light rectangles mark ± standard deviation. f+ is defined as the fraction of agents that reach the target at test, and hence has no standard deviation.
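One plausible implementation of the adaptive heuristic, in which the sensing memory tracks the duration of the current blank so that T approximates τb; the reset rule after a detection is an assumption for illustration:

```python
class AdaptiveMemory:
    """Sensing memory that tracks the duration of the current blank."""

    def __init__(self, t_min=1):
        self.t_min = t_min
        self.T = t_min
        self.blank = 0                 # time steps since the last detection

    def update(self, detected):
        if detected:
            self.blank = 0
            self.T = self.t_min        # shrink back after a detection (assumed rule)
        else:
            self.blank += 1
            self.T = max(self.T, self.blank)   # grow with the ongoing blank
        return self.T
```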

All four measures of performance across agents with fixed memory and backtracking vs Brownian recovery (green and red respectively, unframed boxes) and with adaptive memory for backtracking, Brownian, and learned recovery (green, red, and blue respectively, framed boxes).

Optimal policies with adaptive memory for different recovery strategies: backtracking (green), Brownian (red) and learned (blue).

For each recovery, we show the spatial distribution of the olfactory states (top), the policy (center), and the state occupancy (bottom), for non-void states (left) vs the void state π*(a|∅) (right). Spatial distribution: probability that an agent at a given position is in any non-void olfactory state (left) or in the void state (right), color-coded from yellow to blue. Policy: actions learned in the non-void states, ∑o≠∅ noπ*(a|o), weighted by their occupancy no (left, arrows proportional to the frequency of the corresponding action), and a schematic view of the recovery policy in the void state (right). State occupancy: fraction of agents that are in any of the 15 non-void states (left) or in the void state (right) at any point in space and time. Occupancy is proportional to the radius of the corresponding circle. The position of the circle identifies the olfactory state (rows and columns indicate the discrete intensity and intermittency, respectively). All statistics are computed over 43000 trajectories starting from any location within the plume.
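The occupancy weighting ∑o≠∅ noπ*(a|o) in the policy panel amounts to a single weighted average; a sketch, with assumed array shapes:

```python
import numpy as np

def occupancy_weighted_policy(pi, occupancy):
    """Average the optimal policy over non-void states, weighted by occupancy.

    pi: array of shape (n_states, n_actions) with rows pi*(a|o) for the 15
    non-void states; occupancy: array of shape (n_states,) with the counts n_o.
    Returns the per-action frequencies drawn as arrows in the policy panel.
    """
    weights = occupancy / occupancy.sum()   # normalize n_o to a distribution
    return weights @ pi                     # shape (n_actions,)
```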

Optimal policies for different recovery strategies and adaptive memory. From left to right: results for the backtracking (green), Brownian (red), and learned (blue) recovery strategies. Top: probability that an agent in a given olfactory state is at a specific spatial location, color-coded from yellow to blue. Rows and columns indicate the olfactory state; the void state is in the lower right corner. Arrows indicate the optimal action from that state. Bottom: circles represent the occupancy of each state; olfactory states are arranged as in the top panel. All statistics are computed over 43000 trajectories starting from any location within the plume.

The learned recovery resembles the cast and surge observed in animals, with an initial surge of 5 ± 2 steps and a subsequent crosswind motion starting from either side of the centerline and overshooting to the other side.
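A schematic generator of such a trajectory (the axis convention, step counts, and widening rule are illustrative choices, not the fitted parameters of the learned policy):

```python
def cast_and_surge(surge_steps=5, n_casts=3, first_cast=2, overshoot=2):
    """Yield (dx, dy) grid steps: +x is upwind, +/-y is crosswind (assumed axes)."""
    for _ in range(surge_steps):
        yield (1, 0)                         # initial upwind surge (~5 steps)
    width, side = first_cast, 1
    for _ in range(n_casts):
        for _ in range(width + overshoot):   # cross the centerline and overshoot
            yield (0, side)
        side = -side                         # cast back to the other side
        width += overshoot                   # widen each successive cast
```

Here `list(cast_and_surge())` would enumerate the full recovery path, step by step.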

Parameters of the learned recovery; statistics are computed over 20 independent trainings.

Generalization to statistically different environments.

(A) Snapshots of odor concentration normalized with the concentration at the source, color-coded from blue (0) to yellow (1), for environments 1 to 6 as labeled. Environment 1* is the native environment where all agents are trained. (B) Performance of the five recovery strategies, Brownian (red), backtracking (green), learned (blue), circling (orange), and zigzag (purple), with adaptive memory, trained on the native environment and tested across all environments 1 to 6. Four measures of performance defined in the main text are shown. Dark squares mark the mean, and empty rectangles mark ± standard deviation. No standard deviation is shown for the f+ measure for the learned, circling, and zigzag recoveries, as these strategies are deterministic (see Materials and Methods).

The learned recovery with adaptive memory and a single nonempty olfactory state (empty circles) displays degraded performance with respect to the full model (filled circles).

Gridworld geometry. From top: 2D size of the simulation box (agents that leave the box continue to receive negative reward and no odor); number of time stamps in the simulation, beyond which simulations are looped; number of actions per time stamp; speed of the agent; noise level below which odor is not detected; location of the source on the grid. See Table 3 for the values of the grid size Δx and the time stamps at which odor snapshots are saved.
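The rows of this table could be collected in a single configuration object; a hypothetical sketch with placeholder values (field names and numbers are not the published settings):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GridworldConfig:
    grid_shape: tuple           # 2D size of the simulation box
    n_time_stamps: int          # time stamps in the simulation; looped beyond this
    actions_per_stamp: int      # number of actions per time stamp
    agent_speed: float          # grid cells traveled per action
    detection_threshold: float  # noise level below which odor is not detected
    source_position: tuple      # location of the source on the grid

# Placeholder values only; see this table and Table 3 for the actual settings.
cfg = GridworldConfig(
    grid_shape=(1024, 256),
    n_time_stamps=2598,         # consistent with the near-infinite memory T = 2598 above
    actions_per_stamp=1,
    agent_speed=1.0,
    detection_threshold=3e-4,
    source_position=(20, 128),
)
```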

Parameters of the simulations. From left to right: simulation ID (1, 2, 3); length L, width W, height H of the computational domain; mean horizontal speed Ub = 〈u〉; Kolmogorov length scale η = (ν³/ε)^(1/4), where ν is the kinematic viscosity and ε is the energy dissipation rate; mean size of gridcell Δx; Kolmogorov timescale τη = η²/ν; energy dissipation rate ε = (ν/2)〈(∂ui/∂xj + ∂uj/∂xi)²〉; wall unit y+ = ν/uτ, where uτ is the friction velocity; bulk Reynolds number Reb = Ub(H/2)/ν based on the bulk speed Ub and half height; magnitude of velocity fluctuations u′ relative to the bulk speed; large eddy turnover time T = H/(2u′); frequency ωsave at which odor snapshots are saved, in units of τη. For each simulation, the first row reports results in non-dimensional units. The second and third rows provide an idea of how non-dimensional parameters match dimensional parameters in real flows in air and water, assuming the Kolmogorov length is 1.5 mm in air and 0.4 mm in water.
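As a worked example of the air matching in the second row, inverting η = (ν³/ε)^(1/4) with η = 1.5 mm and the standard kinematic viscosity of air gives the dimensional dissipation rate and Kolmogorov timescale:

```python
nu = 1.5e-5    # m^2/s, kinematic viscosity of air (standard value)
eta = 1.5e-3   # m, Kolmogorov length assumed for air in this table

epsilon = nu**3 / eta**4   # dissipation rate from eta = (nu^3/epsilon)^(1/4)
tau_eta = eta**2 / nu      # Kolmogorov timescale

print(f"epsilon = {epsilon:.2e} m^2/s^3, tau_eta = {tau_eta:.2f} s")
# -> epsilon = 6.67e-04 m^2/s^3, tau_eta = 0.15 s
```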