A hierarchical model of goal directed navigation selects trajectories in a visual environment
Introduction
The ability to navigate successfully to a predefined location is a vital task for many higher-order organisms. The goal location might be a food source, a temporary shelter, a nest, or some other desired place. Squirrels are effective at rediscovering their previously stashed food sources (Jacobs & Liman, 1991). Rats can learn to revisit or to avoid known food locations (Brown, 2011, Olton and Schlosberg, 1978). Mice learn to escape an unpleasant environment, such as a water-maze, by finding an out-of-sight escape platform after only a handful of learning trials (Morris et al., 1982, Redish and Touretzky, 1998, Steele and Morris, 1999). If a visible goal location is in the agent's field of view, the navigation task becomes trivial: the agent proceeds towards the visible goal while avoiding obstacles on the way. However, if the goal location is out of visual range or hidden (as in the water-maze), then navigation mechanisms based on cognitive capabilities that can exploit the previously encoded, currently out-of-view goal location become important for guiding the agent to the goal. Such a navigation mechanism would not necessarily need to pinpoint the goal location. It would be sufficient to guide the agent to the general neighborhood of the goal, so that the goal comes within the agent's visual range. The visually driven navigation system can then take over to home the agent in on the goal location, an approach used successfully by the robotic mapping system employed in this research (Milford & Wyeth, 2009).
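The final, visually driven homing stage described above can be illustrated with a minimal sketch: an attractive pull towards the visible goal plus a repulsive nudge away from nearby obstacles. This is purely illustrative and is not the controller used by Milford and Wyeth (2009); all parameter names and values are assumptions.

```python
import math

def homing_step(agent_xy, goal_xy, obstacles, step=0.1, avoid_radius=0.5):
    """One step of toy visual homing: move a fixed distance towards the
    visible goal, deflected away from obstacles closer than avoid_radius."""
    gx, gy = goal_xy[0] - agent_xy[0], goal_xy[1] - agent_xy[1]
    dist = math.hypot(gx, gy)
    if dist <= step:
        return tuple(goal_xy)  # goal within one step: snap to it
    vx, vy = gx / dist, gy / dist  # unit attraction towards the goal
    for ox, oy in obstacles:
        dx, dy = agent_xy[0] - ox, agent_xy[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < avoid_radius:  # repulsion grows as the obstacle nears
            vx += (dx / d) * (avoid_radius - d) / avoid_radius
            vy += (dy / d) * (avoid_radius - d) / avoid_radius
    norm = math.hypot(vx, vy) or 1e-9
    return (agent_xy[0] + step * vx / norm, agent_xy[1] + step * vy / norm)
```

Iterating this step from any start position with the goal in view converges on the goal, which is why the harder problem, discussed next, is getting the agent close enough for the goal to become visible in the first place.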
There is compelling evidence from physiological and behavioral data for spatial cognitive mechanisms in the brain that represent the agent's spatial environment and aid it during goal-directed navigation. The entorhinal cortex and hippocampus play a role in goal-directed behavior towards recently learned spatial locations in an environment. Rats show impairments in finding the location of a hidden platform in the Morris water-maze after lesions of the hippocampus, postsubiculum, or entorhinal cortex (Morris et al., 1982, Steele and Morris, 1999, Steffenach et al., 2005; Taube, Kesslak, & Cotman, 1992). Recordings from several brain areas in behaving rats show neural spiking activity relevant to goal-directed spatial behavior, including grid cells in the entorhinal cortex that fire when the rat is at any one of a regular, repeating array of locations falling on the vertices of tightly packed equilateral triangles (Hafting, Fyhn, Molden, Moser, & Moser, 2005), place cells in the hippocampus that respond at mostly unique spatial locations (O'Keefe and Nadel, 1978), head direction cells in the postsubiculum that respond to narrow ranges of allocentric head direction (Taube, 2007), and cells that respond to the translational speed of running (O'Keefe, Burgess, Donnett, Jeffery, & Maguire, 1998).
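The triangular-lattice firing pattern of a grid cell is commonly idealized as the sum of three cosine gratings whose directions are 60 degrees apart, which produces peaks on the vertices of equilateral triangles. The sketch below uses that standard idealization; the spacing, orientation, and phase parameters are illustrative assumptions, not values from this study.

```python
import numpy as np

def grid_cell_rate(pos, spacing=0.5, orientation=0.0, phase=(0.0, 0.0)):
    """Normalized firing rate of an idealized grid cell at 2-D position
    `pos`: three cosine gratings 60 degrees apart summed and rescaled
    from their exact range [-1.5, 3] into [0, 1]."""
    x, y = pos[0] - phase[0], pos[1] - phase[1]
    total = 0.0
    for k in range(3):
        theta = orientation + k * np.pi / 3  # grating directions 0, 60, 120 deg
        total += np.cos((4 * np.pi / (np.sqrt(3) * spacing)) *
                        (x * np.cos(theta) + y * np.sin(theta)))
    return (total + 1.5) / 4.5  # peak rate 1.0 at each lattice vertex
```

Evaluating this function over a square arena reproduces the hexagonally arranged firing fields reported for entorhinal grid cells.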
Evidence related to goal-directed navigation planning includes forward-sweeping events of spiking activity in rat place cell ensembles observed during vicarious trial-and-error experiments (Johnson and Redish, 2007, Pfeiffer and Foster, 2013) and sharp-wave ripple events during goal-directed spatial tasks (Davidson et al., 2009, Foster and Wilson, 2006, Jadhav et al., 2012, Louie and Wilson, 2001). Furthermore, brief sequences of place cell ensemble activity encoding trajectories from an agent's current location have been observed to be strongly biased towards the agent's predicted goal location (Pfeiffer & Foster, 2013).
In this work we combine two biologically inspired models that generate and maintain representations of their environment as collections of simulated spatially tuned neurons such as grid cells and place cells.
The first of these models is RatSLAM (Milford, Wyeth, & Prasser, 2004), which has been implemented on real robotic agents and has been shown to match or outperform state-of-the-art probabilistic robotic systems in encoding and navigating large environments over long periods of time (Milford and Wyeth, 2009, Prasser et al., 2006). However, the current RatSLAM model is not easily scalable, and its goal-directed navigation module is less biologically plausible than its Simultaneous Localization and Mapping (SLAM) component.
The second model is HiLAM (Erdem & Hasselmo, 2013), a biologically inspired goal-directed navigation model based on look-ahead trajectories through a hierarchical collection of simulated grid cells and place cells. While HiLAM is highly capable of simulating behavioral goal-directed navigation experiments, it is prone to failure in the presence of noisy and degraded input, since it has no mechanisms to detect and correct the stochastic loss of fidelity in its state representation. Consequently, like many other high-fidelity computational models, HiLAM had not previously been tested on real-life data.
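The core idea of a look-ahead trajectory model can be sketched in a few lines: simulated trajectories radiate from the agent's current location, and a heading is selected when its trajectory passes through the goal place field. This toy version is flat rather than hierarchical, and all parameters (number of headings, look-ahead range, goal field radius) are simplifying assumptions, not HiLAM's actual mechanism of grid cell phase manipulation.

```python
import math

def pick_heading(start, goal_center, goal_radius,
                 headings=36, lookahead=5.0, step=0.25):
    """Scan candidate headings from `start`; return the first heading
    (in radians) whose sampled look-ahead trajectory enters the circular
    goal place field, or None if no probe reaches the goal."""
    for i in range(headings):
        theta = 2 * math.pi * i / headings
        d = step
        while d <= lookahead:  # sample points along this heading
            x = start[0] + d * math.cos(theta)
            y = start[1] + d * math.sin(theta)
            if math.hypot(x - goal_center[0], y - goal_center[1]) <= goal_radius:
                return theta  # look-ahead probe reached the goal field
            d += step
    return None
```

In HiLAM the hierarchy serves to keep such probes efficient over large spaces: coarse-scale cells narrow down the direction before fine-scale cells refine it.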
In this work we combine RatSLAM and HiLAM so that their individual strengths complement each other: generating and maintaining stable spatial maps from real-life visual data (RatSLAM), and using the generated maps for goal-directed path planning in a biologically plausible manner (HiLAM).
Section snippets
Material and methods
The framework presented in this work combines two previously developed computational models for spatial mapping and navigation. While the RatSLAM model generates rectified odometry data, the Hierarchical Look-Ahead Trajectory Model (HiLAM) provides a mechanism for goal-directed navigation. We also show the scalability of HiLAM using odometry data extracted from noisy real-life visual information collected from a small remote-controlled vehicle referred to as the "agent".
Experimental procedure
Experiments were performed in two distinct environments, a small square rat arena and an outdoor area more than two orders of magnitude larger. The larger area enabled us to test the scalability of the HiLAM.
Results
In this section we present results from the vision-based self-motion estimation and place recognition processes, map formation, and navigation probes in the two environments. The main interaction between the two processes of the hybrid model is as follows. The RatSLAM process computes the agent's self-motion estimates in the form of odometry data derived from visual cues. The odometry data are then input to the HiLAM process as a corrected internal representation of the velocity vectors at
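The interface between the two processes can be sketched as simple dead reckoning: RatSLAM-style odometry readings (per-step translational speed and angular velocity) are integrated into a 2-D pose, standing in for the velocity signal that drives HiLAM's simulated spatially tuned cells. The function name, units, and reading format below are illustrative assumptions.

```python
import math

def integrate_odometry(pose, odometry):
    """Dead-reckon a 2-D pose (x, y, heading) from a sequence of
    odometry readings (speed, angular_velocity, dt): turn first,
    then translate along the updated heading."""
    x, y, heading = pose
    for speed, ang_vel, dt in odometry:
        heading += ang_vel * dt
        x += speed * dt * math.cos(heading)
        y += speed * dt * math.sin(heading)
    return (x, y, heading)
```

Because raw visual odometry drifts, it is the loop-closure-corrected RatSLAM estimates, not the raw integration shown here, that keep HiLAM's map consistent over long runs.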
Discussion
State-of-the-art goal-directed robotic navigation systems perform extremely well for limited durations and within relatively static environments. Higher-level living organisms, however, appear not to suffer from the degrading effects of persistent navigation over extended periods of time and in dynamic environments. The technical challenge is bridging the spatial representation that autonomous systems use and the spatial representation created by grid cells in the entorhinal cortex and place cells
Acknowledgments
This work was supported by the ONR MURI Grant N00014-10-1-0936, the ONR Grant N00014-09-1-0641 and an Australian Research Council Discovery Project DP1212775.
References (43)
- et al. (2009). Hippocampal replay of extended experience. Neuron.
- et al. (1991). Grey squirrels remember the locations of buried nuts. Animal Behaviour.
- et al. (2001). Temporally structured replay of awake hippocampal ensemble activity during rapid eye movement sleep. Neuron.
- et al. (2005). Spatial memory in the rat requires the dorsolateral band of the entorhinal cortex. Neuron.
- et al. (1992). Lesions of the rat postsubiculum impair performance on spatial tasks. Behavioral and Neural Biology.
- et al. (2013). OpenRatSLAM: An open source brain-based SLAM system. Autonomous Robots.
- et al. (2007). Learning in a geometric model of place cell firing. Hippocampus.
- et al. (2008). Conversion of a phase- to a rate-coded position signal by a three-stage model of theta cells, grid cells, and place cells. Hippocampus.
- et al. (1991). Experience-dependent modifications of hippocampal place cell firing. Hippocampus.
- (2011). Social influences on rat spatial choice. Comparative Cognition and Behavior Reviews.
- Grid cells and theta as oscillatory interference: Theory and predictions. Hippocampus.
- An oscillatory interference model of grid cell firing. Hippocampus.
- A goal-directed spatial navigation model using forward trajectory planning based on grid cells. The European Journal of Neuroscience.
- A biologically inspired hierarchical goal directed navigation model. Journal of Physiology, Paris.
- Reverse replay of behavioural sequences in hippocampal place cells during the awake state. Nature.
- Microstructure of a spatial map in the entorhinal cortex. Nature.
- Grid cell mechanisms and function: Contributions of entorhinal persistent spiking and phase resetting. Hippocampus.
- Awake hippocampal sharp-wave ripples support spatial memory. Science.
- Place cells, grid cells, attractors, and remapping. Neural Plasticity.
- Neural ensembles in CA3 transiently encode paths forward of the animal at a decision point. The Journal of Neuroscience.
- Robust conjunctive item-place coding by hippocampal neurons parallels learning what happens where. The Journal of Neuroscience.