The posterior parietal cortex: Sensorimotor interface for the planning and online control of visually guided movements
Introduction
What role does the posterior parietal cortex (PPC) play in visually guided behavior? This question has been the subject of much research since Vernon Mountcastle and colleagues described in elegant detail neural activity in the PPC related to movements of the eyes and limbs (Mountcastle, Lynch, Georgopoulos, Sakata, & Acuna, 1975). Although Mountcastle and colleagues interpreted this activity as serving largely movement functions, others interpreted similar activity as reflecting higher order sensory or attentional processes (Robinson, Goldberg, & Stanton, 1978). Using experiments designed to control for sensory and movement related activity, Andersen and colleagues showed that the PPC has both sensory and motor properties (Andersen, Essick, & Siegel, 1987). They proposed that the PPC was neither strictly sensory nor motor, but rather was involved in sensory-motor transformations. Findings since this time are consistent with this view, although not always interpreted as such (Bisley & Goldberg, 2003; Bracewell, Mazzoni, Barash, & Andersen, 1996; Calton, Dickinson, & Snyder, 2002; Colby & Goldberg, 1999; Gottlieb & Goldberg, 1999; Mazzoni, Bracewell, Barash, & Andersen, 1996; Powell & Goldberg, 2000; Snyder et al., 1997, 1998a, 2000; Zhang & Barash, 2000).
A good deal of research in recent years has focused on the lateral intraparietal area (LIP), which serves a sensory-motor function for saccadic eye movements. As with other areas of the brain, sensory attention and eye movement activation appear to overlap extensively in LIP (Corbetta et al., 1998; Kustov & Robinson, 1996). However, when sensory and motor vectors are dissociated explicitly, both sensory and motor-related activity are found in LIP (Andersen et al., 1987; Gnadt & Andersen, 1988; Zhang & Barash, 2000), though other tasks have shown that the prevalence of the latter increases as movement onset approaches (Sabes, Breznen, & Andersen, 2002). This suggests that LIP might best be thought of as a sensorimotor ‘interface’ for the production of saccades. By interface we mean a shared boundary between the sensory and motor systems where the ‘meanings’ of sensory and motor-related signals are exchanged. In this context, attention could play an important role in limiting activation to that portion of the sensorimotor map that corresponds to the most salient or behaviorally relevant object (Gottlieb, Kusunoki, & Goldberg, 1998).
It is currently unclear whether the PPC plays precisely the same role in the planning and control of arm movements as it does in eye movements. Although similarities in these two behaviors do exist, differences in the biomechanical properties of the eye and arm suggest that the planning and control of these behaviors are quite distinct (Soechting, Buneo, Herrmann, & Flanders, 1995), a fact that may be reflected even in the earliest stages of movement planning. Moreover, considerable differences exist in the neural circuitry subserving these two behaviors, even within the PPC. Strong eye movement related activation is typically restricted to regions of the inferior parietal lobule (IPL), i.e. 7a and LIP, while strong arm movement related activity can be found in both the IPL (7a) and the various subdivisions of the superior parietal lobule (SPL) (Battaglia-Mayer et al., 1998; Caminiti, Ferraina, & Johnson, 1996; Marconi et al., 2001), which include dorsal area 5 (PE), PEc, PEa, and the parietal reach region (PRR), which comprises the medial intraparietal area (MIP) and V6a (Fig. 1). In the remainder of this review, we will focus on the role of the SPL, specifically area 5 and PRR, in the planning and control of reaching. It will be argued that, despite strong differences in the biomechanics underlying eye and arm movements, area 5 and PRR serve an analogous function in reaching as LIP serves in saccades, i.e. that of an interface for sensory-motor transformations. This interface appears to be highly plastic, being modifiable by learning, expected value, and other cognitive factors (Clower et al., 1996; Musallam, Corneil, Greger, Scherberger, & Andersen, 2004). Moreover, we will present evidence that area 5 and PRR, and perhaps other parts of the SPL as well, play a role not only in the inverse transformations required to convert sensory information into motor commands but also in the reverse (‘forward’) process, i.e.
in integrating sensory input with previous and ongoing motor commands to maintain a continuous estimate of arm state. This state estimate is represented in an eye-centered frame of reference and can be used to update present and future movement plans.
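The forward-model idea described above can be illustrated with a toy state estimator. The following is a minimal sketch under assumed one-dimensional dynamics, not the authors' model: the hand-position estimate is advanced by an efference copy of each motor command and corrected whenever a visual sample of hand position arrives. All names, units, and the correction gain are illustrative assumptions.

```python
def estimate_arm_state(x0, motor_commands, visual_samples, gain=0.5):
    """Toy forward-model estimator of hand position (1-D, arbitrary units).

    x0: initial position estimate.
    motor_commands: per-step displacement commands (efference copy).
    visual_samples: {step: seen position} for steps with visual feedback.
    gain: weight on the visual correction (hypothetical parameter).
    """
    x_hat = x0
    estimates = []
    for step, u in enumerate(motor_commands):
        x_hat = x_hat + u                        # predict from the command
        if step in visual_samples:               # correct when vision arrives
            x_hat += gain * (visual_samples[step] - x_hat)
        estimates.append(x_hat)
    return estimates

# With no visual feedback, the estimate is pure dead reckoning on the
# efference copy; intermittent vision pulls the estimate back toward
# the seen hand position.
open_loop = estimate_arm_state(0.0, [1.0, 1.0, 1.0], {})
corrected = estimate_arm_state(0.0, [1.0, 1.0, 1.0], {1: 0.0})
```

The point of the sketch is only that a continuous state estimate can be maintained between (and in the absence of) visual samples, which is what an eye-centered estimate of arm state would require during open-loop reaching.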
It is useful at this point to explicitly define terms that will be used in the remainder of this review. In order to plan a reaching movement the brain must compute the difference between the position of the hand and the position of the target, i.e. “motor error”. Motor error can be defined in the motor system in at least two different ways: in terms of a difference in extrinsic or endpoint space, as depicted in Fig. 2, or in terms of a difference in intrinsic space, i.e. as a difference in joint angles or muscle activation levels. In the following section, we start with the assumption that motor error is defined in the PPC in extrinsic space, but we will return to the issue of intrinsic coordinates later in this review.
Hand and target position can each be defined with respect to a number of frames of reference; however, it is currently thought that in order to simplify the computation of motor error, both quantities are encoded at some point in the visuomotor pathway in the same frame of reference. Two possible schemes have been suggested (Fig. 2). In one scheme, target and hand position are coded with respect to the current point of visual fixation—we will refer to this coding scheme as an eye-centered representation, though others have used the terms ‘viewer-centered’, ‘gaze-centered’, or ‘fixation-centered’ to describe similar representations (Crawford, Medendorp, & Marotta, 2004; McIntyre, Stratta, & Lacquaniti, 1997; Shadmehr & Wise, 2005). In a second scheme, target and hand position are coded with respect to a fixed point on the trunk; in Fig. 2 this fixed point is at the right shoulder. We will refer to this representation as ‘body-centered’. As illustrated in Fig. 2, both schemes will arrive at the same motor error (M). However, with either scheme a difficulty arises in assigning a reference frame to M. Consider the case where target and hand position are encoded in eye-centered coordinates. Using conventions from mechanics, one could interpret M, the difference between the target and hand, as a ‘displacement vector’ in eye-centered coordinates. Alternatively, this same vector could be interpreted as a ‘position vector’ for the target in hand-centered coordinates. From a purely descriptive point of view, the distinction is arbitrary. However, from the point of view of the neural representation of sensory-motor transformations, this distinction is important and non-arbitrary. In the following sections we will show that some PPC neurons, i.e. those in PRR, appear to encode both target position and current hand position in eye-centered coordinates. As a result, activity in this area could be interpreted as encoding a ‘displacement vector in eye coordinates’. 
Other PPC neurons appear to encode reach-related variables without reference to the eye; for these neurons the term ‘target position vector in hand coordinates’ or, for brevity, ‘target in hand coordinates’ (Buneo, Jarvis, Batista, & Andersen, 2002) appears to be most appropriate. We will also show that some neurons in the PPC do not appear to represent spatial information in a single reference frame but instead are consistent with an encoding of reach-related variables in both eye and hand coordinates, suggesting they play a crucial role in transforming spatial information between these two reference frames.
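The equivalence of the two schemes in Fig. 2 is easy to verify numerically: because target and hand are expressed relative to the same reference point, that point cancels in the subtraction, leaving the same motor error M. A minimal sketch with made-up 2-D coordinates (the specific positions are arbitrary assumptions):

```python
import numpy as np

# Hypothetical locations in a common world frame (arbitrary units).
fixation = np.array([0.0, 0.0])    # current point of visual fixation
shoulder = np.array([-3.0, -4.0])  # fixed body reference (right shoulder)
target   = np.array([2.0, 5.0])
hand     = np.array([1.0, 1.0])

# Eye-centered scheme: both quantities expressed relative to fixation.
T_eye, H_eye = target - fixation, hand - fixation
M_eye = T_eye - H_eye

# Body-centered scheme: both quantities expressed relative to the shoulder.
T_body, H_body = target - shoulder, hand - shoulder
M_body = T_body - H_body

# The common reference point cancels: motor error M is identical.
assert np.allclose(M_eye, M_body)
```

This is precisely why, as noted above, the reference frame of M itself is ambiguous from a purely descriptive standpoint: the same vector falls out of either scheme.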
It is also important to reiterate at this point what is meant by ‘explicit’ and ‘distributed’ representations. As mentioned above, in order to plan a reaching movement both the position of the hand (H) and target (T) must be known. These two signals can be encoded by neurons in the brain in at least two different ways: separably and inseparably. In separable encodings the two variables H and T are ‘independent’ and can be recovered even after being integrated at the single cell level; in other words, target and hand position can be decoded separately from such a representation. With inseparable encodings, the two variables are encoded in a combined form, and are thus ‘dependent’ and cannot be separated. In the current context, a separable encoding would be a representation in which the response of a cell is a function of both target position and current hand position in the same reference frame, but is not a function of the difference between target and hand position. Stated mathematically:

f(T, H) = g(T) × h(H)  (1)

where T is target position and H is current hand position. For brevity, we will refer to neurons that encode reaches in this manner as ‘separable’ neurons and those that encode T and H inseparably (Eq. (2)) as ‘inseparable’ neurons. To illustrate what the responses of separable (and inseparable) neurons would look like, Fig. 3A depicts a hypothetical experiment in which a fixating monkey makes reaching movements from each of five horizontally arranged starting positions to each of five horizontally arranged targets located in the row directly above the row of starting positions. Movements made directly ‘up’ from each starting position are labeled in this figure with black vectors. Since the vertical component of the targets and starting positions does not vary in this experiment, activity can be analyzed in terms of horizontal components only. The colormaps in Fig.
3B show the responses of several idealized neurons in this experiment, for all combinations of horizontal target and hand position. Activity corresponding to the purely vertical movements shown in Fig. 3A is labeled with white vectors on the colormaps. The leftmost column shows 3 neurons that encode target and hand position separably, in eye coordinates. Each cell is tuned for a target location in the upper visual field, but one responds maximally when the target lies to the right (top cell), another at the center (middle cell), and the third to the left (bottom cell). These cells are also tuned for hand locations to the right, center, and left, respectively. In the PPC these responses are often described as a ‘gain field’, in the sense that variations in hand position do not change the tuning for target position but only the overall magnitude or ‘gain’ of the response, and vice versa. Although these neurons respond maximally for a given combination of hand and target position in eye coordinates, they do not in general provide information about motor error in extrinsic space. This information can be obtained, however, from a suitably large population of such neurons. We will touch on this point again later, but suffice it to say that a population-based representation of this type would be considered an implicit or ‘distributed’ representation of motor error in extrinsic space, in that the information can only be gleaned from a ‘read-out’ of the population.
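A gain-field response of this separable kind (Eq. (1)) can be sketched as Gaussian target tuning multiplied by a hand-position gain; the functional forms and parameters below are illustrative assumptions, not fits to any recorded cell. Shifting the hand rescales the response but leaves the preferred target location unchanged:

```python
import numpy as np

def separable_response(T, H, t_pref=1.0, h_slope=0.3):
    """Idealized separable neuron: g(T) * h(H), with Gaussian target tuning
    in eye coordinates and a linear hand-position gain (both hypothetical)."""
    g = np.exp(-0.5 * (T - t_pref) ** 2)     # target tuning g(T)
    h = 1.0 + h_slope * H                    # hand-position gain h(H)
    return g * h

targets = np.linspace(-2.0, 2.0, 41)
# The preferred target is the same for every hand position tested;
# only the overall gain of the response changes.
peaks = [targets[np.argmax(separable_response(targets, H))]
         for H in (-1.0, 0.0, 1.0)]
```

This captures the ‘gain field’ property described above: the tuning curve for target position keeps its shape and peak while hand position multiplicatively scales it.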
Neurons can also encode target and hand position inseparably. An example of inseparable coding would be a representation in which the response of a cell is a function of the difference between target and hand position. Stated mathematically:

f(T, H) = g(T − H)  (2)
The rightmost column of Fig. 3 shows the responses of three idealized neurons that encode target and hand position inseparably. These cells respond maximally for a planned movement straight up, but for all positions of the hand in the visual field. In contrast to the separable, eye-centered cells described earlier, these ‘hand-centered’ cells do provide explicit information about motor error in extrinsic space. Single neurons that code sensory or motor variables in this way can provide coarse estimates of a percept or a planned action, though populations of such neurons are required to refine these estimates in the face of neural noise (Pouget, Dayan, & Zemel, 2003). Moreover, although such explicit movement representations do appear to exist in the PPC and elsewhere, it is not necessary to have populations of neurons that encode motor error in this way; a distributed representation can in principle serve the same function (Andersen & Brotchie, 1992; Goodman & Andersen, 1989).
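By contrast, an inseparable, hand-centered cell (Eq. (2)) depends only on the difference T − H. In the same spirit as the sketch above, and again with purely illustrative tuning parameters, any target–hand pair with the same motor error drives such a cell identically:

```python
import math

def inseparable_response(T, H, m_pref=1.0):
    """Idealized inseparable neuron: Gaussian tuning for the motor error
    T - H (a hand-centered code); parameters are illustrative."""
    m = T - H
    return math.exp(-0.5 * (m - m_pref) ** 2)

# The same motor error (T - H = 1) reached from different hand positions
# in the visual field evokes the same response:
r_right = inseparable_response(T=2.0, H=1.0)
r_left = inseparable_response(T=0.0, H=-1.0)
```

This is the sense in which a single such neuron carries explicit, if coarse, information about motor error in extrinsic space: its firing constrains T − H directly, independent of where the hand lies.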
Section snippets
Spatial representations for reaching in the SPL
As stated above, motor error can be computed from either body-centered or eye-centered representations of target and hand position. For the past several years we have been conducting experiments aimed at uncovering the scheme that best accounts for the responses of SPL neurons during an instructed delay reaching task. We reasoned that if hand and/or target position are encoded in a particular frame of reference (say eye coordinates), then neural activity should not vary if these
Hand position in eye coordinates
A critical aspect of the direct transformation scheme is the coding of hand position in eye coordinates. However, from both a mathematical and intuitive standpoint, it is impossible to distinguish an encoding of hand position in eye coordinates from, say, an encoding of eye position in hand coordinates. In other words, a cell whose response could be described as coding target and hand position in eye coordinates:

f(T_E, H_E) = g(T_E) × h(H_E)

where T_E is the target position in eye coordinates and H_E is the hand
The PPC, online control, and forward models
In the experiments of Batista et al. (1999) and Buneo, Jarvis, et al. (2002), movements were performed ‘open loop’ with respect to visual feedback. However, reaching movements are generally made under visually closed-loop conditions. Could a direct transformation scheme work for such movements? Psychophysical studies in humans have clearly pointed to a role for the PPC in the rapid online updating of movements (Desmurget et al., 1999; Pisella et al., 2000). However, in order for the direct
Context-dependent visuomotor transformations
Although the notion of a direct transformation scheme makes intuitive sense and is supported by human psychophysical and monkey neurophysiological studies, it is unlikely that a single scheme can be used in all contexts. Another transformation scheme that has been put forth involves the progressive transformation of target information from retinal to head and ultimately body-centered coordinates. Motor error is then computed by comparing a body-centered representation of target position with a
Conclusion
In this review, we have presented evidence in support of the idea that the PPC acts as a sensorimotor interface for visually guided eye and arm movements. In the brain, and in particular within the PPC, this interface takes the form of a mapping, and in the context of arm movements, this mapping appears to be between representations of target and hand position in eye coordinates, and a representation of motor error in hand-centered coordinates. The mapping is ‘direct’ in the sense that it does
Acknowledgments
We wish to thank the James G. Boswell Foundation, the Sloan-Swartz Center for Theoretical Neurobiology, the National Eye Institute (NEI), the Defense Advanced Research Projects Agency (DARPA), the Office of Naval Research (ONR), and the Christopher Reeves Foundation for their generous support. We also thank Bijan Pesaran, Aaron Batista and Murray Jarvis for helpful discussions.
References (119)
Localization of objects in the peripheral visual-field. Behavioural Brain Research (1993).
Reaches to sounds encoded in an eye-centered reference frame. Neuron (2000).
Heterogeneity of extrastriate visual areas and multiple parietal areas in the macaque monkey. Neuropsychologia (1991).
A common network of functional areas for attention and eye movements. Neuron (1998).
Forward modeling allows feedback control for fast reaching movements. Trends in Cognitive Sciences (2000).
The nonvisual impact of eye orientation on eye-hand coordination. Vision Research (1995).
Static versus dynamic effects in motor cortex and area-5—comparison during movement time. Behavioural Brain Research (1985).
Forward models—supervised learning with a distal teacher. Cognitive Science (1992).
Internal models for motor control and trajectory planning. Current Opinion in Neurobiology (1999).
Multisensory spatial representations in eye-centered coordinates for reaching. Cognition (2002).
Gain modulation: A major computational principle of the central nervous system. Neuron.
Eye position effects on visual, memory, and saccade-related activity in areas LIP and 7a of macaque. The Journal of Neuroscience.
Spatial maps versus distributed representations and a role for attention. Behavioral and Brain Sciences.
Intentional maps in posterior parietal cortex. Annual Review of Neuroscience.
Encoding of spatial location by posterior parietal neurons. Science.
Neurons of area 7a activated by both visual stimuli and oculomotor behavior. Experimental Brain Research.
The influence of the angle of gaze upon the excitability of the light-sensitive neurons of the posterior parietal cortex. The Journal of Neuroscience.
A real-time state predictor in motor control: Study of saccadic eye movements during unseen reaching movements. Journal of Neuroscience.
Movement parameters and neural activity in motor cortex and area-5. Cerebral Cortex.
Parietal representation of hand velocity in a copy task. Journal of Neurophysiology.
Adaptation to a visuomotor shift depends on the starting posture. Journal of Neurophysiology.
The parietal reach region codes the next planned movement in a sequential reach task. Journal of Neurophysiology.
Reach plans in eye-centered coordinates. Science.
Early motor influences on visuomotor transformations for reaching: A positive image of optic ataxia. Experimental Brain Research.
Early coding of reaching in the parietooccipital cortex. Journal of Neurophysiology.
Neuronal activity in the lateral intraparietal area and spatial attention. Science.
Contribution of retinal versus extraretinal signals towards visual localization in goal-directed movements. Experimental Brain Research.
Gaze effects in the cerebral cortex: Reference frames for space coding and action. Experimental Brain Research.
Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. Journal of Neurophysiology.
Motor intention activity in the macaque's lateral intraparietal area. II. Changes of motor plan. Journal of Neurophysiology.
Head position signals used by parietal neurons to encode locations of visual stimuli. Nature.
Cortical networks for control of voluntary arm movements under variable force conditions. Cerebral Cortex.
Direct visuomotor transformations for reaching. Nature.
Postural dependence of muscle actions: Implications for neural control. Journal of Neuroscience.
Capturing the frame of reference of shoulder muscle forces. Archives Italiennes de Biologie.
Parieto-frontal coding of reaching: An integrated framework. Experimental Brain Research.
Non-spatial, motor-specific activation in posterior parietal cortex. Nature Neuroscience.
The sources of visual information to the primate frontal lobe: A novel role for the superior parietal lobule. Cerebral Cortex.
Viewer-centered and body-centered frames of reference in direct visuomotor transformations. Experimental Brain Research.
Modest gaze-related discharge modulation in monkey dorsal premotor cortex during a reaching task performed with free fixation. Journal of Neurophysiology.
Role of posterior parietal cortex in the recalibration of visually guided reaching. Nature.
Space and attention in parietal cortex. Annual Review of Neuroscience.
Visual-motor transformations required for accurate and kinematically correct saccades. Journal of Neurophysiology.
Curvature of visual space under vertical eye rotation: Implications for spatial vision and visuomotor control. Journal of Neuroscience.
Spatial transformations for eye-hand coordination. Journal of Neurophysiology.
Stimulation of the posterior parietal cortex interferes with arm trajectory adjustments during the learning of new dynamics. Journal of Neuroscience.
Efficient computation and cue integration with noisy population codes. Nature Neuroscience.
Role of the posterior parietal cortex in updating reaching movements to a visual target. Nature Neuroscience.