Research Article | Confirmation | Sensory and Motor Systems

Interlimb Generalization of Learned Bayesian Visuomotor Prior Occurs in Extrinsic Coordinates

Christopher L. Hewitson, Paul F. Sowman and David M. Kaplan
eNeuro 23 July 2018, 5 (4) ENEURO.0183-18.2018; DOI: https://doi.org/10.1523/ENEURO.0183-18.2018
Author affiliations (all authors): Department of Cognitive Science; ARC Centre of Excellence in Cognition and its Disorders; and Perception in Action Research Centre, Macquarie University, Sydney, 2109, Australia


Abstract

Recent work suggests that the brain represents probability distributions and performs Bayesian integration during sensorimotor learning. However, our understanding of the neural representation of this learning remains limited. To begin to address this, we performed two experiments. In the first experiment, we replicated the key behavioral findings of Körding and Wolpert (2004), demonstrating that humans can perform in a Bayes-optimal manner by combining information about their own sensory uncertainty and a statistical distribution of lateral shifts encountered in a visuomotor adaptation task. In the second experiment, we extended these findings by testing whether visuomotor learning occurring during the same task generalizes from one limb to the other, and relatedly, whether this learning is represented in an extrinsic or intrinsic reference frame. We found that the learned mean of the distribution of visuomotor shifts generalizes to the opposite limb only when the perturbation is congruent in extrinsic coordinates, indicating that the underlying representation of learning acquired during training is available to the untrained limb and is coded in an extrinsic reference frame.

  • Bayesian integration
  • interlimb generalization
  • motor learning
  • sensorimotor learning
  • transfer
  • visuomotor adaptation

Significance Statement

Generalization provides unique insights into the motor learning process. However, this type of learning has typically been investigated using fixed or deterministic perturbations and noise-free feedback information, which are not naturalistic. Here, we replicate important findings indicating that information is integrated in a Bayes-optimal manner during sensorimotor learning under uncertainty. We then extend these findings by showing that this learning generalizes to the opposite limb. These results have implications for our understanding of the neural mechanisms of motor learning, as well as practical applications in sports training and motor rehabilitation.

Introduction

Mounting neural, behavioral, and computational evidence suggests that the brain encodes probability distributions and performs probabilistic or Bayesian inference (Rao et al., 2002; Knill and Pouget, 2004; Rao, 2004; Ma et al., 2006; Doya, 2007; Pouget et al., 2013). The Bayesian coding hypothesis (Knill and Pouget, 2004) has been tested primarily in the context of perception (Knill and Richards, 1996; Rao et al., 2002; Weiss et al., 2002; Kersten and Yuille, 2003; Adams et al., 2004; Stocker and Simoncelli, 2006) and multisensory integration (van Beers et al., 1996, 1999; Alais and Burr, 2004; Ernst and Banks, 2002; Ma et al., 2006, Ma and Pouget, 2008; Beierholm et al., 2009; Fetsch et al., 2009, 2011, 2013). However, it has also been investigated in studies of sensorimotor learning, albeit to a lesser extent (Körding and Wolpert, 2004; Tassinari et al., 2006; Izawa and Shadmehr, 2008; Fernandes et al., 2012, 2014). In a seminal study, Körding and Wolpert (2004) demonstrate that subjects can learn to adapt their reaches to the mean of a probability distribution of visual shifts encountered in a modified visuomotor adaptation paradigm and, importantly, can regulate their dependence on this learned distribution according to the current level of sensory uncertainty in the feedback they are provided. This pattern of results is consistent with subjects performing Bayesian estimation during sensorimotor learning. Surprisingly, no subsequent studies have replicated this important finding and only a few have sought to test related questions or extend the paradigm (Wei and Körding, 2010; Fernandes et al., 2012, 2014). In this article, we report on an approximate replication of the original Körding and Wolpert (2004) study and an extension to the context of interlimb generalization (IG).

Generalization, which refers to the process by which experience or training in one context changes performance in another, provides a useful window into the representational changes underlying various forms of sensorimotor learning (Thoroughman and Shadmehr, 2000; Poggio and Bizzi, 2004; Shadmehr, 2004; Paz and Vaadia, 2009). Sensorimotor learning has been shown to generalize across similar tasks or conditions using the same limb (intralimb generalization) and across limbs (IG). The extent to which learning generalizes is typically thought to reflect a common neural representation. By evaluating error patterns during generalization, inferences can be made about the reference frame for movement planning and control (Krakauer et al., 1999; Shadmehr, 2004).

With respect to intralimb generalization, it has been repeatedly shown that subjects who learn to adapt their movements in response to altered visual feedback for a restricted set of movement directions can generalize this learning to untrained directions (Bedford, 1989; Flanagan and Rao, 1995; Ghilardi et al., 1995; Wolpert et al., 1995; Ghahramani et al., 1996; Krakauer et al., 1999, 2000; Vetter et al., 1999; Malfait et al., 2002; Wu and Smith, 2013). A general finding is that visuomotor learning is represented in extrinsic (e.g., screen-based) coordinates (Cunningham and Welch, 1994; Imamizu and Shimojo, 1995; Krakauer et al., 1999, 2000; Vetter et al., 1999; Ghez and Krakauer, 2000). IG studies similarly indicate that visuomotor perturbations are represented and learned in extrinsic coordinates (Wang and Sainburg, 2003, 2004). However, several recent studies indicate that a combination or mixture of reference frames may be involved (Sober and Sabes, 2003, 2005; McGuire and Sabes, 2009; Brayanov et al., 2012; Carroll et al., 2014; Poh et al., 2016).

Interestingly, Körding and Wolpert (2004) tested neither form of generalization in their original study, but instead only probed whether subjects could learn the initial training task. Relatively little is known about how prior learning of a stochastic visuomotor perturbation involving a probability distribution of visuomotor rotations or shifts generalizes (Tassinari et al., 2006; Fernandes et al., 2012, 2014). In one of the most pertinent studies to date, Fernandes et al. (2012) investigated how different learned visuomotor priors generalize to new reach directions by having separate groups of subjects adapt to different distributions of visuomotor rotations with the same mean (30°) but different standard deviations (SD = 0°, 4°, or 12°). They found that learning was slower and less complete when the SD of the imposed distribution was higher, but interestingly the generalization curves were unaffected. In a subsequent study, Fernandes et al. (2014) replicated their earlier intralimb generalization findings and also showed that reliance on visual feedback about the current perturbation (the likelihood distribution) is greater when the prior distribution of visuomotor rotations imposed during training is wider and therefore associated with more uncertainty. Despite these investigations, generalization of a statistical prior across limbs has not been tested.

In the current study, we test the hypothesis that the mean of a distribution of stochastic visuomotor perturbations learned with one limb generalizes to the other limb. Based on analogous studies involving deterministic visuomotor perturbations, we predict that the representation of this learning is encoded in extrinsic coordinates.

Materials and Methods

Participants

A total of 35 right-handed subjects (22 males, 13 females, age 17–49 years) with normal or corrected to normal vision and no history of motor impairments participated in the experimental study. All subjects gave informed consent before the experiment and were either paid and recruited from the University’s Cognitive Science Participant Register or were University students participating for course credit. All experimental protocols were approved by the University’s Human Research Ethics Committee (protocol number: 5201600282). Subjects were randomly assigned to one of three experimental groups. Seven subjects participated in experiment 1, which consisted of a pseudo-replication of Körding and Wolpert (2004)’s stochastic visuomotor adaptation task. Fourteen subjects participated in experiment 2, which sought to test IG of visuomotor learning using the same basic task from experiment 1, but with an additional set of task conditions designed to test the extent to which learning with one limb is available to the untrained limb and the nature of the reference frame in which the initial learning occurs. Seven subjects also participated in an additional control for experiment 1, which consisted of a variation of the basic visuomotor adaptation task used in experiment 1.

Experimental procedures

A unimanual KINARM endpoint robot (BKIN Technologies) was used in all experiments (Fig. 1). The KINARM robot has a single graspable manipulandum that permits unrestricted arm movement in a horizontal 2D plane (the movement plane). A projection-mirror system facilitates presentation of visual stimuli that appear in the movement plane. Subjects received visual feedback about their hand position via a cursor (solid white circle, 1 cm in diameter) that was controlled in real time by moving the manipulandum. Mirror placement and an opaque apron attached to the participant ensured that visual feedback from the real hand was not available for the duration of the experiment.

Figure 1.

A–F, Experimental paradigm. G, Experimental workspace with example hand and cursor paths shown for a representative trial when a 2-cm lateral visual shift is applied. Dashed white lines indicate feedback windows. H, Midpoint feedback conditions with different amounts of visual uncertainty. Panels G, H after Körding and Wolpert (2004).

Although Körding and Wolpert (2004) had subjects perform optically-tracked pointing movements on a horizontal table surface rather than reaches with a robotic manipulandum, the task kinematics in the current study were closely matched. The task dynamics were also similar since the small changes produced by the manipulandum with respect to inertia and friction are negligible compared to that of the arm itself during unrestricted reaching and pointing movements. More generally, robotic manipulanda have been successfully used to investigate visuomotor adaptation in a number of previous studies (Saijo and Gomi, 2010; Hayashi et al., 2016; Leow et al., 2017) including studies highly similar to the current one (Wei and Körding 2010).

Subjects were instructed to perform fast and accurate goal-directed reaching movements with the dominant (right) arm using cursor feedback whenever it was available. Reaches were from a start target (solid red circle, 1 cm in diameter) located at the center of the workspace to a single reach target (solid blue circle, 1 cm in diameter) located 20 cm away (Fig. 1G). When subjects moved the cursor within the boundaries of the start target, its color changed from blue to red and the reach target appeared, indicating the start of a trial. Subjects were free to reach at any time after the start target color changed. Once the cursor exited the start target, cursor feedback was extinguished and laterally shifted to the right of the true hand position (positive along the x-axis) by an amount drawn at random on each trial from a Gaussian distribution with a mean of 1 cm and an SD of 0.5 cm (the true prior). At the midpoint of the movement, displaced cursor feedback was provided for 100 ms (midpoint feedback).

To test whether Bayesian integration occurs during sensorimotor learning, following Körding and Wolpert (2004), the reliability of the sensory feedback provided about the true cursor position at the reach midpoint was varied by introducing different amounts of visual noise or blur on each trial. Changing the degree of uncertainty associated with the current sensory evidence (the likelihood) allowed us to assess the subjects' reliance on their previously experienced distribution of shifts (the prior). One of four visual uncertainty conditions (σ0, σM, σL, σ∞; Fig. 1H) was selected at random on each trial according to a 3:1:1:1 ratio previously used by Körding and Wolpert (2004). In the zero uncertainty condition (σ0), midpoint feedback was a single white sphere (1 cm in diameter), identical to the initial cursor. In the moderate uncertainty condition (σM), midpoint feedback was one of ten randomly generated point clouds composed of 50 small translucent white spheres (0.2 cm in diameter) distributed as a two-dimensional Gaussian with an SD of 1 cm and a mean centered over the true (displaced) cursor position on the current trial. In the large uncertainty condition (σL), everything was the same as in the moderate uncertainty condition (σM) except that the point clouds had an SD of 2 cm. In the unlimited uncertainty condition (σ∞), no midpoint feedback was provided. Cursor feedback was again extinguished for the remainder of the reach to the end target. Cursor feedback at the endpoint of the reach (endpoint feedback) was provided only in the zero uncertainty condition (σ0), for a duration of 100 ms. After movement offset, there was a delay of 150 ms before the start target reinitialized the next trial by changing color from red back to blue. The maximum allowable time to complete a reach was 4000 ms. Irrespective of the cursor's position along the x-axis, if subjects did not cross the lower bound of the end target along the y-axis (Fig. 1G, dashed line), the trial would time out. Timeouts were signaled by the disappearance of the end target and the start target changing back to blue.
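For concreteness, the trial schedule described above can be sketched in a few lines of MATLAB. This is an illustrative sketch rather than the task code used in the experiment; the variable names, block length, and midpoint y-coordinate are assumptions.

% Sketch (not the authors' task code) of one 1080-trial block: per-trial lateral
% shifts drawn from the true prior and uncertainty conditions in a 3:1:1:1 ratio.
nTrials   = 1080;                 % one block
shiftMean = 1.0;                  % cm, mean of the imposed prior
shiftSD   = 0.5;                  % cm, SD of the imposed prior
lateralShift = shiftMean + shiftSD .* randn(nTrials, 1);     % per-trial shift (cm)

condLabels = [repmat("sigma0", 540, 1); repmat("sigmaM", 180, 1); ...
              repmat("sigmaL", 180, 1); repmat("sigmaInf", 180, 1)];
condLabels = condLabels(randperm(nTrials));                  % randomize trial order

% Midpoint feedback for one moderate-uncertainty trial: a 2D Gaussian point cloud
% (SD = 1 cm) centered on the displaced cursor position (assumed midpoint y = 10 cm).
cloudSD = 1.0; nDots = 50;
cloudXY = [lateralShift(1), 10] + cloudSD .* randn(nDots, 2);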

Experiment 1: Bayesian sensorimotor learning (BSL)

The primary aim of experiment 1 was to test whether subjects learn to compensate for the imposed stochastic visuomotor perturbation (lateral shifts drawn from a distribution with fixed mean and SD) so that we could then probe whether, and the conditions under which, this learning generalized to the untrained limb. A secondary aim was to provide a close or approximate replication of the findings reported by Körding and Wolpert (2004). Before the experiment started, each subject performed 10 familiarization trials in which cursor feedback was always provided and no lateral shift was imposed. Further, for two of the seven BSL subjects tested, an additional Baseline task was run to measure each subject's baseline motor variability and directional biases when reaching with each hand. The Baseline task used the same basic paradigm as the other experiments and consisted of the following sequence: 10 right hand (RH) feedback trials (cursor feedback always provided; no lateral shift imposed), 10 RH no-feedback trials (no cursor feedback provided; no lateral shift imposed), 10 left hand (LH) feedback trials (cursor feedback always provided; no lateral shift imposed), and 10 LH no-feedback trials (no cursor feedback provided; no lateral shift imposed). After completing the Familiarization and Baseline tasks, all subjects completed 2160 trials of the task with their RH (Fig. 2A,B). To preserve the 3:1:1:1 ratio between visual feedback conditions, we ran 1080-trial blocks (540 σ0 : 180 σM : 180 σL : 180 σ∞). For the purposes of comparison with Körding and Wolpert (2004), and comparison with data from the IG experiment (experiment 2), we nominally defined the training phase as the first 1080 trials in each session and the testing phase as the second 1080 trials in each session (Fig. 2A). There were no objective differences between these phases of the experiment.

Figure 2.

Experimental design. A, In experiment 1, training and testing phases were nominally defined as the first and last 1080 trials, respectively. B, C, In experiment 2, the training phase consisted of 1080 RH trials followed by a testing phase of 1080 LH trials. In the congruent-extrinsic (CE) condition, the imposed visuomotor perturbation was a rightward lateral shift for both RH and LH trials. In the congruent-intrinsic (CI) condition, the imposed visuomotor perturbation was a rightward lateral shift for RH trials and a leftward lateral shift for LH trials, both of which require elbow flexion. The mean endpoint (x̄) for trials 980–1080 (RH late training) was used to compute the percentage of adaptation. The percentage of IG was computed by dividing the mean endpoint (x̄) for trials 1080–1180 (LH early testing) by the mean endpoint (x̄) for trials 980–1080 (RH late training).

Experiment 2: IG

The aim of experiment 2 was to build on the results of experiment 1, and test whether the learning exhibited in the BSL task generalizes from one limb to the other. Further, we were interested in testing whether the initial visuomotor learning that occurs during training is represented in extrinsic or intrinsic coordinates. Like experiment 1, experiment 2 started with 10 trials of a familiarization task in which cursor feedback was always provided and no lateral shift was imposed. After completing a training phase with their RH (1080 trials), subjects completed a testing phase (1080 trials) using their LH in which they experienced cursor feedback sampled from the same Gaussian distribution as experienced previously with the RH (mean of 1 cm, SD of 0.5 cm; Fig. 3D). To assess the reference frame in which transfer occurs, seven subjects experienced a congruent-extrinsic (CE) condition in which the cursor was shifted in the same visual direction across both the training phase with the right arm (Fig. 1C) and the testing phase with the left arm (Fig. 1D). By design, the imposed visuomotor perturbation was congruent in extrinsic (screen-based) coordinates (rightward lateral shift), yet incongruent in intrinsic coordinates (requiring an elbow joint flexion in the right arm and an elbow joint extension in the left to compensate for the shift). Another seven subjects experienced a congruent-intrinsic (CI) condition in which the cursor was shifted in opposite visual directions for each arm, and more specifically a rightward shift for the right arm during the training phase (Fig. 1E) and leftward shift for the left arm during the testing phase (Fig. 1F). This time, the visuomotor perturbation imposed across both the training and testing phases was congruent in intrinsic coordinates (requiring joint flexion in both right and left arms), yet incongruent in extrinsic coordinates.

Figure 3.

Computational models considered for experiment 1. The average lateral cursor deviation from the target (cursor error) as a function of the imposed shift for the models. Full compensation model (A), minimal mapping model (B), and Bayesian estimation model (C). (Transparent bands indicate the relative degree of variability in estimation.) The colors of the linear fits correspond to the visual condition (matching Fig. 1H), as do the bands of variability in C. D, The experimentally imposed prior distribution of shifts is Gaussian with a mean of 1 cm (in black). The probability distribution of possible visually experienced shifts under the clear, moderate, and large uncertainty conditions are represented with solid lines (colors as in Fig. 1H) for a trial in which the imposed shift is 2 cm. The Bayes-optimal estimate of the shift that combines the prior with the evidence is represented by dashed lines (colors also as in Fig. 1H). After Körding and Wolpert (2004).

Data analysis

Kinematic data, including hand position and velocity, were recorded for all trials using BKIN's Dexterit-E experimental control and data acquisition software (BKIN Technologies). Hand position data were recorded at 200 Hz and logged in Dexterit-E. Custom scripts for data processing and analysis were written in MATLAB. Hand position, velocity, and cursor shift values were extracted from the c3d files in MATLAB. A combined spatial- and velocity-based criterion was used to determine movement offset and the corresponding reach endpoints (Georgopoulos et al., 1982; Moran and Schwartz, 1999; Scott et al., 2001). Specifically, movement offset was defined as the first point in time t at which the movement dropped below a minimum velocity threshold (<5% of peak velocity) after a minimum reach of 19 cm from the start target along the y-axis. Reach endpoints were defined as the x and y values at time t. The additional spatial criterion ensured that data from the start of the trial (also <5% of peak velocity) were not included in subsequent analysis.
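The movement offset criterion just described can be expressed as a short MATLAB function. This is a minimal sketch with assumed variable names (hand position in cm relative to the start target, speed in cm/s), not the authors' analysis scripts.

% Reach endpoint from the combined spatial- and velocity-based offset criterion.
% handX, handY: hand position samples (cm, y measured from the start target);
% speed: hand speed samples (cm/s); all sampled at 200 Hz.
function [endX, endY] = reachEndpoint(handX, handY, speed)
    vThresh    = 0.05 * max(speed);            % 5% of peak velocity
    farEnough  = handY >= 19;                  % at least 19 cm from start along y
    slowEnough = speed < vThresh;              % below the velocity threshold
    idx  = find(farEnough & slowEnough, 1);    % first sample meeting both criteria
    endX = handX(idx);                         % endpoint x (primary measure)
    endY = handY(idx);
end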

Since the visual shift was systematically applied along the x-axis, the primary measure of the subject's estimate of the visuomotor perturbation (the estimated prior) was their mean hand position (x-coordinate only) at the end of the reach (henceforth, endpoint) for all reaches completed during the unlimited uncertainty (σ∞) condition. Because no midpoint feedback is provided during unlimited uncertainty (σ∞) trials, these provide a relatively uncontaminated measure of the estimated prior. In experiment 1, the mean endpoint was computed across the entire testing phase (trials 1080–2160; Fig. 2A). In experiment 2, the degree of generalization was assessed by comparing the mean endpoint (x̄) at the end of the training phase (trials 980–1080; late training phase) with the mean endpoint (x̄) at the start of the testing phase (trials 1080–1180; early testing phase) for the respective groups.

The second measure of statistical learning was cursor deviation from the target at movement offset as a function of the applied shift (cursor error). We compared the slopes of the linear fits for these plots, stratified by visual uncertainty condition, to determine the degree to which subjects compensated for visual uncertainty by changing their reliance on their stored prior (Körding and Wolpert, 2004). In experiment 1, cursor error as a function of shift (slope) was determined by averaging across the entire testing phase (trials 1080–2160). If subjects compensate fully for the visual feedback, then the average deviation from the target for all conditions in which visual feedback is provided should be zero. If, however, subjects integrate both the learned prior and the current visual evidence while performing the task, then endpoints should be biased toward the mean of the prior and should depend on sensory uncertainty (Körding and Wolpert, 2004). Accordingly, reach endpoints should be more biased toward the mean of the prior when sensory uncertainty is high (reflecting a higher weighting of the prior) than when sensory uncertainty is low (reflecting a stronger reliance on the sensory evidence and a lower weighting of the prior). Hence, if subjects perform Bayesian estimation, a linear relationship is predicted between cursor error and the imposed shift. More specifically, the linear fit should intercept the abscissa at the mean of the prior (1 cm) and have a slope that increases as a function of visual uncertainty.
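A minimal MATLAB sketch of this slope analysis follows; shift, cursorErr, and cond are assumed per-trial vectors (imposed shift, cursor error at movement offset, and uncertainty condition label, respectively).

% For each visual uncertainty condition, regress cursor error on the imposed shift.
conds  = ["sigma0" "sigmaM" "sigmaL" "sigmaInf"];
slopes = zeros(1, numel(conds));
for k = 1:numel(conds)
    sel = (cond == conds(k));                      % trials from this condition
    p   = polyfit(shift(sel), cursorErr(sel), 1);  % linear fit: error = p(1)*shift + p(2)
    slopes(k) = p(1);                              % slope indexes reliance on the prior
end
% Bayesian estimation predicts slopes that grow with visual uncertainty and fits
% that cross zero error near the prior mean (1 cm).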

A repeated measures ANOVA with planned pairwise comparisons was used to analyze mean endpoints across all subjects within experimental groups and the slopes for all experiments. Mauchly's test was used to assess the sphericity of the repeated measures effect of visual condition, as it constitutes a four-level factor. If sphericity was violated, Greenhouse–Geisser degrees of freedom corrections were applied. The significance level for all non-corrected contrasts was α = .05. Statistical analysis was performed using SPSS v22.0 for Windows.
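The reported statistics were computed in SPSS; for readers working in MATLAB, an equivalent repeated measures analysis could be sketched as follows using the Statistics and Machine Learning Toolbox (the slopes matrix and its column names are assumptions).

% slopes: nSubjects-by-4 matrix of per-subject slopes, one column per condition.
t  = array2table(slopes, 'VariableNames', {'s0', 'sM', 'sL', 'sInf'});
w  = table((1:4)', 'VariableNames', {'Condition'});
rm = fitrm(t, 's0-sInf ~ 1', 'WithinDesign', w);   % repeated measures model
mauchly(rm)                                        % sphericity test for the 4-level factor
ranova(rm)                                         % includes Greenhouse-Geisser corrected p values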

The same dependent measures were also used to test IG in experiment 2. The predictions of experiment 2 relate to the reference frame in which the initial learning occurred. If the learning that occurs during the training phase is represented in an extrinsic reference frame, this predicts that generalization will be relatively strong in the CE condition and relatively weak in the CI condition. If the learning that occurs during the training phase is represented in an intrinsic reference frame, this predicts that generalization will be relatively strong in the CI condition and relatively weak in the CE condition. IG was quantified according to the following generalization equation (Shadmehr and Mussa-Ivaldi, 1994; Wang and Sainburg, 2005; Brayanov et al., 2012):

%IG = (x̄ early testing / x̄ late training) × 100 (1)
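As a worked sketch of Equation 1 (assumed variable names), the per-group computation amounts to a ratio of mean endpoints; for example, the CE group means reported in Results (-1.22 cm for early LH testing, -1.23 cm for late RH training) give roughly 99%, in line with the ~98% generalization reported there.

% Percentage IG from mean x endpoints (assumed variables holding per-trial endpoints).
xLateTraining = mean(endX_RH(980:1080));   % RH late training mean endpoint (cm)
xEarlyTesting = mean(endX_LH(1:100));      % LH early testing mean endpoint (cm)
percentIG = 100 * (xEarlyTesting / xLateTraining);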

Finally, it is important to note that since we were primarily interested in assessing the degree of generalization of the learned prior, only endpoints from the σ∞ trials were used in the analysis (as in experiment 1). In these trials, the influence of the prior is relatively uncontaminated by current sensory evidence (see above, Experimental procedures).

Model predictions

For experiment 1, following Körding and Wolpert (2004), we considered three models of sensorimotor integration reflecting different computational strategies that subjects could use to reach accurately to the target on the basis of the visual feedback provided. One possibility is that subjects fully compensate for the sensed lateral shift (full compensation model; Fig. 3A). According to this model, increasing the uncertainty of the feedback for an imposed shift would increase endpoint variability (variance) without changing the mean. Importantly, this model does not require subjects to estimate either visual uncertainty or the prior distribution of applied shifts. The minimal mapping model involves an iterative mapping from visual feedback about cursor error to an estimate of the imposed shift. This crucial error signal can be reduced over repeated trials, and an accurate estimate of the shift can be attained. While this model predicts a mean endpoint of 1 cm to the left of the target (for a 1 cm rightward shift), indicating that the mean of the prior had been learned, it does not require an explicit representation of either the prior distribution or visual uncertainty (Körding and Wolpert, 2004). All that is required to learn this mapping is information about cursor error at the end of the movement. However, in our paradigm, cursor error is only provided for the clear feedback condition (σ0). Therefore, a mapping may only be learned based on this condition and then applied to all other conditions (σM, σL, σ∞; hence the term minimal, for minimal condition mapping). Importantly, the minimal mapping model predicts a compensation pattern that is the same for all trials, regardless of visual uncertainty (Fig. 3B). The final model we considered is the Bayesian estimation model, according to which subjects use information about the prior distribution and the uncertainty associated with the visual feedback to estimate the imposed shift. The posterior probability distribution can be obtained by applying Bayes' rule as follows:

p(x_shift | x_sensed) = p(x_sensed | x_shift) p(x_shift) / p(x_sensed) (2)

where x_shift is the imposed shift, x_sensed is the sensed shift (the visual evidence), and p(x_shift) is the prior distribution of shifts. Assuming that the noise of each measurement is independent and Gaussian (Fig. 3D), the optimal estimate of the imposed shift is a sum of the mean of the prior (μ_prior) and the sensed feedback position (x_sensed), weighted by their relative variances (σ²_prior and σ²_visual, respectively):

x̂_shift = w_prior μ_prior + w_visual x_sensed (3)

where w_prior = σ²_visual/(σ²_prior + σ²_visual) and w_visual = σ²_prior/(σ²_prior + σ²_visual) are the weightings (degrees of influence) attributed to the prior and the visual information according to their respective variances. Accordingly, the joint variance of the posterior estimate (σ²_estimate) is given by:

σ²_estimate = (σ²_prior σ²_visual)/(σ²_prior + σ²_visual) (4)

The Bayesian estimation model predicts that as visual uncertainty increases, the subject's estimate of the imposed shift moves away from the sensed shift and tends toward the mean of the learned prior distribution (Fig. 3D). For example, consider an imposed shift of 2 cm. Given sensory uncertainty, there are multiple shifts that could produce a sensed shift of ∼2 cm (i.e., within the range of 1.8–2.2 cm). However, if visual uncertainty is a function of Gaussian noise on the visual feedback, then, according to the Bayesian model, the most probable shift is <2 cm, due to the influence of the learned prior. Hence, the estimated shift will tend toward the prior by an amount that depends on both the prior distribution and the degree of uncertainty in the visual feedback (Fig. 3D). Furthermore, without visual feedback (σ∞), the estimate should approximate the mean of the learned prior (because the likelihood distribution is flat).
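The pull of the estimate toward the prior can be illustrated with a short MATLAB sketch of Equations 3 and 4. The visual noise values assigned to each condition below are illustrative assumptions, not fitted parameters.

% Bayes-optimal estimate of a 2 cm sensed shift under increasing visual noise.
muPrior  = 1.0;  sdPrior = 0.5;            % learned prior over shifts (cm)
xSensed  = 2.0;                            % sensed shift on this trial (cm)
sdVisual = [0.1 1.0 2.0];                  % assumed noise for sigma0, sigmaM, sigmaL

wPrior  = sdVisual.^2 ./ (sdPrior^2 + sdVisual.^2);    % weight on the prior (Eq. 3)
wVisual = sdPrior^2  ./ (sdPrior^2 + sdVisual.^2);     % weight on the evidence
xHat    = wPrior .* muPrior + wVisual .* xSensed;      % estimates: ~1.96, 1.20, 1.06 cm
sdHat   = sqrt((sdPrior^2 .* sdVisual.^2) ./ (sdPrior^2 + sdVisual.^2));  % Eq. 4
% As visual noise grows, the estimate moves toward the 1 cm prior mean; with no
% feedback (sigmaInf) it defaults to the prior mean.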

Based on the previous results of Körding and Wolpert (2004), we predicted that subjects would not only learn the prior distribution of imposed shifts but would apply it in a fashion consistent with the Bayesian estimation model. Accordingly, we predicted that the (sign-inverted) mean endpoint across the entire testing block (trials 1080–2160) would closely approximate the mean of the learned prior of 1 cm, and that subjects would integrate the degree of visual uncertainty when estimating the imposed shift. It was also expected that cursor error would increase as a function of increasing visual uncertainty, as depicted in Figure 4A, where increasing error is indicated by a larger slope. That is, subjects should estimate the imposed shift with greater accuracy during trials in which visual feedback is more reliable, with accuracy decreasing across less reliable visual feedback conditions (accuracy: σ0 > σM > σL > σ∞).

Figure 4.

Effect of visual uncertainty. A, Cursor error at the end of the trial as a function of the imposed shift for a representative subject. Colors as in Figure 1H. Values represent Cartesian (screen) coordinates. Horizontal dotted lines indicate the full compensation model prediction and diagonal dashed lines indicate the minimal mapping model prediction. Solid lines provide the Bayesian estimation model fits to the data as a function of sensory uncertainty. Due to trial scheduling statistics, applied shift values differ slightly across each subject. To reflect this difference, error bars denote SD instead of SEM. Importantly, every subject experienced the same overall statistical distribution of shifts during training and testing. B, Slopes of the linear fits for all subjects in experiment 1. The first bar in each grouping corresponds to the subject represented in panel A.

Results

Experiment 1

The mean endpoint (x̄) across the experimental group was -1.51 ± 0.15 cm (mean ± SD) to the left of the target, indicating that subjects had learned to compensate for the average experienced shift of +1 cm (the mean of the imposed prior) over the ensemble of trials. As full compensation for the average shift would be -1 cm, the observed mean overshoot of -0.51 cm was unexpected. One possible explanation for this relates to the size of the cursor relative to that of the reach target and the associated spatial tolerance built into correct trials. The use of a reach target 2 cm in diameter and a cursor 1 cm in diameter meant that although subjects were instructed to reach as accurately to the target as possible, all trials in which the cursor stopped anywhere within the circumference of the target were counted as correct. Any pre-existing bias in any direction up to ±1 cm (the radius of the target) might therefore remain uncorrected throughout the experiment. To determine whether this was the case, we collected baseline reach data for the last two of the seven subjects in experiment 1 and performed baseline corrections on their endpoint data. The baseline-adjusted mean endpoint averaged over those two subjects indicates a compensation of -1.16 ± 0.12 cm (mean ± SD), providing a closer correspondence to the mean of the true prior and to the results reported by Körding and Wolpert (2004).

Next, we examined the relationship between the imposed shift and cursor error (Fig. 4). Cursor error as a function of shift was averaged across 11 bins of applied shift values and plotted for all visual feedback conditions (representative subject in Fig. 4A). The slope of the linear fit was analyzed to investigate the relationship between cursor error and the imposed shift (Fig. 4B). Mauchly's test for sphericity was violated (p = .01), requiring Greenhouse–Geisser correction. According to the corrected repeated measures ANOVA, slope increased significantly (F(3,81) = 14.1, p = .002) with increasing uncertainty in the visual feedback. A planned comparison of the slopes between visual conditions indicated significant differences for all visual conditions except the σ0 condition, which was similar (p = .46) to the σM condition. This pattern, in which reliance on the learned prior is inversely related to the reliability of the visual feedback, is consistent with the Bayesian estimation model and inconsistent with both the full compensation and minimal mapping models.

The influence of visual uncertainty on cursor error was also evident when averaged across all the subjects tested (n = 7). The mean slope increased significantly with increasing visual uncertainty across three of the conditions (σM, σL, σ∞), although not for the clear condition (σ0; Fig. 4B). One possible explanation for this is that the clear and moderate uncertainty conditions provide highly similar information about the imposed shift (Fig. 1H). Although the stimuli used for the moderate uncertainty condition were randomly generated Gaussian point cloud distributions of 25 small translucent spheres with an SD of 1 cm, the center of the moderate uncertainty feedback is, on qualitative inspection, still relatively easy to discern. Hence, from the subject's perspective, there might have been little effective difference between the clear and moderate uncertainty feedback conditions, which could have produced the similar slopes observed across these two conditions. Nevertheless, the influence of visual uncertainty on cursor error remains significant for all other comparisons.

Experiment 2

In this experiment, a training phase of 1080 RH trials was followed by 1080 LH trials for both the CE and CI conditions (Fig. 2B,C). The percentage of IG was determined by comparing the mean endpoint (x̄) during late training (trials 980–1080) against the mean endpoint (x̄) from the early LH trials (1080–1180), as per Equation 1. To rule out the possibility that training differences between subjects participating in experiments 1 and 2 could influence our results, mean endpoints during late RH training (980–1080) were compared across all experimental groups. Endpoints were similar: -1.52 ± 0.2, -1.23 ± 0.32, and -1.34 ± 0.22 cm (mean ± SD in all cases) for the BSL, CE, and CI groups, respectively (Fig. 5A), and the observed differences were not significant (BSL vs CE, p = .069; CE vs CI, p = .31; BSL vs CI, p = .15). This result indicates that learning of the prior distribution for CE and CI subjects in experiment 2 was comparable to the learning that occurred for subjects in experiment 1. In addition, the mean endpoint during late training (980–1080) for BSL subjects was similar to both late testing (1980–2160) and the entire block of 1080 trials from the testing phase (early vs late, p = .63; early vs entire, p = .99; late vs entire, p = .64), thus reinforcing the use of late training reaches (980–1080) as a suitable indicator of prior learning.

Figure 5.

A comparison of endpoints (x̄) for all experimental groups. A, Mean endpoint during late RH training. B, Mean endpoints during early LH testing. All p values represent significance levels of independent samples t tests. Error bars denote SEM. Color coding is the same as in Figure 2.

During early LH trials (1080–1180), a mean endpoint of -1.22 ± 0.1 cm (mean ± SD) was observed for subjects in the CE group, which is similar to BSL endpoints during the same period (p = .094; Fig. 5B). This indicates strong (98%) generalization of the learned prior when the visual perturbation is congruent in an extrinsic reference frame. In contrast, an endpoint of 0.31 ± 0.26 cm (mean ± SD) during early LH reaches was observed for CI subjects, which is significantly different from both CE and BSL endpoints during the same period (p = .0001 in both cases; Fig. 5B). This indicates that the learned prior generalized only incompletely (23%) when the perturbation is congruent only in an intrinsic reference frame.

Although the mean endpoint averaged across the first 100 trials of the testing phase provides one measure of generalization, it may also reflect some degree of new learning with the opposite limb. We therefore performed a moving average window analysis (window size = five trials) on early LH reaches (1080–1180), which provides a higher temporal resolution measure (Fig. 6A,B). We also ran a post hoc analysis which confirmed that the selected window size provided a representative (relative frequency-preserving) sampling of visual uncertainty conditions, including σ∞ trials, and representative distributions of shift values. For the CE group, the moving average shows a mean endpoint of -0.56 ± 0.17 cm (45%) over the first five trials and -0.86 ± 0.29 cm (68%) over the next five trials, before the mean endpoints stabilize from trial 30 onward and become statistically indistinguishable across the remaining trial windows in the testing phase (1180–2160). The mean endpoint during this part of the testing phase was -1.22 ± 0.24 cm (Fig. 6C). For the CI group, the moving average shows a mean endpoint of -0.60 ± 0.18 cm (45%) in the first five trials and -0.75 ± 0.23 cm (55%) between 5 and 10 trials, with endpoints reaching a plateau at a mean of 0.37 ± 0.14 cm (30%) after 100 trials (Fig. 6B), which persisted for the remainder of the testing phase. The mean endpoint during this part of the testing phase was 0.51 ± 0.19 cm (Fig. 6D).
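A minimal MATLAB sketch of this moving-window computation follows (assumed variable names; window size of five trials as in the text).

% earlyLH: endpoint x values (cm) for the first 100 LH testing reaches of one group.
win   = 5;
nWin  = floor(numel(earlyLH) / win);
winMeans = zeros(1, nWin);
for k = 1:nWin
    winMeans(k) = mean(earlyLH((k - 1) * win + 1 : k * win));  % mean endpoint per window
end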

Figure 6.

Moving average plot for early LH (σ∞) trials. A, B, A moving average of endpoints across the first 100 trials in the LH testing phase for CE (A) and CI (B). Each bar represents the average across seven subjects using a window size of five trials. Error bars denote SD rather than SEM to reflect the disparate number of values included in each window. All p values represent significance levels from independent Welch t tests. C, D, Average endpoints for the CE (C) and CI (D) conditions for the remaining reaches in the testing phase (1180–2160). In C, D, error bars denote SEM. The dashed line represents the mean across reaches 1180–2160. Color coding is the same as in Figure 2.

Interestingly, there was no significant difference between mean endpoints for both CE and CI groups after five trials (p = .71) or after 10 trials (p = .39). Both CE and CI groups show the same percentage generalization of the prior mean (45%) during early LH reaches. Importantly, this pattern of results is not readily interpretable in terms of new learning or adaptation with the opposite limb. Since new learning would be expected to occur in the same reference frame as initial learning, this should produce endpoints that diverge rather than converge across the CE and CI groups. For example, if initial and new learning of the imposed perturbation are encoded in extrinsic coordinates, endpoints should tend toward -1 cm for the CE group and +1 cm for the CI group. By contrast, if initial and new learning are encoded in intrinsic coordinates, endpoints should tend toward +1 cm for the CE group and -1 cm for the CI group. No divergence in endpoints was observed during early LH reaches. Instead, a significant difference between CE and CI groups only begins to emerge after 15 trials (p = .003) and is maintained over the remaining trials, indicating that new learning involving the opposite limb does eventually take place.

Discussion

In the current study, we demonstrate that subjects integrate their current level of uncertainty about the available visual evidence with the prior distribution learned from the task to generate motor behavior that optimally compensates for the imposed shift. In other words, they perform Bayesian integration during sensorimotor learning. Our results therefore replicate those of Körding and Wolpert (2004). We also wanted to go beyond simply replicating these important results by further probing the nature of the underlying representations that are learned during the task. By investigating generalization of the learned prior distribution to the other limb, we were able to assess the reference frame in which initial learning occurs.

Both early CE and CI endpoints suggest the prior learned during RH adaptation is encoded in an extrinsic reference frame and this representation is available to the opposite limb. This finding parallels a number of earlier results demonstrating that intralimb generalization of visuomotor perturbations occurs in extrinsic coordinates (Flanagan and Rao, 1995; Wolpert et al., 1995; Vetter et al., 1999; Krakauer et al., 2000) and IG occurs in extrinsic coordinates (Imamizu and Shimojo, 1995; Sainburg and Wang, 2002; Wang and Sainburg, 2004, 2005; Taylor et al., 2011). Nevertheless, it differs from several more recent findings. For instance, Carroll et al. (2014) report strong and immediate interlimb transfer only when the (non-stochastic) visuomotor perturbation was congruent across both intrinsic and extrinsic reference frames. Transfer was limited when the visuomotor perturbation was only congruent in a single reference frame (19% during an extrinsic-congruent condition and 8% during an intrinsic-congruent condition). An important difference between the paradigm employed by Carroll et al. (2014) and the current study is that they used an isometric force aiming task in which subjects generated small forces or torques in their index finger that were in turn mapped into cursor movements. Isometric movements, which by definition involve muscle contraction without corresponding changes in joint angle and muscle length, differ from natural multi-joint movements both in terms of muscle activity and proprioception (Sergio et al., 2005). It is therefore plausible that visuomotor learning in an isometric task, which involves learning new dynamics, more likely requires coding in intrinsic, joint-based coordinates (Shadmehr and Mussa-Ivaldi, 1994; Rotella et al., 2015). Because our paradigm does not require subjects to learn a new mapping from forces into cursor movements, this could explain the lack of alignment between our findings and those of Carroll et al. (2014).

More recently, Poh et al. (2016) report stronger transfer when the visuomotor perturbation is congruent in both extrinsic and intrinsic coordinates as compared to when the perturbation is aligned in only a single extrinsic or intrinsic reference frame. Because they used a more standard (non-isometric) reaching task, the above considerations do not apply. Although their results appear at odds with those reported here, one important difference is that they focus on the degree of transfer relatively late in the testing phase with the untrained limb (specifically the last two blocks of “probe” trials). By contrast, our primary focus was on early transfer. Although paradigm differences make a direct comparison difficult, the pattern of transfer Poh et al. (2016) observed in early blocks of probe trials appears less consistent with their mixed reference frame conclusion than the transfer pattern observed in late blocks (Poh et al., 2016, their Fig. 4, p 1245).

Several other features of our IG data warrant discussion. One especially striking feature is the rapid adaptation following immediate generalization observed in both CE and CI groups. This may suggest the operation of cognitive strategies or heuristics, which can occur at fast timescales (Malfait and Ostry, 2004; Hwang et al., 2006). It has recently been argued that generalization between effectors and across workspaces may involve both implicit and explicit learning processes (Taylor and Ivry, 2013; Taylor et al., 2014; McDougle et al., 2015). With these distinct learning processes in mind, Poh et al. (2016) investigated the contribution of explicit processes to the transfer of visuomotor learning and found that explicit learning is typically encoded in extrinsic coordinates and is fully available early during opposite limb reaches. If explicit cognitive strategies are recruited, an early and abrupt error-corrective switch is predicted corresponding to the time at which subjects explicitly recognize a change in task context and adopt a novel explicit strategy (e.g., reach to the right of the target). This explicit strategy might help subjects achieve relatively rapid compensation in the task in contrast to the slower learning expected if an implicit, error-based process is exclusively relied on. Although an interesting source of speculation, the current paradigm was not designed to disentangle implicit and explicit learning processes.

Another interesting result is the learning plateau exhibited in the CI reaches over the course of the testing block (Fig. 6D). One possible explanation for this CI-specific effect might be anterograde interference. Interference has been demonstrated when a counter-rotation equal in magnitude but opposite in direction is learned shortly after an initial visuomotor rotation is learned and consolidated in memory (Wigmore et al., 2002; Krakauer et al., 2005). Once consolidation occurs, the newly acquired internal model is thought to become increasingly resistant to modification by a competing model (Krakauer, 2009). Not only does consolidation commence rapidly (significant consolidation after ∼5 min), but it appears to strengthen as a function of time and is strongly correlated with number of adaptation trials performed (Krakauer et al., 2005). Given that our subjects adapted to the extrinsically encoded perturbation over a large number of trials (n = 1080), consolidation may make the learned prior more resistant to change. Accordingly, consolidation predicts rapid and early stabilization toward a mean endpoint of -1 cm over the course of LH reaches for the CE group. In contrast, for the CI group, it is plausible that the consolidated prior is erroneously applied to visual shifts that are incongruent in extrinsic coordinates and remains difficult to unlearn leading to the observed plateau at a mean of 0.51 cm by the end of the testing block. Additional experiments are required to determine whether the anterograde interference hypothesis has merits.

Although the current study provides valuable information about the reference frame in which the mean of the learned prior generalizes across limbs, a number of important questions remain open. Since we were interested in replicating the results of Körding and Wolpert (2004), elements of their paradigm were preserved in our extension to the context of IG which placed limitations on the questions we could probe. For example, the current paradigm did not allow us to ask whether immediate generalization increases when the imposed visuomotor perturbation is congruent across both extrinsic and intrinsic reference frames (Carroll et al., 2014). Another limitation is that our study, like that of Körding and Wolpert (2004), was not designed to address the extent to which the visual likelihood was learned (Sato and Körding, 2014). Relatedly, our paradigm was not optimised to investigate likelihood integration when subjects switched to the untrained limb. Since generating our slope plots (Fig. 4) requires a full Gaussian distribution of imposed shift values for each visual uncertainty condition, it was not possible to address this question with the current design. Future studies with modified designs are required to address these and other important questions about Bayesian integration in sensorimotor learning.

In this study, we extended the findings of Körding and Wolpert (2004) to the context of interlimb generalization. We found that, in our task, the learned prior is available to the untrained limb and is coded in an extrinsic reference frame. These findings open pathways for future investigation into the nature of statistical learning in sensorimotor adaptation.

Footnotes

  • The authors declare no competing financial interests.

  • This work is supported by Australian Research Council Grants DE130100868 and DP170103148 and by the Australian Research Council Centre of Excellence for Cognition and its Disorders Grant CE110001021 (to P.F.S. and D.M.K.).

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Adams WJ, Graf EW, Ernst MO (2004) Experience can change the 'light-from-above' prior. Nat Neurosci 7:1057–1058. doi:10.1038/nn1312 pmid:15361877
  2. Alais D, Burr D (2004) The ventriloquist effect results from near-optimal bimodal integration. Curr Biol 14:257–262. doi:10.1016/j.cub.2004.01.029 pmid:14761661
  3. Bedford FL (1989) Constraints on learning new mappings between perceptual dimensions. J Exp Psychol Hum Percept Perform 15:232–248. doi:10.1037/0096-1523.15.2.232
  4. Beierholm UR, Quartz SR, Shams L (2009) Bayesian priors are encoded independently from likelihoods in human multisensory perception. J Vis 9:23. doi:10.1167/9.5.23
  5. Brayanov JB, Press DZ, Smith MA (2012) Motor memory is encoded as a gain-field combination of intrinsic and extrinsic action representations. J Neurosci 32:14951–14965. doi:10.1523/JNEUROSCI.1928-12.2012
  6. Carroll TJ, Poh E, de Rugy A (2014) New visuomotor maps are immediately available to the opposite limb. J Neurophysiol 111:2232–2243. doi:10.1152/jn.00042.2014 pmid:24598522
  7. Cunningham HA, Welch RB (1994) Multiple concurrent visual-motor mappings: implications for models of adaptation. J Exp Psychol Hum Percept Perform 20:987–999. doi:10.1037//0096-1523.20.5.987
  8. Doya K, ed (2007) Bayesian brain: probabilistic approaches to neural coding. Cambridge, MA: MIT Press.
  9. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. doi:10.1038/415429a
  10. Fernandes HL, Stevenson IH, Kording KP (2012) Generalization of stochastic visuomotor rotations. PLoS One 7:e43016. doi:10.1371/journal.pone.0043016 pmid:22916198
  11. Fernandes HL, Stevenson IH, Vilares I, Kording KP (2014) The generalization of prior uncertainty during reaching. J Neurosci 34:11470–11484. doi:10.1523/JNEUROSCI.3882-13.2014 pmid:25143626
  12. Fetsch CR, Turner AH, DeAngelis GC, Angelaki DE (2009) Dynamic reweighting of visual and vestibular cues during self-motion perception. J Neurosci 29:15601–15612. doi:10.1523/JNEUROSCI.2574-09.2009 pmid:20007484
  13. Fetsch CR, Gu Y, DeAngelis GC, Angelaki DE (2011) Self-motion perception: multisensory integration in extrastriate visual cortex. In: Sensory cue integration. New York, NY: Oxford University Press.
  14. Fetsch CR, DeAngelis GC, Angelaki DE (2013) Bridging the gap between theories of sensory cue integration and the physiology of multisensory neurons. Nat Rev Neurosci 14:429–442. doi:10.1038/nrn3503
  15. Flanagan JR, Rao AK (1995) Trajectory adaptation to a nonlinear visuomotor transformation: evidence of motion planning in visually perceived space. J Neurophysiol 74:2174–2178. doi:10.1152/jn.1995.74.5.2174
  16. Georgopoulos AP, Kalaska JF, Caminiti R, Massey JT (1982) On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex. J Neurosci 2:1527–1537. doi:10.1523/JNEUROSCI.02-11-01527.1982
  17. Ghahramani Z, Wolpert DM, Jordan MI (1996) Generalization to local remappings of the visuomotor coordinate transformation. J Neurosci 16:7085–7096. doi:10.1523/JNEUROSCI.16-21-07085.1996
  18. Ghez C, Krakauer J (2000) The organization of movement. In: Principles of neural science, pp 653–674. New York, NY: McGraw-Hill.
  19. Ghilardi MF, Gordon J, Ghez C (1995) Learning a visuomotor transformation in a local area of work space produces directional biases in other areas. J Neurophysiol 73:2535–2539. doi:10.1152/jn.1995.73.6.2535
  20. Hayashi T, Yokoi A, Hirashima M, Nozaki D (2016) Visuomotor map determines how visually guided reaching movements are corrected within and across trials. eNeuro 3.
  21. Hwang EJ, Smith MA, Shadmehr R (2006) Dissociable effects of the implicit and explicit memory systems on learning control of reaching. Exp Brain Res 173:425–437. doi:10.1007/s00221-006-0391-0
  22. Imamizu H, Shimojo S (1995) The locus of visual-motor learning at the task or manipulator level: implications from intermanual transfer. J Exp Psychol Hum Percept Perform 21:719–733. pmid:7643045
  23. Izawa J, Shadmehr R (2008) On-line processing of uncertain information in visuomotor control. J Neurosci 28:11360–11368. doi:10.1523/JNEUROSCI.3063-08.2008 pmid:18971478
  24. Kersten D, Yuille A (2003) Bayesian models of object perception. Curr Opin Neurobiol 13:150–158. pmid:12744967
  25. Knill DC, Pouget A (2004) The Bayesian brain: the role of uncertainty in neural coding and computation. Trends Neurosci 27:712–719. doi:10.1016/j.tins.2004.10.007 pmid:15541511
  26. Knill DC, Richards W, eds (1996) Perception as Bayesian inference. New York, NY: Cambridge University Press.
  27. Körding KP, Wolpert DM (2004) Bayesian integration in sensorimotor learning. Nature 427:244–247. doi:10.1038/nature02169 pmid:14724638
  28. Krakauer JW (2009) Motor learning and consolidation: the case of visuomotor rotation. In: Progress in motor control, pp 405–421. Boston, MA: Springer.
  29. Krakauer JW, Ghilardi MF, Ghez C (1999) Independent learning of internal models for kinematic and dynamic control of reaching. Nat Neurosci 2:1026–1031. doi:10.1038/14826
  30. Krakauer JW, Pine ZM, Ghilardi MF, Ghez C (2000) Learning of visuomotor transformations for vectorial planning of reaching trajectories. J Neurosci 20:8916–8924. pmid:11102502
  31. Krakauer JW, Ghez C, Ghilardi MF (2005) Adaptation to visuomotor transformations: consolidation, interference, and forgetting. J Neurosci 25:473–478. doi:10.1523/JNEUROSCI.4218-04.2005
  32. Leow LA, Gunn R, Marinovic W, Carroll TJ (2017) Estimating the implicit component of visuomotor rotation learning by constraining movement preparation time. J Neurophysiol 118:666–676.
  33. Ma WJ, Pouget A (2008) Linking neurons to behavior in multisensory perception: a computational review. Brain Res 1242:4–12. doi:10.1016/j.brainres.2008.04.082
  34. Ma WJ, Beck JM, Latham PE, Pouget A (2006) Bayesian inference with probabilistic population codes. Nat Neurosci 9:1432–1438. doi:10.1038/nn1790
  35. Malfait N, Ostry DJ (2004) Is interlimb transfer of force-field adaptation a cognitive response to the sudden introduction of load? J Neurosci 24:8084–8089. doi:10.1523/JNEUROSCI.1742-04.2004
  36. Malfait N, Shiller DM, Ostry DJ (2002) Transfer of motor learning across arm configurations. J Neurosci 22:9656–9660. pmid:12427820
  37. McDougle SD, Bond KM, Taylor JA (2015) Explicit and implicit processes constitute the fast and slow processes of sensorimotor learning. J Neurosci 35:9568–9579. doi:10.1523/JNEUROSCI.5061-14.2015
  38. McGuire LM, Sabes PN (2009) Sensory transformations and the use of multiple reference frames for reach planning. Nat Neurosci 12:1056–1061. doi:10.1038/nn.2357
  39. Moran DW, Schwartz AB (1999) Motor cortical representation of speed and direction during reaching. J Neurophysiol 82:2676–2692. doi:10.1152/jn.1999.82.5.2676
  40. Paz R, Vaadia E (2009) Learning from learning: what can visuomotor adaptations tell us about the neuronal representation of movement? In: Progress in motor control, pp 221–242. Boston, MA: Springer.
  41. ↵
    Poggio T, Bizzi E (2004) Generalization in vision and motor control. Nature 431:768. doi:10.1038/nature03014 pmid:15483597
    OpenUrlCrossRefPubMed
  42. ↵
    Poh E, Carroll TJ, Taylor JA (2016) Effect of coordinate frame compatibility on the transfer of implicit and explicit learning across limbs. J Neurophysiol 116:1239–1249. doi:10.1152/jn.00410.2016
    OpenUrlCrossRefPubMed
  43. ↵
    Pouget A, Beck JM, Ma WJ, Latham PE (2013) Probabilistic brains: knowns and unknowns. Nat Neurosci 16:1170–1178 . doi:10.1038/nn.3495 pmid:23955561
    OpenUrlCrossRefPubMed
  44. ↵
    Rao RP (2004) Bayesian computation in recurrent neural circuits. Neural Comput 16:1–38. pmid:15006021
    OpenUrlCrossRefPubMed
  45. ↵
    Rao RP, Olshausen BA, Lewicki MS (2002) Probabilistic models of the brain: perception and neural function. Cambridge, MA: MIT Press.
  46. ↵
    Rotella MF, Nisky I, Koehler M, Rinderknecht MD, Bastian AJ, Okamura AM (2015) Learning and generalization in an isometric visuomotor task. J Neurophysiol 113:1873–1884. doi:10.1152/jn.00255.2014 pmid:25520430
    OpenUrlCrossRefPubMed
  47. ↵
    Saijo N, Gomi H (2010) Multiple motor learning strategies in visuomotor rotation. PLoS One 5:e9399. doi:10.1371/journal.pone.0009399 pmid:20195373
    OpenUrlCrossRefPubMed
  48. ↵
    Sainburg RL, Wang J (2002) Interlimb transfer of visuomotor rotations: independence of direction and final position information. Exp Brain Res 145:437–447. doi:10.1007/s00221-002-1140-7
    OpenUrlCrossRefPubMed
  49. ↵
    Sato Y, Körding KP (2014) How much to trust the senses: likelihood learning. J Vis 14:13. doi:10.1167/14.13.13 pmid:25398975
    OpenUrlAbstract/FREE Full Text
  50. ↵
    Scott SH, Gribble PL, Graham KM, Cabel DW (2001) Dissociation between hand motion and population vectors from neural activity in motor cortex. Nature 413:161. doi:10.1038/35093102
    OpenUrlCrossRefPubMed
  51. ↵
    Sergio LE, Hamel-Pâquet C, Kalaska JF (2005) Motor cortex neural correlates of output kinematics and kinetics during isometric-force and arm-reaching tasks. Journal of neurophysiology 94:2353–2378. pmid:15888522
    OpenUrlCrossRefPubMed
  52. ↵
    Shadmehr R (2004) Generalization as a behavioral window to the neural mechanisms of learning internal models. Hum Mov Sci 23:543–568. doi:10.1016/j.humov.2004.04.003
    OpenUrlCrossRefPubMed
  53. ↵
    Shadmehr R, Mussa-Ivaldi FA (1994) Adaptive representation of dynamics during learning of a motor task. J Neurosci 14:3208–3224. pmid:8182467
    OpenUrlAbstract/FREE Full Text
  54. ↵
    Sober SJ, Sabes PN (2003) Multisensory integration during motor planning. J Neurosci 23:6982–6992. doi:10.1523/JNEUROSCI.23-18-06982.2003
    OpenUrlAbstract/FREE Full Text
  55. ↵
    Sober SJ, Sabes PN (2005) Flexible strategies for sensory integration during motor planning. Nat Neurosci 8:490–497 . doi:10.1038/nn1427
    OpenUrlCrossRefPubMed
  56. ↵
    Stocker AA, Simoncelli EP (2006) Noise characteristics and prior expectations in human visual speed perception. Nat Neurosci 9:578–585 . doi:10.1038/nn1669
    OpenUrlCrossRefPubMed
  57. ↵
    Tassinari H, Hudson TE, Landy MS (2006) Combining priors and noisy visual cues in a rapid pointing task. J Neurosci 26:10154–10163. doi:10.1523/JNEUROSCI.2779-06.2006 pmid:17021171
    OpenUrlAbstract/FREE Full Text
  58. ↵
    Taylor JA, Ivry RB (2013) Implicit and explicit processes in motor learning. Action Sci 63–87.
  59. ↵
    Taylor JA, Wojaczynski GJ, Ivry RB (2011) Trial-by-trial analysis of intermanual transfer during visuomotor adaptation. J Neurophysiol 106:3157–3172. doi:10.1152/jn.01008.2010
    OpenUrlCrossRefPubMed
  60. ↵
    Taylor JA, Krakauer JW, Ivry RB (2014) Explicit and implicit contributions to learning in a sensorimotor adaptation task. J Neurosci 34:3023–3032. doi:10.1523/JNEUROSCI.3619-13.2014 pmid:24553942
    OpenUrlAbstract/FREE Full Text
  61. ↵
    Thoroughman KA, Shadmehr R (2000) Learning of action through adaptive combination of motor primitives. Nature 407:742–747 . doi:10.1038/35037588 pmid:11048720
    OpenUrlCrossRefPubMed
  62. ↵
    van Beers RJ, Sittig AC, van der Gon Denier JJ (1996) How humans combine simultaneous proprioceptive and visual position information. Exp Brain Res 111:253–261. doi:10.1007/BF00227302
    OpenUrlCrossRefPubMed
  63. ↵
    van Beers RJ, Sittig AC, van der Gon Denier JJ (1999) Integration of proprioceptive and visual position-information: an experimentally supported model. J Neurophysiol 81:1355–1364. doi:10.1152/jn.1999.81.3.1355
    OpenUrlCrossRefPubMed
  64. ↵
    Vetter P, Goodbody SJ, Wolpert DM (1999) Evidence for an eye-centered spherical representation of the visuomotor map. J Neurophysiol 81:935–939. doi:10.1152/jn.1999.81.2.935 pmid:10036291
    OpenUrlCrossRefPubMed
  65. ↵
    Wang J, Sainburg RL (2003) Mechanisms underlying interlimb transfer of visuomotor rotations. Exp Brain Res 149:520–526. doi:10.1007/s00221-003-1392-x
    OpenUrlCrossRefPubMed
  66. ↵
    Wang J, Sainburg RL (2004) Limitations in interlimb transfer of visuomotor rotations. Exp Brain Res 155:1–8. doi:10.1007/s00221-003-1691-2
    OpenUrlCrossRefPubMed
  67. ↵
    Wang J, Sainburg RL (2005) Adaptation to visuomotor rotations remaps movement vectors, not final positions. J Neurosci 25:4024–4030. doi:10.1523/JNEUROSCI.5000-04.2005
    OpenUrlAbstract/FREE Full Text
  68. ↵
    Wei K, Körding K (2010) Uncertainty of feedback and state estimation determines the speed of motor adaptation. Front Comput Neurosci 4:11.
    OpenUrlCrossRefPubMed
  69. ↵
    Weiss Y, Simoncelli EP, Adelson EH (2002) Motion illusions as optimal percepts. Nat Neurosci 5:598–604 . doi:10.1038/nn858
    OpenUrlCrossRefPubMed
  70. ↵
    Wigmore V, Tong C, Flanagan JR (2002) Visuomotor rotations of varying size and direction compete for a single internal model in a motor working memory. J Exp Psychol Hum Percept Perform 28:447–457 . doi:10.1037//0096-1523.28.2.447
    OpenUrlCrossRefPubMed
  71. ↵
    Wolpert DM, Ghahramani Z, Jordan MI (1995) Are arm trajectories planned in kinematic or dynamic coordinates? An adaptation study. Exp Brain Res 103:460–470. pmid:7789452
    OpenUrlPubMed
  72. ↵
    Wu HG, Smith MA (2013) The generalization of visuomotor learning to untrained movements and movement sequences based on movement vector and goal location remapping. J Neurosci 33:10772–10789. doi:10.1523/JNEUROSCI.3761-12.2013
    OpenUrlAbstract/FREE Full Text

Synthesis

Reviewing Editor: Li Li, New York University Shanghai

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Iris Vilares.

Both reviewers believe the manuscript addresses an important question in the field. However, they both raised concerns that should be addressed before it can be considered for publication.

Reviewer 1:

In the present study, the authors examined intermanual transfer of learned visuomotor adaptation under the Bayesian integration framework first proposed by Körding and Wolpert (2004). First, the authors largely replicated the original results of Körding and Wolpert (2004) using a different experimental apparatus. They then found that the learned adaptation transferred from the right arm to the left arm with comparable accuracy, and that this comparable accuracy occurred only for extrinsic coordinates (same adaptation direction) and not for intrinsic coordinates (different adaptation direction).
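For context, the Bayes-optimal estimate of the imposed shift in this framework, assuming a Gaussian prior with mean $\mu_p$ and variance $\sigma_p^2$ and visual feedback $x_s$ corrupted by noise of variance $\sigma_s^2$ (symbols here are illustrative rather than taken from the manuscript), is the reliability-weighted combination

$$\hat{x} \;=\; \frac{\sigma_s^{2}\,\mu_p + \sigma_p^{2}\,x_s}{\sigma_s^{2} + \sigma_p^{2}},$$

so that noisier feedback pulls the estimate, and hence the compensatory reach, toward the prior mean.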

Overall, this study makes a novel contribution to the study of visuomotor adaptation within the Bayesian framework, and the experimental design and data analysis are clear and convincing.

General comments:

1. The authors mention in a footnote that the KINARM robot was utilized. More rationale may be needed as to why this device can achieve results comparable to the optical tracking system used in Körding and Wolpert (2004).

2. This work is based on a large number of experimental trials: 1080 across all training stages. Therefore, besides the perturbation prior, other aspects of the motor system, such as the sensory likelihood, may also have been learned and consolidated during training. It would be better to clarify this in the Discussion.

3. The authors discuss the use of the Late and Early stages in the Results of Experiment 2, but logically this should also be stated earlier, in the experimental design section. These stages are important for the “early transfer” measure, which is a key feature of this work, so the authors should clearly and explicitly explain the purpose of these special stages.

Specific comments:

1. The figure caption is missing for Figure 2.

2. The initial letter of “bayesian” in the title should be capitalized.

3. Page 6, main text, line 3, x cm

4. Page 8, main text, line -6, 3:1`:1:1

5. Page 18, lines 1-2: missing reference for the Bayesian Estimation Model and the Full Compensation and Minimal Mapping Models (Körding and Wolpert 2004?).

Reviewer 2:

In this paper the authors set out to: 1) replicate the key findings of Kording & Wolpert (2004) in terms of how people perform in a target-shift visuomotor task (i.e., whether or not their behavior is Bayesian); and 2) determine whether learning in a visuomotor task generalizes from one limb to the other and, if so, whether it does so in intrinsic and/or extrinsic coordinates.

They were able to replicate the key findings of Kording & Wolpert (2004) and also found that visuomotor learning in one limb can generalize to the other limb. Furthermore, consistent generalization occurred only when the perturbations imposed in the task followed an extrinsic reference frame, suggesting that learning in the task occurred in extrinsic coordinates.

The questions in the paper are pertinent and well defined, and the methods are appropriate for addressing them. The paper is well written and generally clear. I particularly enjoyed the methods section and the (generally) clear explanations provided therein. I do, however, have a few suggestions/concerns that I believe should be addressed before the paper is published:

Major:

- My main concern is with the number of participants per experiment (which for some of the experiments was only 7). While these participant numbers are common in the sensorimotor field, and hence the authors should not be penalized for following the field's convention, this limitation should be addressed in the discussion. Furthermore, as it is unlikely that 7 participants are representative of the whole human population, I would suggest rephrasing/toning down some of the extrapolations in the text (e.g., in the discussion, I would suggest replacing “We found that the learned prior is available to the untrained limb and is coded in an extrinsic reference frame” with “We found that, in our task, the learned prior was available to the untrained limb and was coded in an extrinsic reference frame”...).

- How were participants recruited? Could this have created some bias?

- Results: In the results of Experiment 2, it is stated that the mean endpoints during late RH training were “highly similar”. However, the mean endpoint of the first group (-1.52) was not that similar to those of the other two groups, and was in fact nearly significantly different (p = 0.069). While we cannot say that they were significantly different (which, given the small sample size, is not surprising), it is somewhat misleading to state that they are “highly similar”. The same comment applies to the reported comparison of mean endpoints during early LH trials. This could be further discussed.

- There were almost no limitations reported in the discussion section.

Minor:

- Methods: What is the color of the end target (the one participants have to reach)? Is it also red/blue? And what is the color of the cloud of feedback dots? The text refers to translucent dots, but it is unclear what the base color is (maybe white, if we go by Fig. 1h?).

- LH/RH: I assume these stand for left hand (LH) and right hand (RH), but the first time they are used they should be spelled out, with the abbreviation in parentheses.

- Figure 4: the legend is not very clear. Namely, in Fig. 4a, what do the error bars stand for? Fig. 4b: it refers to error bars, but there are none. Are the error bars referred to here the ones displayed in Fig. 4a? If so, the explanation should go in the legend of 4a, not 4b, as it can be confusing otherwise. Fig. 4b should nevertheless have error bars (associated with the slopes); one way these could be obtained is sketched after this list.

- Fig. 5 and Fig. 6: these could benefit from additional editing to be clearer/more visually appealing (i.e., to be more like Figures 3 and 4).

- When was the additional Baseline Task that 2 of the subjects performed carried out, and did these participants also perform the test phase with the LH?
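A minimal sketch of one way the slope error bars mentioned above could be obtained, assuming a percentile bootstrap over resampled trial pairs (the array names and synthetic data are illustrative only, not taken from the study):

import numpy as np

rng = np.random.default_rng(0)

def bootstrap_slope_ci(x, y, n_boot=10000, alpha=0.05):
    """Percentile-bootstrap confidence interval for a least-squares slope.

    x, y : paired 1-D arrays (e.g., imposed shift and compensatory response;
    hypothetical names, any paired data would do).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = x.size
    slopes = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)                   # resample pairs with replacement
        slopes[i] = np.polyfit(x[idx], y[idx], 1)[0]  # slope of the refitted line
    return np.quantile(slopes, [alpha / 2, 1 - alpha / 2])

# Illustrative use with synthetic data
x = np.linspace(-2, 2, 50)
y = 0.7 * x + rng.normal(0, 0.3, size=x.size)
print(bootstrap_slope_ci(x, y))  # prints the lower and upper 95% CI bounds

Resampling trials and refitting gives an empirical distribution of slopes whose quantiles can serve directly as error bars for Fig. 4b.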


Keywords

  • Bayesian integration
  • interlimb generalization
  • motor learning
  • sensorimotor learning
  • transfer
  • visuomotor adaptation
