Research Article: New Research, Sensory and Motor Systems

Visual Uncertainty Unveils the Distinct Role of Haptic Cues in Multisensory Grasping

Ivan Camponogara and Robert Volcic
eNeuro 31 May 2022, 9 (3) ENEURO.0079-22.2022; DOI: https://doi.org/10.1523/ENEURO.0079-22.2022
Ivan Camponogara
1Division of Science, New York University Abu Dhabi, Abu Dhabi, 129188, United Arab Emirates
Robert Volcic
1Division of Science, New York University Abu Dhabi, Abu Dhabi, 129188, United Arab Emirates
2Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, 129188, United Arab Emirates

Abstract

Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs peripheral vision) and the availability of haptic cues during multisensory grasping. We found a multisensory benefit regardless of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the roles of the distinct haptic cues. The haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental in fine-tuning the grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of nonvisual sensory inputs in sensorimotor control and hint at the potential contributions of the haptic modality in developing and maintaining visuomotor functions.

  • grasping
  • haptics
  • multisensory integration
  • peripheral vision
  • visual uncertainty

Significance Statement

The longstanding view of vision as the primary sense we rely on to guide grasping movements relegates the equally important haptic inputs, such as touch and proprioception, to a secondary role. Here, we show that, as visual uncertainty increases during visuo-haptic grasping, the central nervous system exploits distinct haptic inputs about the object position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about the object position are fundamental in supporting vision to enhance grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that nonvisual inputs serve an important, previously underappreciated, functional role in grasping.

Introduction

A large proportion of grasping actions are directed toward objects we can sense with multiple modalities. For instance, when grasping with one hand an object we already hold in the other hand, the properties of the object, such as its size and position in space, are provided by both vision and haptics (touch and proprioception). The integration of these redundant sensory cues fosters a consistently superior grasping performance compared with when movements are guided by each modality alone (Camponogara and Volcic, 2019a,b). Even more intriguingly, the same superior grasping performance is achieved when the haptic size cue is not provided and vision is complemented by only the haptic position cue (Camponogara and Volcic, 2021b).

The elusive effect of the haptic size cue in the multisensory integration process might result from two different causes. The superior performance in multisensory grasping might arise from visuo-haptic integration at the level of the position cues only, which would reduce the uncertainty about the position of the object in space (Carey and Allan, 1996; Battaglia et al., 2010; Sperandio et al., 2013; Chen et al., 2018). As a consequence, the object size estimation would be solely determined by vision (Camponogara and Volcic, 2021b). Alternatively, visuo-haptic integration might occur both at the level of the position cues and at the level of the size cues, but the dominance of the more reliable visual size cue would completely overshadow the haptic size cue, making it hard to determine whether multisensory size information is truly integrated.

The main aim of this study was to contrast these two alternative explanations by disrupting visual information during multisensory grasping. We manipulated the quality of visual information by modulating the participants’ gaze direction, so that grasping actions were executed in either central (foveal) or peripheral vision. Because visual acuity sharply declines with retinal eccentricity (Strasburger et al., 2011; Rosenholtz, 2016), estimates of object size and position are noticeably impaired in peripheral compared with central vision (Collier, 1931; Newsome, 1972; Schneider et al., 1978; Thompson and Fowler, 1980; Bock, 1993; Goodale and Murphy, 1997; Brown et al., 2005; Baldwin et al., 2016). Moreover, multisensory integration studies in perception have shown that, as the quality of visual information gradually declines, object size estimation shifts toward more haptically based perceptual judgments (Derrick and Dewar, 1970; Heller, 1983; Ernst and Banks, 2002; Gepshtein and Banks, 2003; Helbig and Ernst, 2007; Van Doorn et al., 2010). It might thus be expected that increasing visual uncertainty through peripheral vision should also let the effect of the haptic size cue emerge in multisensory grasping.

Compared with movements in central vision, grasping movements in peripheral vision are generally slower, with larger grip apertures and poorer grip aperture scaling (Sivak and MacKenzie, 1990, 1992; Goodale and Murphy, 1997; Watt et al., 2000; Brown et al., 2005; Schlicht and Schrater, 2007; Hesse et al., 2012). Introducing additional haptic cues might thus refine grasping movements in several ways, depending on the contribution of the haptic position and size cues. The integration of the haptic position cue would reduce the overall positional uncertainty, which would translate into faster movements and narrower grip apertures. Analogously, the contribution of the haptic size cue would diminish the uncertainty about the object size and would be revealed by improved grip aperture scaling. However, if the haptic size cue is not part of the integration process, the sensitivity to changes in object size should remain unaffected.

We tested these predictions in two experiments. In the first experiment, we contrasted grasping performance under peripheral vision conditions with (pVH) or without (pV) additional haptic cues against the central vision conditions (V, VH) and a haptic-only (H) condition. In the second experiment, we further teased apart the contribution of haptic cues when grasping handheld objects in peripheral vision by selectively withdrawing the haptic size cue and providing the haptic position cue only (pVHP).

Experiment 1

Materials and methods

Participants

Eighteen participants took part in this experiment (four male, age 25.3 ± 8.2). All had normal or corrected-to-normal vision and no known history of neurologic disorders. All of the participants were naive to the purpose of the experiment and were provided with a subsistence allowance. The experiment was undertaken with the understanding and informed written consent of each participant and the experimental procedures were approved by the Institutional Review Board of New York University Abu Dhabi.

Apparatus

The set of stimuli consisted of three 3D-printed rectangular cuboids with depths of 40, 50, and 60 mm, all of the same height (120 mm) and width (25 mm). A chin rest was positioned at the edge of the experimental table and its height was adjusted such that the participants’ eyes were 440 mm above the table surface. During the experiment, the three target objects were positioned 350 mm in the sagittal direction with respect to the table’s edge. Thus, in the peripheral vision condition, the top of the objects was at ∼45° of eccentricity with respect to the participants’ gaze (Fig. 1A). This eccentricity allowed us to increase the visual uncertainty without completely eliminating the availability of visual cues (Goodale and Murphy, 1997; Schlicht and Schrater, 2007). A custom-made eye-tracker was attached to the left rod of the chin rest with a locking arm (JB01291-BWW). The eye-tracker consisted of a modified webcam (Vivitar V49252) with a sampling frequency of 30 Hz. An array of 25 infrared LEDs was positioned on the table, 40 cm in front of and 30 cm to the left of the participant. The activation and deactivation of the LEDs were controlled by an Arduino Yún board via MATLAB (MathWorks Inc.) with a custom program, which also computed the pupil coordinates from the sampled eye images. The start position of the right hand was defined by a 5-mm-high rubber bump with a diameter of 9 mm attached at the edge of the table, 450 mm to the right of the participants’ midline. The experiment was conducted in a dark room with the experimental table illuminated by an LED desk light (5 W) positioned on the left side of the participant.
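As a back-of-the-envelope check (our own calculation, assuming the eyes sit roughly above the table’s edge and gaze is level with the fixation point at eye height), the eccentricity of the objects’ top follows from the eye height, object height, and object distance:

$$\theta \approx \arctan\!\left(\frac{440\ \text{mm} - 120\ \text{mm}}{350\ \text{mm}}\right) \approx 42^\circ,$$

which is consistent with the reported ∼45°.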

Figure 1.

A, Experimental setup. Participant’s head was resting on a chin rest. In the Haptic condition (H), participants were blindfolded. In the Visual (V) and Visuo-Haptic (VH) conditions, the participant’s gaze was directed toward the object. In the Visual-Peripheral (pV) and Visuo-Haptic-Peripheral (pVH) conditions, the participant was fixating a small white square on the frontoparallel board and thus the object was in peripheral vision at ∼45° of eccentricity with respect to the participant’s gaze direction. B, Representation of the task in each condition. The grasping action was always performed with the right hand. In H and VH, the participant was already holding the object with the left hand before the start of the grasping action. In V, only vision was available. The pV and pVH conditions were identical to the V and VH conditions, but the object was seen in peripheral vision.

A black panel (600 mm wide, 500 mm high) was positioned 450 mm from the participant (i.e., behind the object). A small white square (5 × 5 mm) positioned at the center of the panel, at a height of 440 mm, acted as the fixation point in the peripheral vision block. A cardboard panel (400 mm wide, 300 mm high) was used to prevent vision of the workspace (but not of the board with the fixation point) between trials in the central and peripheral vision blocks, whereas a pair of occlusion goggles (Red Scientific) was used to prevent vision in the Haptic condition. A pure tone of 1000 Hz and 100-ms duration signaled the start of the trial, while a 600 Hz tone of the same duration signaled its end.

Index, thumb, and wrist movements were acquired online at 200 Hz with submillimeter resolution using an Optotrak Certus system (Northern Digital Inc.). The position of the tip of each digit was calculated during the system calibration phase with respect to two rigid bodies defined by three infrared-emitting diodes attached to each distal phalanx (Nicolini et al., 2014). An additional marker was attached to the styloid process of the radius to monitor the movement of the wrist. The Optotrak system was controlled by the MOTOM toolbox (Derzsi and Volcic, 2018).

Procedure

Participants sat comfortably at the table with their torso touching its edge. All the trials started with the thumb and index digit of the right hand positioned on the start position, the left hand positioned on the left side of the chin rest and the head on the chin rest (Fig. 1A). The height of the chair was adjusted to keep the eyes at a fixed height to maintain the object at a fixed visual angle. Participants were required to perform a precision grip with their right thumb and index digit along the depth axis of the stimulus.

Before each trial, the cardboard panel was placed in front of the participant to cover the workspace, and the object was placed in its position 350 mm in front of the participant. The experimenter then removed the cardboard panel and, after a variable period, the start tone was delivered. The participant had to perform a right-handed reach-to-grasp action toward the object at a natural speed. No reaction time constraints were imposed. Three seconds after the start tone, the end tone was delivered, and the participant had to move the right hand back to the start position. The cardboard panel was then placed in front of the participant again, the object was set to the new required size, and the next trial started.

Five different conditions (Fig. 1B) were performed: Haptic (H), Visual (V), Visuo-Haptic (VH), Peripheral Vision (pV), and Peripheral Vision plus Haptic (pVH). In the H condition, vision was prevented for the whole duration of the condition. Before each trial, the experimenter signaled to the participant to hold the object with their left hand along its depth axis at its base (i.e., to sense its size and position by means of touch and proprioception). In the V condition, as soon as the cardboard panel was removed, the experimenter instructed the participant to look at the object, which was in the central visual field (the left hand was kept on the table close to the chin rest). In the VH condition, the participant had to hold the object at its base with their left hand and look at the object. The pV and pVH conditions were identical to the V and VH conditions except that participants were instructed to look at the fixation point instead of foveating the object, so that the target object was always in the visual periphery (Fig. 1A). Whereas in the pV condition only peripheral vision was available, in the pVH condition participants were asked to also hold the object at its base with their left hand. Eye fixations in these two conditions were monitored with the eye-tracker, which started sampling as soon as the experimenter placed the cardboard panel between the participant and the object (the cardboard panel was lower than the fixation point, but high enough to cover the target object), and stopped when the end-of-trial tone was delivered. If the algorithm detected an eye movement of ∼10 mm (∼1.3° of visual angle) in the horizontal or vertical direction from the fixation point, the trial was discarded and repeated later in the condition. The five conditions were divided into two main experimental blocks: the H, V, and VH conditions formed the Central vision block, whereas the pV and pVH conditions formed the Peripheral vision block.
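The online fixation-check routine itself is not reproduced here (the original implementation ran in MATLAB), but the logic described above is simple to sketch. The following R fragment is our illustrative reconstruction; the function and variable names are hypothetical:

```r
# Hypothetical sketch of the fixation check described above: a trial is
# flagged when the sampled pupil position deviates by more than ~10 mm
# (~1.3 deg of visual angle) horizontally or vertically from fixation.
fixation_broken <- function(pupil_xy, fixation_xy, threshold_mm = 10) {
  dx <- abs(pupil_xy[, 1] - fixation_xy[1])  # horizontal deviation per sample
  dy <- abs(pupil_xy[, 2] - fixation_xy[2])  # vertical deviation per sample
  any(dx > threshold_mm | dy > threshold_mm)
}

# Usage: if TRUE, discard the trial and re-queue it later in the condition.
# fixation_broken(samples_xy, fixation_xy = c(0, 0))
```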

The Central and Peripheral vision blocks were performed in sequence, while the order of the conditions (H, V, VH, pV, and pVH) was randomized within blocks and across participants. The differently sized objects were presented in a random order and ten repetitions were performed for each object size and condition, which led to a total of 150 trials per participant. To get accustomed to the task, participants underwent a training session of ten trials before each condition, for a total of 50 training trials.

Data analysis

Kinematic data were analyzed in R (R Core Team, 2020). The raw data were smoothed and differentiated with a third-order Savitzky–Golay filter with a window size of 21 points. These filtered data were then used to compute velocities and accelerations in three-dimensional space for each digit and the wrist. Movement onset was defined as the moment of the lowest, nonrepeating wrist acceleration value before the continuously increasing wrist acceleration values (Volcic and Domini, 2016; Camponogara and Volcic, 2019b), while the end of the grasping movement was defined on the basis of the Multiple Sources of Information method (Schot et al., 2010). We used the criteria that the grip aperture is close to the size of the object, that the grip aperture is decreasing, that the second derivative of the grip aperture is positive, and that the velocities of the wrist, thumb, and index finger are low. Moreover, the probability of a moment being the end of the movement decreased over time to capture the first instance in which the above criteria were met. Trials in which the end of the movement was not captured correctly or in which the missing marker samples could not be reconstructed using interpolation were discarded from further analysis. The exclusion of these trials (158 trials, 5.8% in total) left us with 2542 trials.
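To make the preprocessing concrete, here is a minimal R sketch of the smoothing step and the kinematic quantities derived from it (data layout and column names are our assumptions; the authors’ actual scripts are available at https://osf.io/dfycg/):

```r
library(signal)  # provides sgolayfilt()

fs <- 200  # Optotrak sampling rate (Hz)

# Third-order Savitzky-Golay filter with a 21-point window; the m argument
# selects the derivative order, so m = 1 returns the first derivative.
sg_smooth <- function(x) sgolayfilt(x, p = 3, n = 21, m = 0)
sg_vel    <- function(x) sgolayfilt(x, p = 3, n = 21, m = 1, ts = 1 / fs)

# 3D wrist speed for one trial (wrist is an assumed data frame with x, y, z)
wrist_speed   <- sqrt(sg_vel(wrist$x)^2 + sg_vel(wrist$y)^2 + sg_vel(wrist$z)^2)
peak_velocity <- max(wrist_speed)

# Grip aperture: Euclidean thumb-index distance; its maximum over the
# movement is the peak grip aperture analyzed below.
grip_aperture <- sqrt((thumb$x - index$x)^2 +
                      (thumb$y - index$y)^2 +
                      (thumb$z - index$z)^2)
peak_grip_aperture <- max(grip_aperture)
```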

We focused our analyses on two dependent variables: the peak grip aperture, defined as the maximum Euclidean distance between the thumb and the index finger, and the peak velocity of the hand movement, defined as the highest wrist velocity along the movement. We analyzed the data using Bayesian linear mixed-effects models estimated with the brms package (Bürkner, 2017), which implements Bayesian multilevel models in R using the probabilistic programming language Stan (Carpenter et al., 2017). The models included as fixed effects (predictors) the categorical variable Condition (H, V, VH, pV, and pVH) in combination with the continuous variable Size. The latter was centered before being entered in the models; thus, the estimates of the Condition parameters (βCondition) correspond to the average performance in each Condition. The estimates of the parameter Size (βSize) correspond instead to the change in the dependent variables as a function of the object size. All models included independent random (group-level) effects for subjects. Models were fitted with weakly informative prior distributions for each parameter to provide information about their plausible scale. We used Gaussian priors for the Condition fixed-effect predictor (peak grip aperture βCondition: mean = 90 and SD = 40; peak velocity βCondition: mean = 1100 and SD = 200). For the Size fixed-effect predictors, we used a Cauchy prior distribution centered at 0 with a scale parameter of 2.5. For the group-level standard deviation parameters and sigmas, we used Student t-distribution priors (peak grip aperture, all SD parameters and sigma: df = 3, scale = 10; peak velocity, all SD parameters and sigma: df = 3, scale = 170). Finally, we set a prior over the correlation matrix that assumes that smaller correlations are slightly more likely than larger ones (LKJ prior set to 2).

For each model we ran four Markov chains simultaneously, each for 16,000 iterations (1000 warm-up samples to tune the MCMC sampler) with the delta parameter set to 0.9, for a total of 60,000 post-warm-up samples. Chain convergence was assessed using the R̂ statistic (all values equal to 1) and visual inspection of the chain traces. Additionally, the predictive accuracy of the fitted models was estimated with leave-one-out cross-validation using Pareto-smoothed importance sampling. All Pareto k values were below 0.5.
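A schematic brms call consistent with the description above might look as follows. The formula, variable names, and coefficient labels are our reconstruction, not the authors’ exact script (which is available at https://osf.io/dfycg/):

```r
library(brms)

# Priors for the peak grip aperture model, following the description above.
# Coefficient names depend on the factor coding; check them with get_prior().
priors_pga <- c(
  set_prior("normal(90, 40)", class = "b"),          # Condition effects
  set_prior("student_t(3, 0, 10)", class = "sd"),    # group-level SDs
  set_prior("student_t(3, 0, 10)", class = "sigma"),
  set_prior("lkj(2)", class = "cor")                 # correlation matrix prior
)
# A Cauchy(0, 2.5) prior would additionally be set on each Size slope, e.g.,
# set_prior("cauchy(0, 2.5)", class = "b", coef = "conditionVH:size_c").

fit_pga <- brm(
  pga ~ 0 + condition + condition:size_c +           # size_c = centered Size
    (0 + condition + condition:size_c | subject),    # by-subject effects
  data = grasp_data, prior = priors_pga,
  chains = 4, iter = 16000, warmup = 1000,           # 4 x 15,000 = 60,000 draws
  control = list(adapt_delta = 0.9)
)

loo(fit_pga)  # PSIS-LOO cross-validation; Pareto k values should stay < 0.5
```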

The posterior distributions we obtained represent the probabilities of the parameters conditional on the priors, model, and data, and they express our belief that the “true” parameter lies within some interval with a given probability. We summarize these posterior distributions by computing the medians and the 95% highest density intervals (HDIs). The 95% HDI specifies the interval that includes the true value of a specific parameter with 95% probability. To evaluate the differences between parameters of two conditions, we simply subtracted the posterior distributions of the βCondition and βSize weights between the specific conditions. The resulting distributions are denoted as credible difference distributions and are again summarized by computing the medians and the 95% HDIs.

For statistical inferences about βSize, we assessed the overlap of the 95% HDI with zero. A 95% HDI that does not span zero indicates that the predictor has an effect on the dependent variable. For statistical inferences about the differences in the model parameters βCondition and βSize between conditions, we applied an analogous approach. A 95% HDI of the credible difference distribution that does not span zero is taken as evidence that the model parameters in the two conditions differ from each other. Data and code are available at https://osf.io/dfycg/.
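For illustration, and continuing the sketch above, the summary and contrast steps can be reproduced from the posterior draws along these lines (parameter names are assumptions that depend on the model’s factor coding):

```r
library(bayestestR)  # hdi()

draws <- as_draws_df(fit_pga)  # one column per model parameter

# Posterior median and 95% HDI for one condition's average performance
median(draws$b_conditionVH)
hdi(draws$b_conditionVH, ci = 0.95)

# Credible difference distribution: subtract draws between two conditions.
# If its 95% HDI does not span zero, the conditions credibly differ.
diff_vh_v <- draws$b_conditionVH - draws$b_conditionV
median(diff_vh_v)
hdi(diff_vh_v, ci = 0.95)
```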

Results and discussion

Based on previous results (Camponogara and Volcic, 2019a,b, 2021b), we predict that the multisensory condition in central vision (VH) should exhibit faster grasping movements with smaller peak grip apertures than the V and H unisensory conditions. Likewise, we expect the peripheral vision conditions (pV, pVH) to show a decline in performance with respect to their corresponding central vision conditions (V, VH), because peripheral vision is characterized by a higher visual uncertainty. However, two main scenarios are considered for the peripheral vision conditions. If haptic size is largely involved in the control of grasping, we expect faster movements, with narrower peak grip apertures and a better grip aperture scaling in pVH compared with pV. If haptic size does not play a relevant role, we expect actions in pVH to be faster and with narrower peak grip apertures than in pV, but with no improvement in grip aperture scaling, that is, the sensitivity to changes in object size would be equivalent to the pV condition.

We confirmed that movements performed in central vision were faster and with a narrower peak grip aperture in the multisensory condition compared with each unisensory condition (Fig. 2A,C; Camponogara and Volcic, 2019a,b, 2021b). Interestingly, the same pattern of results was also found in peripheral vision (Fig. 2B,D), confirming that haptics and vision are integrated even when vision is degraded. As expected, actions were slower and were performed with a wider grip aperture in peripheral compared with central vision, in both unisensory and multisensory conditions (V vs pV and VH vs pVH; Fig. 2). Interestingly, while the peak grip aperture scaled similarly in V and VH (Fig. 2C), the scaling was stronger in pVH compared with pV (Fig. 2D), suggesting a different supporting role of haptics when acting in central and peripheral vision.

Figure 2.

Top row, Average peak velocity as a function of the object size in the central vision (A) and peripheral vision (B) blocks in experiment 1. Bottom row, Average peak grip aperture as a function of the object size in the central vision (C) and peripheral vision (D) blocks. Error bars represent the SEM. Dotted lines show the Bayesian mixed-effects regression model fits.

Central vision

In central vision, the peak velocity was modulated according to the available sensory information (Fig. 3A), with an advantage of multisensory over unisensory grasping, and of vision over haptics. The peak velocity was credibly higher in VH compared with V and H, and tended to be credibly higher in V compared with H (Fig. 3B). The peak velocity was not affected by changes in object size in any of the conditions, with slope values ranging between –0.1 and –0.65, corresponding to minimal variations in peak velocity between the smallest and the largest object (∼10 mm/s difference, equivalent to ∼1% of the average peak velocity).

Figure 3.

Experiment 1 results. Left column, Estimates of peak velocity for each condition (A) and their differences (B). Middle column, Estimates of peak grip aperture for each condition (C) and their differences (D). Right column, Estimates of the scaling of peak grip aperture for each condition (E) and their differences (F). The gray areas indicate the estimates and comparisons of the conditions in which the object was in visual periphery. The graphical elements in A, C, E represent the posterior β weights distributions, and the graphical elements in B, D, F represent the credible difference distributions of the Bayesian linear mixed-effects regression models. The white dots show the median, the boxes the 50% HDIs, and the areas between whiskers the 95% HDIs.

The peak grip aperture was also clearly affected by the available sensory inputs (Fig. 3C). Peak grip aperture was credibly smaller in the VH condition compared with the H condition, and in V compared with the H condition (Fig. 3D). Also, the peak grip aperture in VH tended to be smaller than in the V condition. These results replicate previous findings and further corroborate that the simultaneous availability of visual and haptic inputs leads to a multisensory advantage (Camponogara and Volcic, 2019b, 2021b).

The peak grip aperture scaled with object size in all conditions (Fig. 3E). The scaling was equivalent in the VH and V conditions, and stronger compared with the H condition (Fig. 3F). This can be considered as a sign that, in central vision, the peak grip aperture modulation in multisensory grasping is mainly based on the visual size cue, as suggested by previous studies (Camponogara and Volcic, 2021b).

Comparisons between central and peripheral vision

Additional haptic inputs affected both peak velocity and peak grip aperture also in peripheral vision (Fig. 3A,C). As observed for central vision, holding the object with the contralateral hand facilitated faster movements and reduced grip apertures, highlighting again the beneficial role of haptics.

The concurrent availability of peripheral vision and haptics enabled faster movements compared with when either of the two modalities was presented in isolation (Fig. 3B, pV–pVH and H–pVH comparisons). As expected, reach-to-grasp actions toward peripherally seen objects were slower than those toward centrally seen objects. The peak velocity was credibly lower in pVH compared with VH, and there was a tendency for a credibly lower peak velocity in pV compared with V (Fig. 3B, pV–V and pVH–VH comparisons).

The peak grip aperture credibly increased when the object was in peripheral compared with central vision, both with or without the support of concurrent haptic information (Fig. 3D, pV–V and pVH–VH comparisons). However, the switch from central to peripheral vision increased peak grip apertures more strongly when the grasping behavior was not supported by additional haptic information (pV–V vs pVH–VH). The effect of adding haptic information to peripheral vision resulted in credibly narrower peak grip apertures (Fig. 3D, pV–pVH comparison), whereas adding peripheral vision to haptics led to only a minor improvement (Fig. 3D, H–pVH comparison).

The availability of concurrent haptic size and position cues also partially prevented the typical worsening of the scaling of the grip aperture when grasping is guided only by peripheral vision (Fig. 3E). Object size scaling was credibly weaker in peripheral compared with central vision (Fig. 3F, pV–V and pVH–VH comparisons), but the scaling of the peak grip aperture was credibly stronger in the pVH condition compared with the pV condition (Fig. 3F, pV–pVH comparison), which was, in turn, identical to the H condition (Fig. 3F, H–pVH comparison). It is interesting to note that while in central vision the grip aperture scaled similarly in the unisensory visual and in the multisensory conditions (Fig. 3F, V–VH), the grip aperture in visual periphery scaled more strongly in the multisensory compared with the unisensory visual condition (Fig. 3F, pV–pVH). This suggests that haptic object position and size information are flexibly used according to the quality of visual information.

Figure 4A summarizes all the conditions in terms of peak velocity, peak grip aperture, and the scaling of the peak grip aperture as a function of object size. Conditions are ordered from worst (larger grip apertures and lower velocity) to best (smaller grip apertures and higher velocity) grasping performance along the diagonal connecting the top-left to the bottom-right corner, with dot sizes from smaller to larger indicating the respective slopes of the peak grip aperture. Two aspects are again evident here. First, both conditions with peripheral vision (pV and pVH) are inferior to their respective central vision conditions (V and VH). Second, complementing peripheral vision with haptic inputs leads to a superior grasping performance compared with actions guided only by peripheral vision (pVH vs pV). Interestingly, in peripheral vision, haptics improved the grip aperture, the peak velocity, and the overall scaling of the peak grip aperture to a greater extent than in central vision.

Figure 4.

Summary of experiment 1 results. A, Relationship between the peak grip aperture and the peak velocity. The areas of the dots represent the slope of the peak grip aperture as a function of the object size (the higher the slope the larger the dot). B, Relationship between the grip aperture and the wrist velocity from the start to the end of the movement. The lines representing each condition were obtained by resampling each movement trajectory in 201 steps evenly spaced along the three-dimensional path and by then averaging the grip aperture and movement velocity over all participants and sizes for each step of the space-normalized movement trajectory. C, Slope of the grip aperture along the space-normalized trajectory. The slope values and their SEs (shaded regions) were computed by fitting a linear model with the grip aperture as a function of the object size for each step of the space-normalized trajectory. The dots represent the point of the trajectory at which the peak grip aperture occurred.

Figure 4B represents the covariation of the wrist velocity and the grip aperture from the start to the end of the movement. The highest value reached by each curve along the horizontal axis represents the point of the movement trajectory at which the peak velocity occurred, and, similarly, the highest value of each curve along the vertical axis represents the peak grip aperture. Just after movement start, the curves clustered into two groups, one including the conditions with haptic information (H, VH, and pVH) and one including those without haptics (V and pV); a sign that the initial movement velocity and grip aperture in multisensory conditions were mainly under haptic control. These groups dissolved before the curves reached the peak velocity and the evolution of each curve was affected by the available sensory information. In contrast, the curves representing the changes in the scaling of the peak grip aperture formed three groups of conditions which stayed separated until movement end (Fig. 4C). The slopes were similar between H and pVH, and between V and VH, with flatter slopes for the first (H, pVH) than for the second group (V, VH). Instead, the pV condition showed a distinct slope profile with very weak scaling which persisted almost until movement end.
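The space normalization described in the Figure 4 caption can be sketched as follows (our reconstruction; the resampling uses the cumulative 3D path length of the wrist, and all names are hypothetical):

```r
# Resample a per-trial signal (grip aperture or wrist speed) at 201 points
# evenly spaced along the 3D wrist path, as in Figure 4B,C.
resample_along_path <- function(wrist_xyz, signal, n_steps = 201) {
  seg <- sqrt(rowSums(diff(as.matrix(wrist_xyz))^2))  # segment lengths
  s   <- c(0, cumsum(seg))                            # cumulative path length
  s_new <- seq(0, max(s), length.out = n_steps)
  approx(x = s, y = signal, xout = s_new)$y           # linear interpolation
}

# For Figure 4C, a linear model of grip aperture on object size is then fit
# at each of the 201 steps; the size coefficient gives the slope profile:
# slope_k <- coef(lm(aperture_at_step_k ~ size, data = step_k_data))["size"]
```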

Experiment 2

The results of experiment 1 show that, as in central vision, actions toward handheld objects in peripheral vision are performed faster and with narrower grip apertures than those toward only (peripherally) seen objects. This suggests that visual and haptic inputs are successfully integrated even when vision is disrupted. However, the partially restored grip aperture scaling observed in peripheral multisensory grasping could have two different origins, one that incorporates the haptic size cue and one that does not. If the haptic size cue is critical for hand shaping in peripheral multisensory grasping, we expect that its removal would lead to a peak grip aperture and scaling resembling those observed in the pV condition. Instead, if hand shaping is mainly determined by visual size cues that are improved by the availability of haptic positional information, as seen in central vision (Camponogara and Volcic, 2021b), the haptic position cue should be sufficient to attain the same level of peak grip aperture and scaling as when all haptic cues are provided. As long as the haptic position cue is available, the presence or absence of the haptic size cue should not affect peak velocities, which should be higher than when only peripheral vision is available. To tease apart the relative contributions of these haptic inputs, we systematically manipulated the availability of the haptic size cue. In the Peripheral Vision plus Haptic Position condition (pVHP), we introduced a new set of objects that were identical to those used in experiment 1, but had the lower half replaced by a post whose size did not co-vary with the size of the objects (Fig. 5A). Thus, in the pVHP condition, participants held the post with their left hand (Fig. 5B), which provided haptic positional but no relevant size information, while simultaneously seeing the object in the periphery. This pVHP condition was performed by a new group of participants together with the pV and pVH conditions, which were the same as in experiment 1.

Figure 5.

A, Example of a stimulus used in the Peripheral Vision plus Haptic Position (pVHP) condition of experiment 2. B, Left hand holding the post during the pVHP condition. Average peak velocity (C) and peak grip aperture (D) as a function of the object size in experiment 2. Error bars represent the SEM. Dotted lines show the Bayesian mixed-effects regression model fits.

Materials and methods

Participants

Eighteen new participants took part in experiment 2 (six male, age 20.7 ± 3.5). All had normal or corrected-to-normal vision and no known history of neurologic disorders. All of the participants were naive to the purpose of the experiment and were provided with a subsistence allowance. The experiment was undertaken with the understanding and informed written consent of each participant and the experimental procedures were approved by the Institutional Review Board of New York University Abu Dhabi.

Apparatus

The experimental setup was the same as in experiment 1 (Fig. 1A), except that two sets of stimuli were used: the first set was the same as in the first experiment (Fig. 1A), whereas the second set consisted of five rectangular cuboids of 60-mm height supported by a 60-mm-high post that was 10 mm deep and 25 mm wide (Fig. 5A). The upper part of these stimuli was identical to the first set of stimuli and thus varied in depth across trials. The post supporting the upper part had instead a fixed depth.

Procedure

The procedure was the same as for the Peripheral vision block of experiment 1. In the pV and pVH conditions, the first set of objects was presented (Fig. 1A). In the pVHP condition, the second set of objects was used (Fig. 5A). In this case, participants held the base of the post that supported the target object with their left hand (Fig. 5B). Thus, while in the pVH condition haptic inputs were informative about both the object size and position, in the pVHP condition haptic inputs provided only positional object information. Therefore, peripheral vision was the only source of object size information.

The order of the conditions (pV, pVH, and pVHP) was randomized across participants. Object sizes were randomized within each condition and 15 trials were performed for each object size and condition, which led to a total of 135 trials per participant. Before each condition, participants underwent a training session of ten trials to get accustomed to the task, for a total of 30 training trials.

Data analysis

The raw data processing and the statistical analyses were identical to those of experiment 1. Based on the same exclusion criteria, a total of 276 trials (11.3% in total) were excluded, which left us with 2154 trials for the final analysis. As in experiment 1, we focused our analyses on the peak grip aperture and the peak velocity of the hand movement. The R̂ statistic and visual inspection of the chain traces confirmed successful chain convergence. All Pareto k values were below 0.5. As in experiment 1, we report the posterior distributions of βCondition and βSize for each condition, and contrast the different conditions by computing the differences between the posterior distributions for each predictor. Data and code are available at https://osf.io/dfycg/.

Results and discussion

Results showed that movements were performed faster and with a narrower grip aperture in the multisensory conditions (pVH, pVHP) than in the unisensory (pV) condition. Interestingly, movements were equally fast and had a similar grip aperture with (pVH) or without (pVHP) the haptic size cue (Fig. 5C,D). However, removing the haptic size cue considerably weakened the scaling of the grip aperture, which scaled less than when both the haptic size and position cues were available (Fig. 5D).

Movements supported by haptic inputs were faster than in the unisensory visual condition (Fig. 6A), with a credibly higher peak velocity in pVH and pVHP compared with pV (Fig. 6B). Interestingly, as we observed for central vision (Camponogara and Volcic, 2021b), no differences in peak velocity were found between the pVH and pVHP conditions, confirming that the integration of vision and haptics is mainly concerned with the position of the object. As in experiment 1, the peak velocity was insensitive to changes in object size. The size effect spans the [0, –0.13] range, which corresponds to a variation in peak velocity of 2.6 mm/s from the smallest to the largest object (∼0.2% of the average peak velocity).

Figure 6.

Experiment 2 results. Left column, Estimates of peak velocity for each condition (A) and their differences (B). Middle column, Estimates of peak grip aperture for each condition (C) and their differences (D). Right column, Estimates of the scaling of peak grip aperture for each condition (E) and their differences (F). The graphical elements in A, C, E represent the posterior β weights distributions and the graphical elements in B, D, F represent the credible difference distributions of the Bayesian linear mixed-effects regression models. The white dots show the median, the boxes the 50% HDIs, and the areas between whiskers the 95% HDIs.

The analysis of the peak grip aperture reaffirmed the advantage of the multisensory over the unisensory conditions (Fig. 6C). Peak grip aperture was credibly larger in the pV condition than in the pVH condition (Fig. 6D). The peak grip aperture was also credibly larger in pV compared with pVHP, and similar between the pVH and pVHP conditions. Most importantly, providing only haptic positional information was not sufficient to accurately scale the grip aperture according to the object size (Fig. 6E). We found that the peak grip aperture increased credibly less as a function of object size in pV and pVHP compared with pVH, and it was similar between the pV and pVHP conditions (Fig. 6F). Thus, in degraded visual conditions the haptic positional information speeds up movements and decreases the grip aperture, but the haptic size cue is essential to modulate the grip aperture according to the object size (Fig. 7A). Notably, the grip apertures and the movement velocities in the pVH and pVHP conditions were almost indistinguishable from the beginning to the end of the movement and clearly separated from the pV condition, emphasizing the specific role of the haptic position cue in improving action performance (Fig. 7B). However, as can be seen in Figure 7C, the haptic size cue was crucial to refine the hand shaping around the object by improving the grip aperture scaling along the whole movement trajectory.

Figure 7.

Summary of experiment 2 results. A, Relationship between the peak grip aperture and the peak velocity. The areas of the dots represent the slope of the peak grip aperture as a function of the object size (the higher the slope the larger the dot). B, Relationship between the grip aperture and the wrist velocity from the start to the end of the movement. Conditions are represented by the lines obtained by resampling each movement trajectory in 201 steps evenly spaced along the three-dimensional path and by then averaging the grip aperture and movement velocity over all participants and sizes for each step of the movement trajectory. C, Slope of the grip aperture along the space-normalized trajectory. The slope values and their SEs (shaded regions) were computed by fitting a linear model with the grip aperture as a function of the object size for each step of the space-normalized trajectory. The dots represent the point of the trajectory where the peak grip aperture occurred.

Discussion

There are two key findings of the present research. First, we found that the integration of visual and haptic object features for multisensory guided grasping occurs not only when vision is superior to haptics, but also when vision is disrupted to the extent that it becomes the less reliable modality. Second, we found that the integration of vision and haptics for multisensory guided grasping comprises both position and size cues, with the greater benefits gained by the contribution of the haptic position cue.

Visually guided grasping in central vision clearly outperformed haptically guided grasping, but it was severely degraded when vision was only peripheral. Irrespective of the quality of visual information, we observed pronounced improvements when both vision and haptics were simultaneously available. Multisensory guided movements were faster than movements in the fastest of the unisensory conditions, and grip apertures tended to be smaller than in the smallest of the unisensory conditions. These findings show that the process of multisensory integration for grasping actions obeys the same rules observed in studies on visuo-haptic reaching (Camponogara and Volcic, 2021a) and visuo-haptic perception (Derrick and Dewar, 1970; Heller, 1983; Ernst and Banks, 2002; Gepshtein and Banks, 2003; Helbig and Ernst, 2007; Wijntjes et al., 2009; Van Doorn et al., 2010). Thus, there is evidence in both perception and action that multisensory integration is not a rigid process in which vision simply dominates over haptics (Rock and Victor, 1964; Hay et al., 1965; Rock and Harris, 1967; Power and Graham, 1976), but is instead a flexible process that balances the contributions of vision and haptics depending on the quality of each source of information.

With regard to the role of the separate haptic cues, we found that enriching peripheral visual information with only the haptic position cue was sufficient to increase movement velocity and reduce the grip aperture as much as when the haptic size cue was also available. It is known that the localization of objects can be strongly impaired when they are placed in visually eccentric (peripheral) positions (Bock, 1993; Henriques et al., 1998; Henriques and Crawford, 2000; Bartolo et al., 2018). This increased positional uncertainty could be the primary cause of the worsened grasping performance usually observed when only peripheral vision is available (Sivak and MacKenzie, 1990, 1992; Goodale and Murphy, 1997; Watt et al., 2000; Brown et al., 2005; Schlicht and Schrater, 2007; Hesse et al., 2012). Our results clearly support the view that visual and haptic position cues are integrated to reduce the overall positional uncertainty, which positively influences the quality of grasping movements even when visual information is severely degraded (Chen et al., 2018; Camponogara and Volcic, 2021b). This does not exclude, though, that the uncertainty about object size also affects grasping movements.

The role of the haptic size cue was indeed revealed by how the grip aperture scaled according to object size. When both the haptic position and haptic size cues were provided together with peripheral vision, the scaling of the grip aperture improved with respect to the peripheral vision only condition and was comparable to the scaling observed in the haptics only condition. This could have indicated that the refined scaling resulted either from a reduced uncertainty about the object size, driven by the availability of the haptic size cue, or from a reduced uncertainty about the object position, driven by the availability of the haptic position cue. Our results exclude the latter explanation. Providing only the haptic position cue with peripheral vision was not sufficient to induce the level of scaling observed when the haptic size cue was also available. Thus, the haptic size cue played a necessary role, because its removal indeed weakened the scaling of the grip aperture to the level of the peripheral vision only condition. The contributing role of the haptic size cue in reducing the overall size uncertainty is further reinforced by the evolution of the grip aperture scaling along the whole movement trajectory. Scaling along the trajectory in the multisensory peripheral vision conditions was identical to the haptics only condition when the haptic size cue was present, and identical to the peripheral vision only condition when the haptic size cue was absent.

An additional aspect worth commenting on concerns the relationship between the peak grip aperture and its scaling. The fact that the peak grip aperture scales reliably with changes in object size (with a slope of ∼0.7) is an established property of normal grasping movements (Marteniuk et al., 1990; Jakobson and Goodale, 1991). It has also been shown that in degraded visual conditions (e.g., when visual feedback is removed or when switching from binocular to monocular vision) the peak grip aperture increases and the grip aperture scaling weakens (Churchill et al., 2000; Watt and Bradshaw, 2000; Melmoth and Grant, 2006; Keefe and Watt, 2009; Hesse et al., 2016; Keefe et al., 2019). All our results conform to this behavior except for the multisensory condition in which only the haptic position cue was provided together with peripheral vision. Here, the grip aperture scaling markedly decreased without a parallel increase in the peak grip aperture. This means that, if needed, the grip aperture and its scaling can be controlled independently according to the demands of a specific situation, which can lead to grasping movements of generally higher quality in which collisions with objects are strategically avoided.

The interpretation of the present results is based on the idea that an estimate of object size is necessary for the formation of reach-to-grasp movements. An alternative view, the digit-in-space framework, posits that grasping kinematics follow from the movements of the individual digits toward specific positions in space, which correspond to the grasping points of the digits on the object (Smeets and Brenner, 1999; Verheij et al., 2012; Smeets et al., 2019). Variations in grasping movement execution should thus be expected if haptics, vision, and peripheral vision provide estimates of grasping points that differ in accuracy and/or precision. And, when more than one sense is simultaneously available, movement execution should be expected to improve compared with movements guided by each modality alone. Additionally, when the haptically sensed grasping points are closer to each other than those sensed by vision, the jointly estimated grasping points should be drawn toward the center of the object, making the differences in object size appear less distinct than they actually are, which should directly affect the emerging peak grip aperture and its scaling. Thus, the results presented here are also compatible with the digit-in-space framework. However, Camponogara and Volcic (2021b) previously reported an instance in which the results do not seem to be fully captured by this line of reasoning: the observed benefits on grasping movements in central vision were equal regardless of the congruence between the positions of the haptic and visual grasping points. A further element to be considered is that the improvements observed in multisensory grasping could also be a consequence of more effective sensorimotor transformations (Tagliabue and McIntyre, 2014; Kuling et al., 2016, 2017). Future studies will need to single out edge conditions in multisensory grasping for which these views predict different outcomes.

The associations between the visual and the haptic modality are not innate, but are rather characterized by a high degree of plasticity. Vision and haptics achieve calibration during development through constant cross-sensory comparisons (Gori et al., 2008). Moreover, studies on cataract-treated participants have shown that the restoration of visual object recognition (Held et al., 2011; Chen et al., 2016) and the acquisition of multisensory integration (Senna et al., 2021) are possible within a brief period after surgery by exploiting the cross-modal interactions between vision and touch. Haptic and visual recalibration is also visible in adults following visuomotor adaptation tasks (Volcic et al., 2013; Wiesing et al., 2021), which might be related to the strong couplings that exist between the senses and movement control (Steinbach and Held, 1968; Bock, 1987; Maiello et al., 2018). Our results complement these findings and raise the intriguing possibility that the haptic modality available during sensorimotor interactions with the environment could be effective in learning or restoring visuomotor functions during development and throughout the lifespan.

In sum, our results clearly support the view that visuo-haptic integration for grasping occurs both at the level of the position cues and at the level of the size cues, confirming the hypothesis that, in optimal visual conditions, the effect of the haptic size cue is usually masked by the dominance of the more reliable visual size cue. When vision was disrupted, both the haptic position and haptic size cues played a relevant role in shaping the grasping movements. It is, however, important to note that most of the advantages in multisensory grasping stem from the contribution of the haptic position cue. As previously suggested (Camponogara and Volcic, 2021b), a sensorimotor system can achieve greater robustness if it relies on the integration of visuo-haptic object features that systematically co-occur (e.g., position) rather than on features that can frequently differ between the two sensory modalities because of variations in object shape (e.g., size).

Footnotes

  • The authors declare no competing financial interests.

  • This work was supported by the New York University Abu Dhabi (NYUAD) Research Enhancement Fund Grant RE183 and the NYUAD Center for Artificial Intelligence and Robotics, funded by Tamkeen under the NYUAD Research Institute Award CG010.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

1. Baldwin J, Burleigh A, Pepperell R, Ruta N (2016) The perceived size and shape of objects in peripheral vision. i-Perception 7:2041669516661900. doi:10.1177/2041669516661900 pmid:27698981
2. Bartolo A, Rossetti Y, Revol P, Urquizar C, Pisella L, Coello Y (2018) Reachability judgement in optic ataxia: effect of peripheral vision on hand and target perception in depth. Cortex 98:102–113. doi:10.1016/j.cortex.2017.05.013 pmid:28625347
3. Battaglia PW, Di Luca M, Ernst MO, Schrater PR, Machulla T, Kersten D (2010) Within- and cross-modal distance information disambiguate visual size-change perception. PLoS Comput Biol 6:e1000697. pmid:20221263
4. Bock O (1987) Coordination of arm and eye movements in tracking of sinusoidally moving targets. Behav Brain Res 24:93–100. doi:10.1016/0166-4328(87)90247-6 pmid:3593529
5. Bock O (1993) Localization of objects in the peripheral visual field. Behav Brain Res 56:77–84. doi:10.1016/0166-4328(93)90023-J pmid:8397856
6. Brown LE, Halpert BA, Goodale MA (2005) Peripheral vision for perception and action. Exp Brain Res 165:97–106. pmid:15940498
7. Bürkner PC (2017) brms: an R package for Bayesian multilevel models using Stan. J Stat Soft 80:1–28. doi:10.18637/jss.v080.i01
8. Camponogara I, Volcic R (2019a) Grasping adjustments to haptic, visual, and visuo-haptic object perturbations are contingent on the sensory modality. J Neurophysiol 122:2614–2620. doi:10.1152/jn.00452.2019 pmid:31693442
9. Camponogara I, Volcic R (2019b) Grasping movements toward seen and handheld objects. Sci Rep 9:3665. doi:10.1038/s41598-018-38277-w pmid:30842478
10. Camponogara I, Volcic R (2021a) A brief glimpse at a haptic target is sufficient for multisensory integration in reaching movements. Vision Res 185:50–57. doi:10.1016/j.visres.2021.03.012 pmid:33895647
11. Camponogara I, Volcic R (2021b) Integration of haptics and vision in human multisensory grasping. Cortex 135:173–185. pmid:33383479
12. Carey DP, Allan K (1996) A motor signal and “visual” size perception. Exp Brain Res 110:482–486. doi:10.1007/BF00229148 pmid:8871107
13. Carpenter B, Gelman A, Hoffman MD, Lee D, Goodrich B, Betancourt M, Brubaker M, Guo J, Li P, Riddell A (2017) Stan: a probabilistic programming language. J Stat Soft 76:1–32. doi:10.18637/jss.v076.i01
14. Chen J, Wu ED, Chen X, Zhu LH, Li X, Thorn F, Ostrovsky Y, Qu J (2016) Rapid integration of tactile and visual information by a newly sighted child. Curr Biol 26:1069–1074. doi:10.1016/j.cub.2016.02.065 pmid:27040777
15. Chen J, Sperandio I, Goodale MA (2018) Proprioceptive distance cues restore perfect size constancy in grasping, but not perception, when vision is limited. Curr Biol 28:927–932. doi:10.1016/j.cub.2018.01.076
16. Churchill A, Hopkins B, Rönnqvist L, Vogt S (2000) Vision of the hand and environmental context in human prehension. Exp Brain Res 134:81–89. pmid:11026729
17. Collier R (1931) An experimental study of form perception in indirect vision. J Comp Psychol 11:281–290. doi:10.1037/h0075361
18. Derrick E, Dewar R (1970) Visual-tactual dominance relationship as a function of accuracy of tactual judgment. Percept Mot Skills 31:935–939. pmid:5498198
19. Derzsi Z, Volcic R (2018) MOTOM toolbox: Motion Tracking via Optotrak and Matlab. J Neurosci Methods 308:129–134. pmid:30009842
20. Ernst MO, Banks MS (2002) Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. doi:10.1038/415429a
21. Gepshtein S, Banks MS (2003) Viewing geometry determines how vision and haptics combine in size perception. Curr Biol 13:483–488. pmid:12646130
22. Goodale MA, Murphy KJ (1997) Action and perception in the visual periphery. In: Parietal lobe contributions to orientation in 3D space (Thier P, Karnath HO, eds), pp 447–461. New York: Springer.
23. Gori M, Del Viva M, Sandini G, Burr DC (2008) Young children do not integrate visual and haptic form information. Curr Biol 18:694–698. doi:10.1016/j.cub.2008.04.036
24. Hay JC, Pick HL, Ikeda K (1965) Visual capture produced by prism spectacles. Psychon Sci 2:215–216. doi:10.3758/BF03343413
25. Helbig HB, Ernst MO (2007) Optimal integration of shape information from vision and touch. Exp Brain Res 179:595–606. pmid:17225091
26. Held R, Ostrovsky Y, de Gelder B, Gandhi T, Ganesh S, Mathur U, Sinha P (2011) The newly sighted fail to match seen with felt. Nat Neurosci 14:551–553. pmid:21478887
27. Heller MA (1983) Haptic dominance in form perception with blurred vision. Perception 12:607–613. doi:10.1068/p120607 pmid:6676712
28. Henriques DYP, Crawford JD (2000) Direction-dependent distortions of retinocentric space in the visuomotor transformation for pointing. Exp Brain Res 132:179–194. pmid:10853943
29. Henriques DYP, Klier EM, Smith MA, Lowy D, Crawford JD (1998) Gaze-centered remapping of remembered visual space in an open-loop pointing task. J Neurosci 18:1583–1594. doi:10.1523/JNEUROSCI.18-04-01583.1998
30. Hesse C, Ball K, Schenk T (2012) Visuomotor performance based on peripheral vision is impaired in the visual form agnostic patient DF. Neuropsychologia 50:90–97. pmid:22085864
31. Hesse C, Miller L, Buckingham G (2016) Visual information about object size and object position are retained differently in the visual brain: evidence from grasping studies. Neuropsychologia 91:531–543. pmid:27663865
32. Jakobson L, Goodale MA (1991) Factors affecting higher-order movement planning: a kinematic analysis of human prehension. Exp Brain Res 86:199–208. pmid:1756790
33. Keefe BD, Watt SJ (2009) The role of binocular vision in grasping: a small stimulus-set distorts results. Exp Brain Res 194:435–444. pmid:19198815
34. Keefe BD, Suray PA, Watt SJ (2019) A margin for error in grasping: hand pre-shaping takes into account task-dependent changes in the probability of errors. Exp Brain Res 237:1063–1075. doi:10.1007/s00221-019-05489-z
35. Kuling IA, Brenner E, Smeets JB (2016) Errors in visuo-haptic and haptic-haptic location matching are stable over long periods of time. Acta Psychol (Amst) 166:31–36. pmid:27043253
36. Kuling IA, van der Graaff MC, Brenner E, Smeets JB (2017) Matching locations is not just matching sensory representations. Exp Brain Res 235:533–545. doi:10.1007/s00221-016-4815-1
37. Maiello G, Kwon M, Bex PJ (2018) Three-dimensional binocular eye–hand coordination in normal vision and with simulated visual impairment. Exp Brain Res 236:691–709. pmid:29299642
38. Marteniuk RG, Leavitt JL, MacKenzie CL, Athenes S (1990) Functional relationships between grasp and transport components in a prehension task. Hum Mov Sci 9:149–176. doi:10.1016/0167-9457(90)90025-9
    OpenUrlCrossRef
  39. ↵
    Melmoth DR, Grant S (2006) Advantages of binocular vision for the control of reaching and grasping. Exp Brain Res 171:371–388. pmid:16323004
    OpenUrlCrossRefPubMed
  40. ↵
    Newsome L (1972) Visual angle and apparent size of objects in peripheral vision. Percept Psychophys 12:300–304. doi:10.3758/BF03207209
    OpenUrlCrossRef
  41. ↵
    Nicolini C, Fantoni C, Mancuso G, Volcic R, Domini F (2014) A framework for the study of vision in active observers. In: Proc. of SPIE, Human vision and electronic imaging XIX (Rogowitz BE, Pappas TN, and de Ridder H, eds), Vol. 9014, 901414. doi:10.1117/12.2045459
    OpenUrlCrossRef
  42. ↵
    Power RP, Graham A (1976) Dominance of touch by vision: generalization of the hypothesis to a tactually experienced population. Perception 5:161–166. doi:10.1068/p050161 pmid:951165
    OpenUrlCrossRefPubMed
  43. ↵
    R Core Team (2020) R: a language and environment for statistical computing [Computer software manual]. Vienna. Retrieved from https://www.R-project.org/.
  44. ↵
    Rock I, Harris CS (1967) Vision and touch. Sci Am 216:96–107. pmid:6042536
    OpenUrlPubMed
  45. ↵
    Rock I, Victor J (1964) Vision and touch: an experimentally created conflict between the two senses. Science 143:594–596. doi:10.1126/science.143.3606.594 pmid:14080333
    OpenUrlAbstract/FREE Full Text
  46. ↵
    Rosenholtz R (2016) Capabilities and limitations of peripheral vision. Annu Rev Vis Sci 2:437–457. doi:10.1146/annurev-vision-082114-035733 pmid:28532349
    OpenUrlCrossRefPubMed
  47. ↵
    Schlicht EJ, Schrater PR (2007) Effects of visual uncertainty on grasping movements. Exp Brain Res 182:47–57. pmid:17503025
    OpenUrlCrossRefPubMed
  48. ↵
    Schneider B, Ehrlich DJ, Stein R, Flaum M, Mangel S (1978) Changes in the apparent lengths of lines as a function of degree of retinal eccentricity. Perception 7:215–223. pmid:652481
    OpenUrlCrossRefPubMed
  49. ↵
    Schot WD, Brenner E, Smeets JBJ (2010) Robust movement segmentation by combining multiple sources of information. J Neurosci Methods 187:147–155. pmid:20096305
    OpenUrlCrossRefPubMed
  50. ↵
    Senna I, Andres E, McKyton A, Ben-Zion I, Zohary E, Ernst MO (2021) Development of multisensory integration following prolonged early-onset visual deprivation. Curr Biol 31:4879–4885. pmid:34534443
    OpenUrlCrossRefPubMed
  51. ↵
    Sivak B, MacKenzie CL (1990) Integration of visual information and motor output in reaching and grasping: the contributions of peripheral and central vision. Neuropsychologia 28:1095–1116. doi:10.1016/0028-3932(90)90143-C pmid:2267060
    OpenUrlCrossRefPubMed
  52. ↵
    Sivak B, MacKenzie CL (1992) The contributions of peripheral vision and central vision to prehension. In: Vision and Motor Control (Protcau L, Elliott D, eds), pp 233–259. San Diego: Elsevier.
  53. ↵
    Smeets JB, Brenner E (1999) A new view on grasping. Motor Control 3:237–271. pmid:10409797
    OpenUrlCrossRefPubMed
  54. ↵
    Smeets JB, van der Kooij K, Brenner E (2019) A review of grasping as the movements of digits in space. J Neurophysiol 122:1578–1597. pmid:31339802
    OpenUrlCrossRefPubMed
  55. ↵
    Sperandio I, Kaderali S, Chouinard PA, Frey J, Goodale MA (2013) Perceived size change induced by nonvisual signals in darkness: the relative contribution of vergence and proprioception. J Neurosci 33:16915–16923. doi:10.1523/JNEUROSCI.0977-13.2013
    OpenUrlAbstract/FREE Full Text
  56. ↵
    Steinbach MJ, Held R (1968) Eye tracking of observer-generated target movements. Science 161:187–188. pmid:5657071
    OpenUrlAbstract/FREE Full Text
  57. ↵
    Strasburger H, Rentschler I, Jüttner M (2011) Peripheral vision and pattern recognition: a review. J Vis 11:13. pmid:22207654
    OpenUrlAbstract/FREE Full Text
  58. ↵
    Tagliabue M, McIntyre J (2014) A modular theory of multisensory integration for motor control. Front Comput Neurosci 8:1. doi:10.3389/fncom.2014.00001 pmid:24550816
    OpenUrlCrossRefPubMed
  59. ↵
    Thompson JG, Fowler KA (1980) The effects of retinal eccentricity and orientation on perceived length. J Gen Psychol 103:227–232. pmid:7441222
    OpenUrlCrossRefPubMed
  60. ↵
    Van Doorn GH, Richardson BL, Wuillemin DB, Symmons MA (2010) Visual and haptic influence on perception of stimulus size. Atten Percept Psychophys 72:813–822. doi:10.3758/APP.72.3.813 pmid:20348585
    OpenUrlCrossRefPubMed
  61. ↵
    Verheij R, Brenner E, Smeets JBJ (2012) Grasping kinematics from the perspective of the individual digits: a modelling study. PLoS One 7:e33150. pmid:22412997
    OpenUrlCrossRefPubMed
  62. ↵
    Volcic R, Domini F (2016) On-line visual control of grasping movements. Exp Brain Res 234:2165–2177. pmid:26996387
    OpenUrlCrossRefPubMed
  63. ↵
    Volcic R, Fantoni C, Caudek C, Assad JA, Domini F (2013) Visuomotor adaptation changes stereoscopic depth perception and tactile discrimination. J Neurosci 33:17081–17088. pmid:24155312
    OpenUrlAbstract/FREE Full Text
  64. ↵
    Watt SJ, Bradshaw MF (2000) Binocular cues are important in controlling the grasp but not the reach in natural prehension movements. Neuropsychologia 38:1473–1481. doi:10.1016/S0028-3932(00)00065-8
    OpenUrlCrossRefPubMed
  65. ↵
    Watt SJ, Bradshaw MF, Rushton SK (2000) Field of view affects reaching, not grasping. Exp Brain Res 135:411–416. doi:10.1007/s002210000545
    OpenUrlCrossRefPubMed
  66. ↵
    Wiesing M, Kartashova T, Zimmermann E (2021) Adaptation of pointing and visual localization in depth around the natural grasping distance. J Neurophysiol 125:2206–2218. pmid:33949885
    OpenUrlPubMed
  67. ↵
    Wijntjes MWA, Volcic R, Pont SC, Koenderink JJ, Kappers AML (2009) Haptic perception disambiguates visual perception of 3D shape. Exp Brain Res 193:639–644. pmid:19199097
    OpenUrlCrossRefPubMed

Synthesis

Reviewing Editor: Christoph Michel, Université de Genève

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Guido Maiello.

Both reviewers agree that the study is interesting and that the experiments are well designed and executed, providing some novel insights into the role of haptic cues in the absence of reliable visual cues. However, both reviewers have some major and minor comments that should be considered in a revision of the paper.

An important aspect that both reviewers point out is the relevance of object size information for the control of grasping, for which they refer to work by Smeets and Brenner. The possibility that position information is more relevant than size information needs to be discussed as a potential alternative interpretation of the results.

Another aspect, raised by one of the reviewers, is the relevance of where the fingers are placed on the stimuli and of their alignment with the fingers of the other hand. This point should also be discussed.

We also encourage you to consider sharing your data and the analysis code on a public repository if the paper is accepted for publication.

Finally, a few minor comments from Reviewer 1 should also be considered in the revision.

For your information, I append the relevant reviewer comments below.

Please also consider using estimation statistics, which eNeuro highly recommends.
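By way of illustration, estimation statistics report effect sizes together with uncertainty intervals rather than bare significance tests. Below is a minimal sketch in R (the language of the paper's analyses); the variable names and placeholder data are hypothetical, not taken from the study.

```r
# Bootstrapped 95% CI on a paired mean difference -- an estimation-statistics
# summary. 'vh_peak' and 'v_peak' are hypothetical per-participant peak
# velocities (mm/s) for two conditions, generated here as placeholder data.
set.seed(1)
vh_peak <- rnorm(15, mean = 1150, sd = 80)
v_peak  <- rnorm(15, mean = 1100, sd = 80)

diffs <- vh_peak - v_peak                       # paired differences
boot  <- replicate(10000, mean(sample(diffs, replace = TRUE)))
c(mean_diff = mean(diffs), quantile(boot, c(0.025, 0.975)))
```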

Reviewer 1:

Major Comment

Brenner and Smeets have been strongly advocating that object size information is not used to control grip aperture (for a recent review of this position, see Smeets et al., 2019). Rather, these authors propose that the individual digits are aimed at specific positions on the surface of the object, and that differences in grip aperture across object sizes are simply a correlate of this control strategy. I think this possibility should be discussed in the current manuscript. It is possible that even what the authors call “haptic size” information could be haptic position information. I would ask the authors to discuss whether this possibility is consistent with their results, and whether it affects the interpretation of their findings.

Smeets, J. B., van der Kooij, K., & Brenner, E. (2019). A review of grasping as the movements of digits in space. Journal of neurophysiology, 122(4), 1578-1597.
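To make the alternative concrete: in the Smeets and Brenner (1999) account, each digit follows a minimum-jerk trajectory to its own contact point, ending with a deceleration (the "approach parameter") perpendicular to the object surface, and grip aperture is simply the distance between the digits. The R sketch below models only the one-dimensional grip-axis component, with illustrative parameter values; sign and scaling conventions may differ slightly from the original paper.

```r
s  <- seq(0, 1, length.out = 200)   # normalized movement time
X  <- 10*s^3 - 15*s^4 + 6*s^5       # minimum-jerk position profile (0 -> 1)
P  <- s^3 * (1 - s)^2 / 2           # profile with zero final velocity and
                                    # unit final acceleration
ap <- 1000                          # approach parameter (mm), illustrative

aperture <- function(size) {
  # Digits start pinched together at 0; contact points sit at -size/2 and
  # +size/2. The ap term decelerates each digit outward, so it approaches
  # its contact point head-on and the aperture overshoots before closing.
  thumb  <- -(size / 2) * X - ap * P
  finger <-  (size / 2) * X + ap * P
  finger - thumb
}

sapply(c(30, 40, 50), function(sz) max(aperture(sz)))
# Peak apertures exceed each object size and increase with it.
```

Because the contact points sit at ±size/2, peak aperture in this sketch scales with object size (with a slope below 1 plus a constant overshoot) even though no explicit size estimate enters the model.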

Minor Comments

• Page 8: “of ∼10 mm in the horizontal or vertical direction from the fixation point, the trial was discarded and repeated later in the condition.” Could the authors specify this in degrees of visual angle?

• Page 8: “participants underwent a training session of ten trials before each condition.” Could the authors clarify whether participants trained for 10 trials on all conditions (i.e. 50 practice trials in total)?

• Page 9: It would be useful to see equations describing the Bayesian linear mixed-effects models fit to the data (a generic sketch of one possible specification follows this list).

• Page 12: “The peak velocity was credibly higher in... V compared to H (Figure 3B).” I think this is incorrect: the 95% HDIs for the H-V comparison overlap with zero.

• Page 12: “A tendency of observing a credibly smaller peak... was also observed” This sentence is oddly worded and does not make much sense.

• Figures 3 and 4. The main text does not follow the order of the figure panels and forces the reader to jump up and down between Figures 3 and 4. I would suggest the authors reorganize these two figures. They could either group both figures into one, or split the figures into Figure 3 describing results for Central Vision, and Figure 4 describing results for Peripheral vision.

• Figure 5B,C and Figure 8B,C: Some measure of variability around the trajectory data in panels B and C might be useful (e.g., within-subject confidence intervals shown as shaded regions around the mean trajectories; a sketch of one such computation follows this list). I also do not fully understand what figure panels 5B and 8B are meant to be showing.

• Page 20: “removing the haptic size cue dramatically reduced the scaling of the grip aperture, which scaled less than when both the size and position haptic cues were available (Figure 6D).” I don't feel like the results in figure 6D look all that dramatic... Although I agree that the effect is clear in figures 7E,F.

• Discussion: the authors might find it interesting to look at Maiello et al (2018) for another example of preserved sensorimotor coupling between eye and hand movements under degraded visual conditions.

Maiello, G., Kwon, M., & Bex, P. J. (2018). Three-dimensional binocular eye-hand coordination in normal vision and with simulated visual impairment. Experimental brain research, 236(3), 691-709.
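Regarding the request for model equations above, one generic Bayesian linear mixed-effects specification of the kind the authors describe could be written as a brms model in R (Bürkner, 2017; Stan backend, Carpenter et al., 2017). Everything here is a hypothetical placeholder (data, variable names, and model structure), not the authors' actual code.

```r
library(brms)

# Placeholder data, purely illustrative: 15 participants x 3 conditions x
# 3 object sizes, with maximum grip aperture (mga) in mm.
set.seed(2)
d <- expand.grid(id = factor(1:15),
                 condition = c("V", "H", "VH"),
                 size = c(30, 40, 50))
d$mga <- 20 + 0.8 * d$size + rnorm(nrow(d), sd = 5)

# Hypothetical model: mga as a function of condition, size, and their
# interaction, with by-participant random intercepts and slopes.
fit <- brm(mga ~ condition * size + (condition * size | id),
           data = d, family = gaussian(),
           chains = 4, iter = 2000, cores = 4)
summary(fit)
```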
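For the within-subject confidence intervals suggested above, one standard recipe is the Cousineau-Morey method: center each participant's data on the grand mean before computing intervals, then apply a correction for the number of within-subject conditions. A sketch, again assuming a hypothetical data layout:

```r
library(dplyr)

# df: one row per participant (id), condition, and normalized time point,
# with the mean aperture at that point in 'ap' (hypothetical column names).
within_subject_ci <- function(df, n_cond) {
  grand <- mean(df$ap)
  df %>%
    group_by(id) %>%
    mutate(ap_c = ap - mean(ap) + grand) %>%   # remove between-subject spread
    group_by(condition, time) %>%
    summarise(m  = mean(ap_c),
              # Morey (2008) correction for n_cond within-subject conditions
              se = sd(ap_c) / sqrt(n()) * sqrt(n_cond / (n_cond - 1)),
              lo = m - 1.96 * se,              # lower edge of shaded band
              hi = m + 1.96 * se,              # upper edge of shaded band
              .groups = "drop")
}
```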

Reviewer 2:

1. In the pVH condition in Experiment 2 (as well as in the H, VH and pVH conditions in Experiment 1), participants placed the index finger and thumb of their left hand on the front and back surfaces of the stimuli. Were participants given any specific instructions about the placement of the left-hand fingers along the vertical axis (both in Experiment 1 and 2)? Can you say anything about whether participants would, in general, horizontally align the fingers of their right hand with those of the left hand when grasping the stimulus? Does it matter that in the pVHP condition the left hand was placed below the area where the stimuli were grasped? E.g., if one had used the stimuli of the pVH condition and had instructed participants to place their left hand towards the bottom of the stimuli, then that could have provided some idea about the role of this displacement. Could you maybe say something about this?

2. It seems to me that the results for objects placed in the periphery could be explained by those theories of grasping which propose that size estimates are not necessary, since the fingers move toward the endpoints of objects (e.g., Smeets & Brenner, Motor Control, 3(3), 1999). In particular, the findings for pVHP seem to be what one would expect based on those theories, because of the conflicting haptic and visual signals about the endpoints. Could you elaborate on why, based on the data presented here, you think that size estimates are required, and point out how the results in the manuscript relate to previous results?

Keywords

  • grasping
  • haptics
  • multisensory integration
  • peripheral vision
  • visual uncertainty
