Abstract
Limb apraxia (LA) is a high-order motor disorder characterized by the inability to reproduce transitive actions on command or after observation. Studies demonstrate that action observation and action execution activate the same networks in the human brain, and provide an onlooker’s motor system with appropriate cognitive, motor and sensory-motor cues for flexibly implementing action sequences and gestures. Tellingly, the temporal dynamics of action monitoring have never been explored in people suffering from LA. To fill this gap, we studied the electro-cortical signatures of error observation in human participants suffering from acquired left-brain lesions with (LA+) and without (LA–) LA, and in a group of healthy controls (H). EEG was acquired while participants observed, from a first-person perspective (1PP), an avatar performing correct or incorrect reach-to-grasp actions toward a glass in an immersive virtual environment. Typical EEG signatures of error observation in the time (early error positivity; Pe) and time-frequency (theta band power) domains were reduced in LA+ compared with H. Connectivity analyses showed that LA+ exhibited decreased theta phase synchronization of both the frontoparietal and frontofrontal networks compared with H and LA–. Moreover, linear regression analysis revealed that the severity of LA [test of upper LA (TULIA) scores] was predicted by mid-frontal error-related theta activity, suggesting a link between error monitoring capacity and apraxic phenotypes. These results provide novel neurophysiological evidence of altered dynamics of action monitoring in individuals with LA and shed light on the performance monitoring changes occurring in this disorder.
Significance Statement
Combining EEG and immersive virtual reality, we provide novel neurophysiological evidence of altered performance monitoring in apraxic patients. We show that the observation of incorrect actions performed by an avatar seen from a first-person perspective (1PP) elicited reduced electrocortical markers of error detection in apraxic patients. Tellingly, apraxia severity predicted the reduction of mid-frontal theta activity, regardless of brain lesion volume and patients’ cognitive capacity. The results shed new light on the possible neurophysiological signatures of the link between limb apraxia (LA) and performance monitoring. Moreover, our EEG-virtual reality paradigm may provide a new tool for investigating the brain dynamics of action error monitoring also in brain-damaged patients with motor limitations.
Introduction
Limb apraxia (LA) is a disorder of higher-order motor control mainly associated with damage to left frontoparietal brain networks (Buxbaum et al., 2014; Buxbaum and Randerath, 2018; Bizzozero et al., 2000). LA is characterized by a complex combination of perceptual (Halsband et al., 2001), motor (Candidi et al., 2017), and cognitive (Rothi et al., 1985) deficits whose interaction ultimately affects the implementation of transitive and intransitive movements on verbal command or imitation. According to the “affordance competition hypothesis” (Cisek, 2007), potential actions compete against each other, and information is collected to bias and solve this competition until a response is selected. Competition arises from mere sensory exposure to an object, whose physical properties automatically trigger conflicting action schemas for “affording” the object itself (Cisek, 2007; Cisek and Kalaska, 2010), and may lead to performance errors if the conflict is not resolved (Cooper, 2007). Tellingly, apraxic patients display deficits not only in action execution but also in action understanding and simulation (Rothi et al., 1985; Cubelli et al., 2000), in mental action imagery tasks (Sirigu et al., 1999), in generating internal models for action execution (Buxbaum et al., 2005), and in judging the correctness of seen or heard actions (Pazzaglia et al., 2008a, b; Aglioti and Pazzaglia, 2010, 2011; Canzano et al., 2014). Moreover, deficits in action monitoring were positively correlated with difficulties in action execution (Pazzaglia et al., 2008a), thus corroborating the hypothesis of a direct matching between action perception and execution.
In line with the affordance competition hypothesis, studies suggest that errors in apraxia could be due to deficient resolution of competition during action selection (Jax and Buxbaum, 2013; Buxbaum et al., 2014; Watson and Buxbaum, 2015) or to a failure to resolve affordance competition (Rounis and Humphreys, 2015). In keeping with Bekkering et al. (2000), when an action is observed, it is the action goal that is observed, not just a movement. Action observation and execution are bidirectionally linked, so that motor skills may improve as an effect of merely seeing others moving (Ertelt et al., 2007; Cross et al., 2009; Ernst and Steinhauser, 2017). Moreover, performing specific actions improves the ability to perceive them (Casile and Giese, 2004; Lepage and Théoret, 2006). Monitoring actions through observation implies the evaluation of their correctness. EEG studies demonstrate that observation of errors in one’s own and others’ actions elicits specific markers over the mid-frontal cortex, namely, (1) the observer error-related negativity (oERN) and the observer error positivity (oPe; van Schie et al., 2004; de Bruijn et al., 2007; Ridderinkhof et al., 2009), and (2) increased power in the theta band (4–8 Hz; Cavanagh et al., 2009, 2010, 2012). These patterns of electro-cortical brain activity are likely associated with conflict processing and resolution (Cavanagh and Frank, 2014). Conflict arises when a unique (correct) action must be selected among a set of competing (incorrect) actions, and serves as an alarm signal conveyed from the mid-frontal to the lateral prefrontal and posterior brain areas to increase cognitive control over actions (Steinhauser and Yeung, 2010; Cohen and Cavanagh, 2011; van Driel et al., 2012; Yoshida et al., 2012; Boldt and Yeung, 2015).
The present study aims to investigate the temporal dynamics of action monitoring in patients suffering from LA by linking the “affordance competition theory” and the “conflict monitoring model.” Crucially, both theories consider conflict processing as a fundamental mechanism by which the performance monitoring system exerts motor and cognitive control over actions. In view of the affordance-competition hypothesis, we predict that patients with LA tend to experience high levels of conflict during goal-directed action monitoring, which arises from the competition between correct and incorrect action schemas. This may lead to an exaggerated burden of unresolved conflict that impairs the operation of the action monitoring system. Capitalizing on previous similar reports (Pavone et al., 2016; Pezzetta et al., 2018; Spinelli et al., 2018), we recorded EEG in left-brain damaged individuals with and without LA and in a control group who observed through immersive virtual-reality an avatar performing correct or incorrect actions. In line with previous studies on error monitoring, awareness, and gesture recognition in patients with apraxia (Canzano et al., 2014, 2016; Candidi et al., 2017; Scandola et al., 2021), we expected an impairment in patients with LA when the error monitoring system is called into play, that is when a mismatch between predicted and observed action goal occurs. Acquiring EEG signatures of performance monitoring during the observation of correct and incorrect actions provided novel information on the integrity of the error detection system in LA.
Materials and Methods
Participants
Twelve right-handed, left-brain damaged patients were recruited from the local Neuro-Rehabilitation Unit between March and August 2016. They had suffered focal vascular lesions (patients with traumatic brain injuries were not included) between 292 and 1095 d before testing (LA+: mean = 580.33, SD = 252.48; LA–: mean = 687.17, SD = 207.08); thus, they were tested during the chronic phase (Karnath and Rennig, 2017). A primary inclusion criterion was the ability to perform the task (EEG-VR session) and to understand the task instructions. All participants signed an informed consent for participation. Based on a neuropsychological assessment (Table 1) of their symptoms, participants were divided into two groups: (1) patients with (LA+; n = 6, 4 males and 2 females) and (2) without (LA–; n = 6, 3 males and 3 females) LA. The two groups were matched for age (mean age ± SD: LA+ = 63.1 ± 14.4 years, LA– = 58.5 ± 14.2 years) and education (LA–: 12 ± 2.0 years; LA+: 13.8 ± 3.4 years). An age- and gender-matched (mean age ± SD: 62.4 ± 11.2 years; 6 males, 4 females) sample of 10 healthy participants (H) was also tested. The study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the local Ethics Committee.
To characterize the patients’ cognitive profile, standard tests and batteries for general neuropsychological assessment were administered (for details, see Table 1), including general cognitive abilities (Raven et al., 1988), executive functions (nonverbal subtests of the frontal assessment battery; Appollonio et al., 2005), and spatial attention (line bisection; Wilson et al., 1987). The verbal comprehension and denomination subtests of the Aachener aphasia test (Luzzatti et al., 1996) were used to assess language comprehension deficits. Given that the experimental task implied the mere observation of correct versus erroneous upper limb actions, the assessment of apraxia focused on tests in which actions implied the use of the upper limbs, namely, the ideomotor [test of upper LA (TULIA); Vanbellingen et al., 2010] and ideational apraxia tests (De Renzi and Lucchelli, 1988). The two groups did not differ in ideational apraxia (see Table 1), suggesting that semantic knowledge concerning actions was preserved. While LA+ presented difficulty in understanding words compared with LA–, no such effect was found for sentence comprehension. This result, together with the nature of the task, suggests that comprehension did not play a major role in the experimental effects.
Analysis of brain lesions was conducted for LA– and LA+ by means of the MRIcron software (https://www.nitrc.org/projects/mricron; Rorden and Brett, 2000). The MRI/CT scans available for all the patients were mapped by drawing on the standard T1-weighted MRI template (ICBM152) of the Montreal Neurologic Institute (MNI) coordinate system, approximately oriented to match the Talairach space (Talairach, 1988). The standard template (size: 181 × 217 × 181 mm, voxel resolution: 1 mm³) was rotated on the three planes to match each patient’s MRI/CT scan orientation as closely as possible. Then, two experienced clinicians (who were blind as to which patient each scan belonged to) traced each lesion manually on the axial slices of the rotated template, while another one checked all the drawings in a double-blind procedure (de Haan and Karnath, 2018). For each patient the outcome was a map of the damaged areas, with each voxel labeled as 0 (intact) or 1 (lesioned). All the lesion maps were rotated back to the canonical orientation to align them to the standard stereotaxic MNI space (in 2 × 2 × 2 mm voxels). After that, maps were filtered with a custom mask based on the ICBM152 template to exclude lesion voxels outside the white and gray matter brain tissues. Each patient’s lesion was superimposed onto T1 templates to calculate the number of lesioned voxels in various cerebral areas, and the center of mass of each damaged area. This was then overlapped onto the automatic anatomical labeling (AAL) template (Tzourio-Mazoyer et al., 2002) to provide information on the gray matter, and onto the JHU white-matter atlas (Susumu Mori, Laboratory of Brain Anatomical MRI, Johns Hopkins University) for the white matter. LA+ and LA– lesion overlap and lesion subtraction maps were computed to highlight the patients’ lesional patterns.
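On binary lesion masks (0 = intact, 1 = lesioned), the group overlap and subtraction maps described above reduce to simple voxel-wise operations. The sketch below (Python/NumPy, an illustration only; the actual maps were produced with MRIcron) assumes each patient contributes one flattened 0/1 array:

```python
import numpy as np

def lesion_overlap(masks):
    """Voxel-wise count of patients lesioned at each voxel (masks: list of 0/1 arrays)."""
    return np.sum(np.stack(masks), axis=0)

def lesion_subtraction(masks_a, masks_b):
    """Subtraction map: percentage overlap of group A minus group B, per voxel.

    Positive values mark voxels lesioned more frequently in group A (e.g., LA+)
    than in group B (e.g., LA-)."""
    pct_a = 100.0 * lesion_overlap(masks_a) / len(masks_a)
    pct_b = 100.0 * lesion_overlap(masks_b) / len(masks_b)
    return pct_a - pct_b
```

For example, with two patients per group, a voxel lesioned in both LA+ patients but in neither LA– patient would score +100 in the subtraction map.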
For each region, the MNI coordinates of the center of mass along with the number (n) and percentage (%) of clustering voxels are provided for the LA+, LA–, and subtraction lesion maps (Tables 4, 5). Analyses of tract disconnection probability were also conducted by mapping the lesion of each patient onto tractography reconstructions of white matter pathways obtained from a group of healthy controls (Rojkova et al., 2016). We quantified the severity of the disconnection by measuring the probability of disconnection of specific tracts (Thiebaut de Schotten et al., 2014) using the Tractotron software, part of the BCBtoolkit (Foulon et al., 2018; http://www.toolkit.bcblab.com; Table 6). We computed t test comparisons with false discovery rate correction to verify significant differences between groups.
Apparatus and virtual environment
Participants were seated in a four-screen (3 × 3 × 2.5 m) cave automatic virtual environment (CAVE) system (Cruz-Neira et al., 1993; Fig. 2A). 3D images were displayed alternately to each eye at a refresh rate of 60 Hz by Nvidia stereo glasses, which were in turn interfaced with an Intersense 900 ultrasonic system (Thales Visionix; 6 degrees of freedom). The virtual scenario included a virtual room (3 × 3 × 2 m) with a virtual table, and an avatar with both its right (R) and left (L) upper limbs on the table (Fig. 2B). Atop the table was a yellow support with the virtual glass placed on it. The virtual scenario and the avatar were drawn on a 1:1 scale in Maya 2011 and 3ds Max 2011 (Autodesk, Inc), respectively, and rendered by XVR 2.1 (Tecchia et al., 2010). The avatar’s kinematics were implemented using the HALCA library (Gillies and Spanlang, 2010). Marker events were sent to the EEG by means of a custom-made circuit governed by a digital input/output device (PoKeys 55; PoLabs; https://www.poscope.com).
Experimental design
Expanding on previous reports (Pavone et al., 2016; Pezzetta et al., 2018; Spinelli et al., 2018), the main task used in this study implied that participants observed correct or incorrect reach-to-grasp actions toward a glass, performed by an avatar seen from a first-person perspective (1PP). Participants were immersed in the virtual scenario and their physical body was aligned with the virtual body to maximize embodiment. The participants’ real body was occluded by a black cloth. Each trial started with an intertrial interval (ITI) lasting 1250 ms (±250 ms) in which both of the avatar’s upper limbs rested on the table. After a synthesized voice instructed the avatar to grasp the glass (2000 ms), participants observed one of the two avatar’s limbs (R or L, depending on the experimental block) reaching and grasping the virtual glass (Fig. 1B). Each reach-to-grasp action lasted 1000 ms, such that the first 700 ms were identical for all actions, and the last 300 ms defined a correct (C) or incorrect (I) outcome. While correct actions resulted in a successful grasping of the virtual glass, incorrect actions depicted a virtual limb directed five virtual centimeters to the right of the virtual glass (or five virtual centimeters to the left in the case of left arm movements). A total of 2000 ms elapsed after the completion of each action, before the virtual limb returned to its starting position. The whole experiment comprised 120 trials, split into two blocks of 60 trials, each containing exclusively R or L avatar actions. The order of blocks (R and L) was counterbalanced within participants for each group (LA+, LA–, and H). Correct (n = 36) and incorrect (n = 24) actions were randomly presented across the trial list of each block.
Subjective ratings of virtual embodiment (i.e., sense of ownership and vicarious agency) were collected in 25% of trials (i.e., 30 trials). Participants were asked to separately rate on two visual analog scales (VASs): (1) how strongly the virtual arm was felt as part of their body (feeling of ownership; Ow), and (2) how much they felt in control of the virtual arm (feeling of vicarious agency; Ag). Ratings were acquired at the end of the avatar’s actions by asking participants to quantify the strength of their feelings by positioning a virtual stick on the VAS, ranging from 0 to 100, where 0 indicated “no feeling” and 100 “highest feeling.” The different VASs were sequentially displayed on a black box appearing in front of the virtual glass and disappearing immediately after an answer was provided. Each participant provided a total of 15 self-reports of Ow and Ag in each block, nine for C and six for I actions. The order of Ow and Ag self-reports was counterbalanced across trials.
EEG recording and analysis
EEG data were acquired by means of tin electrodes embedded in a fabric cap (Electro-Cap International), according to the 10–10 system, from 60 scalp sites (Spinelli et al., 2018). The electrode on the right earlobe served as online reference, while the ground electrode was placed on AFz. A bipolar electro-oculogram was recorded from two electrodes placed on the lateral ends of the bicanthal plane. The signal was recorded by a Neuroscan SynAmpsRT (Compumedics, Ltd) at 1000 Hz, and filtered with a hardware bandpass of 0.05–200 Hz. All impedances were kept below 5 kΩ. EEG traces were processed using the FieldTrip toolbox (Oostenveld et al., 2011; release: 20170607) in MATLAB R2016a (The MathWorks).
For each subject, continuous EEG signals were filtered offline with a 0.5-Hz high-pass FIR filter (one-pass, zero-phase) and locked to the onset of the avatar’s arm-path deviation (i.e., 300 ms before action offset). This time point corresponded to the latest timeframe in which observed grasping trajectories were still identical between correct and incorrect actions (Spinelli et al., 2018). Epochs of 6 s (±3 s around this trigger) were extracted and sorted according to the ACCURACY of the observed avatar’s action (two levels: C and I), and to the avatar’s LIMB that was observed (two levels: R and L). Blinks and oculomotor artifacts were removed by independent component analysis (ICA). On average, 3.6 components (range: 1–7) referring to blink/oculomotor artifacts were discarded. Trials exhibiting residual artifacts were discarded by means of (1) a summary plot of three metrics (variance, z score, kurtosis) of all channels, as implemented in FieldTrip, and (2) a further visual inspection of all segments and all channels. Details of the remaining trials are shown in Table 2. The obtained artifact-free time series were then re-referenced to the common average reference and baseline corrected with respect to a time window of 200 ms before the trigger (i.e., the onset of the avatar’s arm-path deviation). Time-domain (event-related potentials, ERPs), time-frequency (TF) domain, and phase connectivity analyses were conducted.
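The epoching, common-average re-referencing, and pre-trigger baseline correction steps can be sketched as follows. This is a simplified Python/NumPy stand-in for the FieldTrip pipeline (filtering and ICA omitted); the function names and the hard-coded 1000-Hz sampling rate are assumptions for illustration:

```python
import numpy as np

FS = 1000  # sampling rate in Hz, as in the recording described above

def epoch(data, triggers, pre=3.0, post=3.0, fs=FS):
    """Cut continuous (n_chan, n_samples) data into epochs of +/-3 s around each trigger sample."""
    pre_s, post_s = int(pre * fs), int(post * fs)
    return np.stack([data[:, t - pre_s:t + post_s] for t in triggers])

def reref_and_baseline(epochs, fs=FS, pre=3.0, bl=0.2):
    """Common-average reference, then subtract the mean of the 200 ms preceding the trigger."""
    epochs = epochs - epochs.mean(axis=1, keepdims=True)  # average across channels per sample
    t0 = int(pre * fs)                                    # trigger sample within the epoch
    base = epochs[:, :, t0 - int(bl * fs):t0].mean(axis=2, keepdims=True)
    return epochs - base
```

A DC offset common to all channels is removed by the re-referencing step, while any channel-specific offset is removed by the baseline subtraction.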
For the ERP analysis, the across-trials average for each condition [LIMB (R, L) × ACCURACY (C, I)] was obtained in the time range of −200–800 ms. This time window was considered for statistical analyses. TF analysis was conducted by means of the wavelet method. The width of each wavelet was four cycles (i.e., SD = 4/(2πf)). Frequency resolution was 1 Hz (range: 4–30 Hz). The length of the time window for computation was 2.6 s (±1.3 s around the trigger). Time resolution was 50 ms. TF spectra were corrected to the relative signal change (% change) of the event period (from 0 to 1000 ms) with respect to the baseline (from −200 to 0 ms). The average across trials for each condition was calculated in the time window from −200 to 1000 ms. This time window was used for statistical analyses. Functional connectivity analysis was conducted by computing the trial-by-trial phase-locking value (PLV; Lachaux et al., 1999) for all channel combinations. Imaginary coherence was considered to compensate for volume conduction issues (Vinck et al., 2011). Oscillatory phase synchronization between channels is considered a connectivity measure that reflects the exchange of information between neuronal populations (Sauseng and Klimesch, 2008).
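As an illustration of the TF step, single-trial power estimation with four-cycle complex Morlet wavelets and the percent-change baseline normalization described above might look like this in Python/NumPy (a minimal sketch under the stated parameters, not the FieldTrip implementation):

```python
import numpy as np

def morlet_power(signal, freqs, fs, n_cycles=4):
    """Single-trial power via complex Morlet wavelets (temporal SD = n_cycles / (2*pi*f))."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sd_t = n_cycles / (2 * np.pi * f)
        t = np.arange(-4 * sd_t, 4 * sd_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sd_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet)**2))  # unit energy across frequencies
        power[i] = np.abs(np.convolve(signal, wavelet, mode='same'))**2
    return power

def percent_change(power, base_idx, event_idx):
    """Relative signal change (%) of the event window w.r.t. the baseline mean, per frequency."""
    base = power[:, base_idx].mean(axis=1, keepdims=True)
    return 100.0 * (power[:, event_idx] - base) / base
```

With four cycles, the wavelet trades frequency resolution for the temporal precision needed to localize the 300-ms error window.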
Statistical analysis
In order to statistically estimate time- and time-frequency differences between groups (LA+ vs LA– vs H) and within conditions (LIMB and ACCURACY) at each electrode, a nonparametric Monte Carlo permutation analysis was conducted (1000 repetitions). First, a permutation distribution of the significance probabilities for dependent-samples t tests between R versus L was calculated separately for each group. Since no significant results were obtained (all p > 0.05), voltage/power values of the two conditions (R and L) were averaged. On the obtained time series, dependent-samples t tests were conducted to estimate the differences between C versus I separately for each group, using the nonparametric cluster-based permutation analysis implemented in FieldTrip (cluster-α = 0.05). Contrasts between groups were computed by means of three independent-samples t tests (H vs LA+, H vs LA–, LA– vs LA+) using the voltage/power difference between incorrect and correct conditions (I minus C). To correct for multiple comparisons, a cluster-based correction was applied to all tests as implemented in FieldTrip (cluster-α = 0.05; Maris and Oostenveld, 2007).
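The logic of the cluster-based Monte Carlo permutation test can be sketched as follows, simplified to one channel, the within-group C versus I contrast, and positive clusters only; this is an illustration of the principle, not the FieldTrip implementation:

```python
import numpy as np

def paired_t(diff):
    """t statistic against zero, per time point, from a (n_subj, n_time) difference array."""
    n = diff.shape[0]
    return diff.mean(0) / (diff.std(0, ddof=1) / np.sqrt(n))

def cluster_masses(tvals, thresh):
    """Sum of t values within each run of temporally adjacent supra-threshold points."""
    masses, current = [], 0.0
    for t in tvals:
        if t > thresh:
            current += t
        elif current:
            masses.append(current)
            current = 0.0
    if current:
        masses.append(current)
    return masses

def cluster_perm_test(diff, thresh=2.0, n_perm=1000, seed=0):
    """Monte Carlo p value of the largest observed cluster under random sign flips
    (sign flipping of each subject's difference is the paired-samples permutation scheme)."""
    rng = np.random.default_rng(seed)
    observed = max(cluster_masses(paired_t(diff), thresh), default=0.0)
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1, 1], size=(diff.shape[0], 1))
        null[i] = max(cluster_masses(paired_t(diff * flips), thresh), default=0.0)
    return observed, (null >= observed).mean()
```

Because only the maximum cluster mass per permutation enters the null distribution, the test controls the family-wise error rate across time points without assuming a parametric form.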
As for the ERP and TF analyses, PLV values of the LIMB conditions (L and R) were averaged, as no difference was found (p > 0.05). Transient theta phase activity from mid-frontal to lateral prefrontal and parietooccipital brain areas has been shown to reflect a functional mechanism increasing post-error cognitive control and sensory attention, respectively (Cavanagh et al., 2009; Cohen et al., 2009; Cohen and Cavanagh, 2011). Thus, PLVs were calculated for all channel combinations and all frequencies in the time window from −200 to 1000 ms. Then, connectivity measures between mid-frontal (electrodes FC1, FCz, FC2, C1, Cz, C2), lateral prefrontal (electrodes F3, F5, F4, F6), and parietooccipital (electrodes PO7, PO3, POz, PO4, PO8, O1, Oz, and O2) scalp regions were extracted for each participant in three separate time windows, i.e., 200–400, 400–600, and 600–800 ms. Dependent-samples t tests were conducted to test for differences between conditions (C vs I). Differences between groups (LA+ vs LA– vs H) were estimated by means of a between-subjects ANOVA, using group (LA+ vs LA– vs H) as the main factor and the difference between incorrect and correct conditions (I minus C) as the dependent measure.
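The PLV itself (Lachaux et al., 1999) is the modulus of the trial-averaged unit phasor of the instantaneous phase difference between two channels. A minimal sketch, assuming instantaneous phases have already been extracted (e.g., from the wavelet decomposition):

```python
import numpy as np

def plv(phase_a, phase_b):
    """Phase-locking value per time point from two (n_trials, n_time) phase arrays:
    |mean over trials of exp(i * (phi_a - phi_b))|; 1 = perfect locking, ~0 = no locking."""
    return np.abs(np.exp(1j * (phase_a - phase_b)).mean(axis=0))
```

To attenuate spurious zero-lag coupling from volume conduction, one can discard the real part of the averaged phasor before taking the modulus, in the spirit of the imaginary-coherence approach (Vinck et al., 2011); the plain PLV above is shown only for clarity.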
Finally, the relation between signs and symptoms of LA and brain markers of error monitoring was investigated by means of a multiple linear regression model predicting error-related band power changes from LA phenotype (LA+, LA–), TULIA scores (normalized in z scores), total brain lesion volume (normalized in z scores), and frontal assessment battery (FAB) scores (normalized in z scores); i.e., Yi = β0 + Σi βiXi + interaction terms + εi. Data of all the patients (LA+ and LA–) were included in the linear model, thus allowing us to test which of the main predictors, or their interaction terms, predicted error-related EEG dynamics. The brain lesion volume and the FAB scores were included in the regression model to control for two clinically relevant indices that could account for variance between the patient groups, namely, any structural difference between patients’ brains and any difference in executive abilities. In keeping with the time-frequency analyses, power spectra in the R and L conditions were averaged, and the difference between incorrect minus correct conditions was obtained. From these values, β coefficients for the main effects and the interaction terms, and their p-values, were calculated for each electrode and each time-frequency point (time: 500–1000 ms; frequency: 4–30 Hz) across the whole patient sample.
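For each electrode and time-frequency point, this amounts to an ordinary least-squares fit. The Python/NumPy sketch below illustrates the fit for one TF point; the variable names (tulia, lesion_vol, fab) and the restriction to a single TULIA × lesion-volume interaction are simplifications of the full model, assumed here for clarity:

```python
import numpy as np

def fit_power_model(theta_power, tulia, lesion_vol, fab):
    """OLS fit of error-related theta power (I minus C) on z-scored TULIA, lesion
    volume, and FAB scores, plus one TULIA x lesion-volume interaction term.
    Returns the beta vector [intercept, TULIA, lesion, FAB, interaction]."""
    z = lambda x: (x - x.mean()) / x.std(ddof=1)       # z-score each predictor
    zt, zl, zf = z(tulia), z(lesion_vol), z(fab)
    X = np.column_stack([np.ones_like(zt), zt, zl, zf, zt * zl])
    beta, *_ = np.linalg.lstsq(X, theta_power, rcond=None)
    return beta
```

Repeating this fit over electrodes and TF points yields a map of β coefficients, from which the electrode/time-frequency clusters showing a reliable TULIA effect can be identified.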
Results
Time-domain analysis
Permutation tests resulting from the contrast between incorrect versus correct conditions revealed significant positive clusters only for H (Fig. 3A). In particular, a significant voltage increase was found in incorrect trials in the 430- to 550-ms time window, at a mid-frontal (t-max: 2.74, p < 0.001, electrode FCz; Fig. 3B) and an occipital (t-max: 3.27, p < 0.001, electrode Oz) cluster. No negative cluster was found in this analysis. The independent-samples t tests conducted between groups (LA+ vs LA–, LA+ vs H, LA– vs H; Fig. 3C) revealed positive clusters only for the contrast between H and LA+. In this contrast, H exhibited increased voltage in the time window from 420 to 560 ms at mid-frontal (t-max: 2.36, p < 0.001; electrode FC3) and parietooccipital (t-max: 3.01, p < 0.001, electrode Oz) clusters.
Time-frequency domain analysis
As for ERPs, the contrast between incorrect versus correct conditions revealed significant clusters only for the H group. More specifically, a significant increase of theta-band (4–8 Hz) was found in incorrect trials in the time range running from 300 to 650 ms at a mid-frontal cluster (t-max: 4.78, p < 0.001, electrode FCz; Fig. 4A). The independent-samples t tests between groups (LA+ vs LA– vs H; Fig. 4B) revealed positive clusters only for the contrast H versus LA+, accounted for by the fact that H exhibited increased theta power in the time range 420–575 ms at mid-frontal (t-max: 2.39, p < 0.001; electrode FC1) and parietooccipital (t-max: 2.74, p < 0.001, electrode CP1) clusters.
Connectivity analysis
Mid-frontal to lateral-frontal connectivity
The dependent-samples t tests conducted between incorrect versus correct conditions revealed significant effects only for H (t = 2.18, p < 0.016) in the time window from 400 to 600 ms. The effect was explained by an increased theta phase connectivity for the observation of incorrect actions (Fig. 5, left panel). No further significant effect was found in any other time window. The significant effect of the between-subjects ANOVA (F(2,43) = 5.43, p < 0.01) was explained by a lower theta phase connectivity in LA+ (mean: –0.02, range: –0.01 to 0.05) with respect to both LA– (mean: 0.04, range: –0.1 to 0.16; p < 0.05) and H (mean: 0.05, range: –0.05 to 0.26; p < 0.001) in the same time range (i.e., 400–600 ms). No further effect was found.
Mid-frontal to parietooccipital connectivity
The dependent-samples t tests computed between incorrect versus correct conditions revealed multiple significant effects. An increased error-related theta phase synchronization was found for both LA– (t = 2.53, p < 0.02) and H (t = 2.68, p < 0.01) in the time window from 200 to 400 ms. This effect remained significant in the subsequent time window (i.e., 400–600 ms) only for H (t = 2.64, p < 0.02). No significant effect was found in the time window from 600 to 800 ms. The significant effect of the between-subjects ANOVA (F(2,43) = 3.91, p < 0.02) was explained by a decreased theta phase connectivity in LA+ (mean: –0.01, range: –0.01 to 0.03) with respect to both LA– (mean: 0.06, range: –0.12 to 0.15; p < 0.03) and H (mean: 0.04, range: –0.07 to 0.20; p < 0.05) from 200 to 400 ms. No further significant effect was found.
Predictive estimates of TULIA scores on frontal theta power
The linear regression model revealed a significant main effect of the TULIA test (F(12,5) = 3.2, p < 0.05, r² = 0.72, r² adjusted = 0.67) over a frontocentral cluster of electrodes (FC1, C1). More specifically, we found a significant direct relation (β = 0.85, p < 0.01) between theta power and TULIA scores in the time range 200–400 ms (Fig. 6A). No other main effect or interaction was found for the other predictors (i.e., brain lesion volume, days after stroke, FAB scores, and word comprehension; Fig. 6B) within the same time window at that electrode site.
Descriptive statistics show that S2 errors were the most prevalent (mean = 12; SD = 4.69), followed by S0 (mean = 4.83; SD = 3.25), S1 (mean = 4.33; SD = 1.03), and S3 (mean = 1; SD = 9.89). S2 errors reflect the difficulty of apraxic patients in correcting the trajectory of a gesture, i.e., errors committed without correction. S0 errors refer to severe problems in executing the movement, and S1 errors index problems in both the trajectory and the semantic content of the movement. S3 errors (the least frequent) include the correction of ongoing movements.
Tract disconnection probability
Tract disconnection probabilities (mean, standard deviation, and number of patients in each group showing >0.5 probability of disconnection) for both LA+ and LA– are shown in Table 6; t test comparisons with false discovery rate correction for multiple comparisons did not show significant differences between groups.
Subjective reports of virtual embodiment
Table 3 reports average ownership and vicarious agency ratings in LA+, LA–, and H. Individual ratings were entered in a mixed-design ANOVA with GROUP (LA+, LA–, H) as between-subjects factor, and EMBODIMENT (two levels: Ow vs Ag), ACCURACY (two levels: C vs I), and LIMB (two levels: R vs L) as within-subjects factors. The Newman–Keuls post hoc test was adopted for multiple comparisons. The ANOVA resulted in a significant main effect of ACCURACY (F(1,19) = 7.6, p < 0.02, η² = 0.28), explained by overall higher embodiment values for C (mean ± SD = 0.61 ± 0.25) than for I (mean ± SD = 0.56 ± 0.25) actions. No further significant main effects or interactions were found (all ps > 0.15). Moreover, subjective scores of embodiment did not correlate with any of the error-related EEG signals, namely, oPe amplitude and theta-band activity (for Ow: LA+ = all ps > 0.2, LA– = all ps > 0.05, H = all ps > 0.07; for Ag: LA+ = all ps > 0.5, LA– = all ps > 0.1, H = all ps > 0.07).
Discussion
We explored the electrocortical dynamics of error observation in left brain-damaged people with or without apraxia, and in a control group of healthy individuals (H), by combining immersive virtual reality and EEG recording. Results in the time and time-frequency domains showed that observation of erroneous actions led to a suppression of early oPe and theta activity in LA+ and LA–. In addition, LA+ showed a significant difference when compared with H that was not present when H were compared with LA–, suggesting an impairment in error processing in LA+. Moreover, LA+ exhibited aberrant theta phase synchronization between frontofrontal and frontoparietal networks with respect to both LA– and H. To the best of our knowledge, this study reports the first evidence of altered performance monitoring in patients with LA. Based on the theoretical framework of the conflict monitoring theories (Botvinick et al., 2001; Yeung et al., 2004) and of the affordance competition hypothesis (Cisek, 2007; Pezzulo and Cisek, 2016), we submit that this impairment could be driven by the LA patients’ core difficulty in selecting the appropriate action schema to implement goal-directed behaviors, and in suppressing inappropriate conflicting affordances arising from the observation of an object. Consequently, the excessive burden of unresolved conflict prevents patients from fluid action understanding and impairs the EEG dynamics that underpin appropriate performance monitoring.
The absence of the early Pe in the LA+ group when compared with H provides novel evidence in support of our hypothesis. The early Pe is a P300-like positive-going component that differs from the late Pe (Falkenstein et al., 2000) in that it peaks maximally over mid-frontal electrodes in error trials (Ullsperger et al., 2014) and originates from mid-frontal cortical sources (Van Boxtel et al., 2005). The early Pe also dissociates from the late Pe in terms of functional significance. In keeping with P300 event-related brain potential theories (Polich, 2007), the early Pe seems to resemble the activity of a task-related, frontal cognitive control mechanism associated with automatic error processing (prediction errors or mismatch), whereas the late Pe may be linked to higher-order processes, such as memory processing, affective reactions to maladaptive/infrequent stimuli, or internal model updating and potential adjustments (Falkenstein et al., 2000; Di Gregorio et al., 2018). In the present study, LA+ did not show the classical early Pe following incorrect trials; LA– did not show a difference between incorrect and correct actions. However, one can qualitatively appreciate that LA– showed a modulation in the ERP time series that is not visible in LA+; also, in the contrasts between groups, H showed a significant difference compared with LA+, but not compared with LA–. This suggests a reduced responsivity of the LA+ performance monitoring system that interferes with the resolution of the conflict generated by the competition between incorrect action outcomes and correct action schemas (Botvinick et al., 2001; Yeung et al., 2004). Interestingly, studies demonstrate that P300-like waveforms originate from phasic activity of the norepinephrine system and may underlie the learning processes responsible for subsequent motor improvement (Nieuwenhuis et al., 2005; Dayan and Yu, 2006).
Therefore, the absence of the early Pe in LA+ may index not only defective conflict processing, but also an impaired ability to implement flexible behavioral adaptation in a cascade-like sequence of neurocognitive events. Another relevant result of our study is the absence of the oERN across all subjects and experimental groups. Previous studies using virtual reality (Pavone et al., 2016; Pezzetta et al., 2018; Spinelli et al., 2018) or other methods (van Schie et al., 2004; Bates et al., 2005; Koban et al., 2010; de Bruijn and von Rhein, 2012) reported that observation of others' action errors evoked an oERN in the onlooker's brain. Here, the oERN suppression can be explained in terms of an age-dependent effect (Gehring and Knight, 2000; Nieuwenhuis et al., 2001; Mathewson et al., 2005), or in view of the novel evidence that errors can elicit an error positivity in the absence of an ERN (Di Gregorio et al., 2018; Pezzetta et al., 2021). While our results are compatible with both options, drawing firm conclusions is complicated by the original aim of this study and the characteristics of the sample. The absence of the oERN was admittedly unexpected; future work should therefore tackle this important issue with ad hoc experimental designs.
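The ERP contrast at stake here (early Pe as the incorrect-minus-correct difference wave, averaged over a mid-frontal latency window) can be sketched as follows. This is a minimal illustrative example on simulated single-channel data, not the study's actual pipeline; the electrode, the 200–320 ms window, and all toy parameters are assumptions.

```python
import numpy as np

def pe_difference_wave(epochs, labels, times, win=(0.20, 0.32)):
    """Mean-amplitude estimate of an early-Pe-like deflection at one electrode.

    epochs : (n_trials, n_times) single-trial EEG (e.g., a mid-frontal channel)
    labels : (n_trials,) array, 1 = incorrect action, 0 = correct action
    times  : (n_times,) epoch time axis in seconds
    win    : latency window for the component (an assumption here)
    """
    err_avg = epochs[labels == 1].mean(axis=0)   # ERP for incorrect actions
    cor_avg = epochs[labels == 0].mean(axis=0)   # ERP for correct actions
    diff = err_avg - cor_avg                     # difference wave
    mask = (times >= win[0]) & (times <= win[1])
    return diff, diff[mask].mean()               # full wave + mean window amplitude

# Toy data: 40 trials, 1-s epochs at 250 Hz, a positivity added to error trials
rng = np.random.default_rng(0)
times = np.linspace(-0.2, 0.8, 250)
epochs = rng.normal(0.0, 1.0, (40, times.size))
labels = np.array([1, 0] * 20)
bump = 3.0 * np.exp(-((times - 0.26) ** 2) / (2 * 0.04 ** 2))  # simulated early Pe
epochs[labels == 1] += bump

wave, amp = pe_difference_wave(epochs, labels, times)
print(round(amp, 2))  # positive mean amplitude in the Pe window
```

A between-group contrast like the one reported above would then compare these window amplitudes (or the whole difference waves) across H, LA+, and LA–.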
Analyses of brain oscillatory activity provide further important support for altered performance monitoring in apraxia. Indeed, our results indicate a significant error-related suppression of mid-frontal theta power in the LA+ group. Cognitive control over goal-directed behavior is a highly flexible process that integrates information from the current context and specific task-related demands (Helfrich and Knight, 2016). A large-scale network governed by the prefrontal cortex and composed of distant yet functionally related cortical and subcortical areas (Miller and Cohen, 2001) rhythmically orchestrates this integration. Electrophysiological evidence demonstrates that activity in the prefrontal cortex becomes significantly higher when deviant outcomes (Dürschmid et al., 2016) or errors (Fonken et al., 2016) are detected. Studies in nonhuman primates also demonstrate that this multiplexed computational activity is conducted in distinct frequency bands, time windows, and brain (scalp) locations (Akam and Kullmann, 2014). Notably, in humans, an increase in mid-frontal theta power underlies error execution (Trujillo and Allen, 2007; Hanslmayr et al., 2008; Cavanagh et al., 2009, 2012; Munneke et al., 2015) and error observation (Pavone et al., 2016; Pezzetta et al., 2018; Spinelli et al., 2018). This effect has been convincingly associated with conflict processing and resolution (Cohen, 2014). Together with the time-domain results, the suppression of mid-frontal theta power in LA+ patients suggests that the conflict arising from the competition between correct and incorrect action schemas is not adequately resolved by the patients' performance monitoring system. Moreover, connectivity analyses show decreased theta synchrony between frontofrontal and frontoparietooccipital areas in LA+ with respect to both LA– and H. Phase synchrony reflects coherent bursts of activity of neuronal populations in distant cortical regions.
Such an alignment of brain oscillatory dynamics in time facilitates communication between networks and ultimately enables efficient cognitive processing (Voloh et al., 2015; Daitch et al., 2013). Tellingly, frontofrontal and frontoparietal network dynamics have been suggested to play a crucial role in enabling fluid cognitive control (Gregoriou et al., 2009; Nácher et al., 2013; Phillips et al., 2014). EEG studies show that posterior theta phase enhancement in these networks underlies perceptual integration of maladaptive information and represents a call to increase cognitive control for subsequent behavioral adjustment (Cavanagh et al., 2009; Cohen et al., 2009; Cohen and Cavanagh, 2011). That LA+ patients exhibit aberrant oscillatory patterns during action monitoring suggests not only a reduced capacity of their performance monitoring system to resolve conflict, but also a difficulty in capitalizing on the perceptual and sensorimotor information flowing from action observation. This latter claim fits with previous reports showing that the motor skills of apraxic patients may influence their visual action understanding, and vice versa (Pazzaglia et al., 2008a).
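The connectivity measure discussed here, theta phase synchrony between two channels, is commonly quantified as a phase-locking value (PLV). The sketch below is illustrative only and does not reproduce the study's connectivity pipeline; the 4–7 Hz band, the FFT-based analytic-signal construction, and the simulated "frontal"/"parietal" channels are all assumptions.

```python
import numpy as np

def theta_plv(sig_a, sig_b, fs, band=(4.0, 7.0)):
    """Phase-locking value between two channels in the theta band.

    Phases are taken from a band-limited analytic signal built in the
    frequency domain (an FFT-based Hilbert transform restricted to the band).
    Returns a value in [0, 1]: 1 = perfect phase locking, 0 = none.
    """
    n = sig_a.shape[-1]
    freqs = np.fft.fftfreq(n, d=1.0 / fs)

    def band_phase(x):
        spec = np.fft.fft(x)
        # keep only positive theta-band frequencies -> analytic, band-limited signal
        mask = (freqs >= band[0]) & (freqs <= band[1])
        analytic = np.fft.ifft(np.where(mask, 2.0 * spec, 0.0))
        return np.angle(analytic)

    dphi = band_phase(sig_a) - band_phase(sig_b)
    return np.abs(np.mean(np.exp(1j * dphi)))

# Toy channels: two share a 6-Hz rhythm (with a fixed phase lag), one does not
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
frontal = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.normal(size=t.size)
parietal = np.sin(2 * np.pi * 6 * t - 0.8) + 0.5 * rng.normal(size=t.size)
unrelated = rng.normal(size=t.size)  # no shared rhythm

print(theta_plv(frontal, parietal, fs) > theta_plv(frontal, unrelated, fs))  # True
```

In the study's logic, reduced values of such a measure over frontofrontal and frontoparietal channel pairs during error observation would index the decreased theta synchrony reported for LA+.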
It should be noted that we found no difference between correct and incorrect actions in LA+ and LA– in terms of the absolute values of the theta and Pe signals. However, between-group contrasts computed on the incorrect-minus-correct difference showed a significant difference between LA+ and H, but not between LA– and H, highlighting that H showed an increase in theta activity in response to errors that was not found in LA+. The lack of a direct difference between LA+ and LA– may reflect insufficient sensitivity to detect differences between patient groups, given the reduced sample size. Tellingly, however, connectivity analyses in the theta range showed that LA+ had lower theta synchrony than both LA– and H in frontal and parietal regions, suggesting an impaired error-monitoring process in LA+. Another result that deserves discussion concerns the extent to which altered performance monitoring parallels the apraxic phenotype. This was tested by means of a multiple linear regression model predicting theta power from an index of apraxic impairment (TULIA scores) and two other factors that significantly differed between LA+ and LA–, i.e., lesioned brain volume and an index of the general functionality of the frontal lobes (FAB scores). The results show a direct relation between the severity of apraxia and error-related mid-frontal theta power, such that reduced error-related mid-frontal theta power was predicted by greater disease severity (indexed by lower TULIA scores). This effect hints at a close link between the apraxic phenotype and the integrity of the performance monitoring system, and confirms our hypothesis that apraxic symptoms impair patients' ability to resolve the conflict generated by the observation of incorrect actions, regardless of the amount of lesioned cortical volume and of the patients' impairment in frontal executive functions, as indexed by FAB scores.
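The regression logic described above can be sketched in a few lines. The data below are simulated for illustration only (they are not the study's patient data), and the effect sizes, score ranges, and sample size are arbitrary assumptions; the point is simply the structure of the model, with theta power regressed on TULIA, lesion volume, and FAB simultaneously.

```python
import numpy as np

# Hypothetical per-patient predictors (illustrative, not the study's data)
rng = np.random.default_rng(2)
n = 20
tulia = rng.uniform(20, 70, n)    # lower score = more severe apraxia
lesion = rng.uniform(5, 120, n)   # lesioned brain volume (arbitrary units)
fab = rng.uniform(8, 18, n)       # frontal executive functioning

# Simulate theta power that tracks TULIA only, mirroring the reported relation
theta_power = 0.05 * tulia + rng.normal(0, 0.3, n)

# Multiple linear regression via ordinary least squares (intercept + 3 predictors)
X = np.column_stack([np.ones(n), tulia, lesion, fab])
beta, *_ = np.linalg.lstsq(X, theta_power, rcond=None)

print(beta[1] > 0)  # positive TULIA slope: lower scores -> lower theta power
```

A positive TULIA coefficient with negligible lesion-volume and FAB coefficients would correspond to the pattern reported here: apraxia severity, not lesion extent or general frontal functioning, predicts error-related mid-frontal theta.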
Table 4 and Table 5 report the lesion data of LA+ and LA– and the results of the lesion subtraction (LA+ minus LA–). The lesion mapping data suggest that lesions to the inferior frontal gyrus, rolandic operculum, insula, and putamen, as well as to the superior frontooccipital and superior longitudinal fasciculi, differentiate the two groups. These results are in line with previous findings showing that LA+ exhibit behavioral deficits during prediction, gesture comprehension, and error detection tasks (Kilner, 2011; Avenanti et al., 2013; Keysers and Gazzola, 2014; Urgesi et al., 2014). Moreover, the most marked difference between the two groups is the involvement of the basal ganglia (i.e., putamen) and the insula in LA+ versus LA–. Crucially, these regions have been found to play a role in error detection and performance monitoring (Falkenstein et al., 2001; Klein et al., 2007; Yang et al., 2015). Importantly, the superior frontooccipital and superior longitudinal fasciculi were also lesioned in the LA+ group, supporting the hypothesis that the deficits in our apraxic patients may stem from the combination of frontotemporal, frontoparietal, and basal ganglia lesions.
A final point of discussion concerns the analysis of subjective reports. In keeping with previous studies (Padrao et al., 2016; Pavone et al., 2016), embodiment scores were lower during observation of erroneous than of correct actions. However, here we did not find any relation between error-related EEG signatures and subjective reports of embodiment, in either healthy (H) or brain-damaged individuals (LA+ and LA–). One possible explanation is that embodiment ratings (ownership and vicarious agency) were collected only in 25% of trials, which, combined with the small sample size, may have reduced sensitivity. Alternatively, and in keeping with a previous report (Spinelli et al., 2018), the relation between virtual embodiment and error-related brain signatures may be merely correlative rather than causal. Future work is needed to understand whether inducing embodiment of artificial (virtual) upper limbs plays any specific role in improving the action monitoring capacity of people suffering from higher-order motor disorders.

The issue of patients' sample size also deserves discussion. Indeed, the LA+ and LA– groups comprised a relatively small number of individuals. This is mainly because of the adoption of very restrictive inclusion criteria based on socio-demographic data, brain-injury site, and individuals' compliance with our EEG protocol in virtual reality. Therefore, while the selection criteria reduced the sample size on the one hand, on the other they prevented us from recruiting a nonhomogeneous patient sample and from jumping to misleading conclusions. Nevertheless, future studies with larger cohorts of patients are recommended to replicate these results. Furthermore, we maintained the imbalance in frequency of occurrence typical of error studies by including 48 incorrect and 72 correct trials.
Previous methodological studies have shown that a minimum of eight trials may be sufficient to reliably elicit the ERN and Pe, and that increasing the number of trials does not further affect the reliability of error signatures (Olvet and Hajcak, 2009; Pontifex et al., 2010). In conclusion, our results indicate reduced electrocortical activity of the performance monitoring system in brain-damaged patients with LA, suggesting that ideomotor LA brings about difficulties in error processing when observing the actions of others. Our paradigm paves the way for potentially interesting new studies on the role that theta-band oscillatory entrainment over prefrontal cortices may play in facilitating patients' performance monitoring. Moreover, our study casts fresh light on the neurocognitive architecture characterizing apraxia and thus has the potential to inspire novel rehabilitation protocols.
Acknowledgments
We thank all the patients and their relatives.
Footnotes
The authors declare no competing financial interests.
This work was supported by Progetti di Ricerca di Rilevante Interesse Nazionale (PRIN) Grants Edit. 2017, Prot. 2017N7WCLP.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.