NeuroImage

Volume 97, 15 August 2014, Pages 196-205

Towards obtaining spatiotemporally precise responses to continuous sensory stimuli in humans: A general linear modeling approach to EEG

https://doi.org/10.1016/j.neuroimage.2014.04.012

Abstract

Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans.

Introduction

Commonly used noninvasive neuroimaging techniques for studying sensory information processing in humans face the well-known trade-off between spatial and temporal resolution. While functional magnetic resonance imaging (fMRI) provides insights into the functional specialization of different sensory areas with high spatial resolution (e.g., Grill-Spector and Malach, 2004), this technique is incapable of investigating the precise timing of neural activations. Electroencephalography (EEG), on the other hand, permits examination of sensory stimulus processing with excellent temporal resolution. This is usually done by averaging epochs of the EEG surrounding multiple presentations of a simple discrete stimulus to obtain the so-called event-related potential (Luck, 2005). While this approach has been valuable in both research and clinical settings for evaluating sensory and perceptual processing, it presents a significant challenge in terms of interpreting its cortical generators (e.g., Ales et al., 2010a, Ales et al., 2014, Kelly et al., 2013a, Kelly et al., 2013b). This is because time-locked averaging of EEG to simple, discrete stimuli necessarily produces an ERP comprising activity from temporally overlapping generators all along the sensory cortical hierarchy (Foxe and Simpson, 2002).
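As a point of reference for the modeling approach developed below, conventional time-locked averaging can be sketched in a few lines of numpy (a minimal sketch with hypothetical variable names and an assumed epoch window; it is not code from this study):

```python
import numpy as np

def erp_average(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Time-locked averaging: cut an epoch of EEG around each stimulus onset
    and average the epochs to obtain the event-related potential (ERP).

    eeg    : (n_channels, n_samples) continuous recording
    onsets : stimulus onset times as sample indices
    fs     : sampling rate in Hz
    """
    start, stop = int(tmin * fs), int(tmax * fs)
    epochs = [eeg[:, ev + start:ev + stop] for ev in onsets
              if ev + start >= 0 and ev + stop <= eeg.shape[1]]
    return np.mean(epochs, axis=0)  # (n_channels, n_epoch_samples)
```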

The most common approach to overcoming this shortcoming is to estimate the activity of specific ERP generators using inverse source modeling (Di Russo et al., 2005, Scherg and Berg, 1996). This requires finding one of an infinite number of possible model solutions to the inverse problem that minimizes residual variance, subject to some assumptions and constraints. Over the years researchers have increasingly sought to ground these constraints in findings from the animal physiology literature, and by incorporating information from imaging methods with superior spatial resolution such as MRI/fMRI (Ahveninen et al., 2006, Ales et al., 2010b, Di Russo et al., 2012, Pitzalis et al., 2012). While this approach has increased our knowledge about the origin of ERPs, it is labor intensive, expensive and involves simplifying assumptions (e.g., that only a few brain areas/sources contribute to the response; that the head can be modeled as a sphere; that neighboring sources should have similar activation timing and amplitude; that the conductivity of the brain is uniform; and that the dipole sources are perpendicular to the cortical sheet), which make the results difficult to validate (see Kelly et al., 2013a).
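The source analyses discussed here are dipole based; purely to illustrate the underlying idea of selecting one of the infinitely many solutions by imposing a constraint, the sketch below applies a regularized minimum-norm inverse to a toy forward model. The leadfield is random and all names are hypothetical; a real analysis would use an anatomically derived head model and a principled choice of regularization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 64, 5000

# Toy forward model: in practice the leadfield comes from a head/conductivity model.
leadfield = rng.normal(size=(n_sensors, n_sources))
scalp = rng.normal(size=n_sensors)          # one time point of scalp data

# Minimum-norm estimate: among all source patterns that reproduce the scalp data,
# choose the one with the smallest L2 norm (lam trades off fit against the constraint).
lam = 1e-2
sources = leadfield.T @ np.linalg.solve(
    leadfield @ leadfield.T + lam * np.eye(n_sensors), scalp)
```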

An alternative approach to improving the spatial specificity of ERPs has been attempted by using the discrete presentation of specially designed stimuli with the goal of preferentially activating functionally specific neural subpopulations. For example, researchers examining visual motion processing have derived visual evoked potentials (VEPs) to the motion onset of a stimulus (Kuba et al., 2007). While this approach can be useful, it still produces a VEP comprising contributions from a multitude of cortical areas, including the earliest visual areas (Pitzalis et al., 2012).

As mentioned above, the spatial resolution of fMRI enables the localization of functionally specialized cortical areas. In practice this is usually accomplished within the framework of the general linear model (GLM). The GLM, which has been at the heart of fMRI analyses for the past 20 years (Poline and Brett, 2012), explicitly assumes that the BOLD timecourse can be modeled as the convolution of stimulus feature variations with a hemodynamic impulse response function (HRF). This has allowed fMRI to identify visual areas specialized for many specific visual stimulus features including coherent motion (Costagli et al., 2014, Rees et al., 2000), color (Wade et al., 2008), objects (Grill-Spector and Malach, 2004), and faces (Kanwisher et al., 1997), and auditory areas specialized for features such as pitch (Patterson et al., 2002), phonemes (Leaver and Rauschecker, 2010), words (Binder et al., 2000) and auditory motion (Belin and Zatorre, 2000, Warren et al., 2002).
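A minimal numpy sketch of the convolution-based GLM described above, assuming a simplified double-gamma HRF and an arbitrary per-volume stimulus feature (all parameters and variable names are illustrative rather than those of any particular fMRI package):

```python
import numpy as np
from math import gamma

def hrf(t, peak=6.0, under=16.0, ratio=1 / 6):
    """Simplified double-gamma hemodynamic response function (canonical-like
    shape; the exact parameters here are illustrative)."""
    t = np.asarray(t, dtype=float)
    pos = t ** (peak - 1) * np.exp(-t) / gamma(peak)
    neg = t ** (under - 1) * np.exp(-t) / gamma(under)
    return pos - ratio * neg

tr, n_scans = 2.0, 200
feature = np.random.rand(n_scans)        # e.g., motion coherence sampled once per volume

# Convolve the stimulus feature with the HRF to form the model regressor.
regressor = np.convolve(feature, hrf(np.arange(0.0, 32.0, tr)))[:n_scans]

# GLM: model each voxel's BOLD time course as beta * regressor + intercept.
X = np.column_stack([regressor, np.ones(n_scans)])
bold = np.random.randn(n_scans)          # placeholder voxel time course
beta, _, _, _ = np.linalg.lstsq(X, bold, rcond=None)
```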

The success of the GLM approach in fMRI suggests that it may be useful for helping to overcome the poor spatial resolution of EEG by allowing researchers to obtain interpretable responses from functionally specific cortical regions. This could lead to more spatially specific activations of cortex allowing easier analysis of the generative sources than is the case with standard time-locked averaging. Indeed this has already been seen in the specificity of responses obtained using model-based approaches to sensory EEG. The very limited amount of EEG work done using such a model-based approach has mostly focused on controlling the phase reversal or pulsed presentation of simple checkerboard patterns using binary pseudorandom sequences and then estimating an impulse response (or kernel) function using reverse correlation (Baseler and Sutter, 1997, James, 2003). The use of such stimuli leads to EEG variations dominated by the activity of visual areas that are most sensitive to large rapid contrast changes, which leads to an impulse response function that reflects activity of early visual areas, likely dominated by V1 (Klistorner et al., 2005; but see Ales et al., 2010a, Kelly et al., 2013a). As such, these response functions display greater spatial specificity than standard time-locked averaged VEPs while retaining a detailed temporal profile. Our group has taken this a step further by removing the restriction that stimuli must be modulated by a binary sequence, and by explicitly modeling linear changes in the EEG as a function of stimulus contrast (Lalor et al., 2006, Lalor et al., 2009). We have suggested that our impulse response function – which we termed the VESPA (Visual Evoked Spread Spectrum Analysis) – is dominated by V1 activity on the basis of comparison with the earliest component of the VEP (Murphy et al., 2012) and retinotopic mapping (Lalor et al., 2012).
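Lalor et al. (2006) give the exact least-squares formulation used to estimate the VESPA; as a generic illustration of the same idea, namely a linear impulse response (temporal response function) mapping a continuous stimulus feature onto the EEG, a ridge-regularized lagged regression might look as follows. The variable shapes, lag window and regularization value are assumptions for the sketch, not the authors' implementation.

```python
import numpy as np

def estimate_trf(stim, eeg, fs, tmin=-0.1, tmax=0.4, lam=1e2):
    """Estimate a stimulus-to-EEG impulse response by lagged (ridge) regression.

    stim : (n_samples,) continuous stimulus feature (e.g., contrast or coherence)
    eeg  : (n_samples,) or (n_samples, n_channels) EEG at the same sampling rate
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for i, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, i] = stim[:n - lag]   # model EEG[t] from stim[t - lag]
        else:
            X[:lag, i] = stim[-lag:]
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w                   # time axis (s) and response weights
```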

In this study, we show that it is possible to extend this model-based approach to other visual stimulus features. Specifically, we show that it is possible to obtain robust, spatiotemporally precise measures of the neural processing of coherent motion. We do this by regressing the EEG data against the controlled, stochastic modulation of the coherent motion of dot stimuli. We have chosen to modulate this particular stimulus feature because its processing has been shown to be affected in a number of clinical disorders (e.g., schizophrenia — Chen, 2011; dyslexia — Eden et al., 1996; autism spectrum disorder — Spencer et al., 2000), and, as such, research on these disorders could benefit if we had a clean EEG-based measure of coherent motion processing. We contend that our methodological innovation, if generalized, could open up new investigative avenues for EEG researchers interested in sensory processing, and we discuss how it may increase the sensitivity and specificity of sensory measures of a number of clinical disorders.
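To make the paradigm concrete, one way to generate the kind of stochastic coherence signal described above is a bounded random walk; the update rate, step size and duration below are illustrative assumptions (the study's actual modulation signal is specified in its Methods).

```python
import numpy as np

rng = np.random.default_rng(1)
frame_rate = 60                      # assumed display update rate (Hz)
n_frames = frame_rate * 120          # two minutes of stimulation (illustrative)

# Bounded random walk: the percentage of dots moving coherently on each video
# frame drifts stochastically between 0 and 100%.
coherence = np.clip(50 + np.cumsum(rng.normal(0, 2, n_frames)), 0, 100)

# The recorded EEG would then be regressed against this signal (resampled to the
# EEG sampling rate), e.g. with estimate_trf() above, to obtain the motion response.
```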

Section snippets

Subjects

Twelve subjects (4 female) aged between 22 and 35 (mean = 26.2 years; standard deviation = 3.24 years) participated in the experiments. All subjects had normal or corrected-to-normal vision. The experiment was undertaken in accordance with the Declaration of Helsinki. The Ethics Committee of the School of Medicine, Trinity College Dublin approved the experiment and all subjects provided written, informed consent.

Stimuli

Three visual stimuli were used in the experiments: a pattern reversal checkerboard

Results

After averaging the responses across all subjects, we calculated the global field power (GFP) of this average and, in doing so, identified robust responses for each of our three stimulus types (Fig. 2). That we obtained such clear responses for the VEP and contrast VESPA stimuli was not surprising given previous work. However, despite the fact that the coherent motion stimulus involved no modulation of luminance or contrast, a robust response can be seen to that stimulus with a broadly similar timing to the other
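For reference, the GFP is the spatial standard deviation of the multichannel response at each time point (cf. Lehmann et al., 1980, in the reference list); assuming a channels-by-time grand average, a minimal numpy version is:

```python
import numpy as np

def global_field_power(avg):
    """GFP: standard deviation across electrodes at each time point.
    avg : (n_channels, n_times) grand-average response."""
    return avg.std(axis=0)
```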

Discussion

By controlled and continuous modulation of two distinct visual stimulus features we have used a linear modeling approach to show that it is possible to rapidly derive spatiotemporally precise responses from noninvasive scalp recorded EEG in humans. This approach, if generalized to other sensory stimulus features, has the potential to increase the flexibility and naturalness of EEG sensory stimulation paradigms and to facilitate the rapid, inexpensive acquisition of responses whose specific

Acknowledgments

The authors are grateful to A. Power, D. Durstewitz, J. Barrett, S. Josephs and C. Kerskens for assistance in collecting the data, to S. Kelly, J. Lucan, A. Snyder, and I. Robertson for useful discussions, and to G. Loughnane and J. O'Sullivan for comments on the manuscript.

References (62)

  • S.P. Kelly et al.

    The cruciform model of striate generation of the early VEP, re-illustrated, not revoked: a reply to Ales et al. (2013)

    NeuroImage

    (2013)
  • M. Kuba et al.

    Motion-onset VEPs: characteristics, methods, and diagnostic use

    Vis. Res.

    (2007)
  • E.C. Lalor et al.

    The VESPA: a method for the rapid estimation of a visual evoked potential

    NeuroImage

    (2006)
  • E.C. Lalor et al.

    Generation of the VESPA response to rapid contrast fluctuations is dominated by striate cortex: evidence from retinotopic mapping

    Neuroscience

    (2012)
  • D. Lehmann et al.

    Reference-free identification of components of checkerboard-evoked multichannel potential fields

    Electroencephalogr. Clin. Neurophysiol.

    (1980)
  • V. Litvak et al.

    Electromagnetic source reconstruction for group studies

    NeuroImage

    (2008)
  • R.D. Patterson et al.

    The processing of temporal pitch and melody information in auditory cortex

    Neuron

    (2002)
  • S. Pitzalis et al.

Parallel motion signals to the medial and lateral motion areas V6 and MT+

    NeuroImage

    (2013)
  • J.B. Poline et al.

    The general linear model and fMRI: does love last forever?

    NeuroImage

    (2012)
  • A.C. Snyder et al.

    The countervailing forces of binding and selection in vision

    Cortex

    (2012)
  • L. Stenbacka et al.

    fMRI of peripheral visual field representation

    Clin. Neurophysiol.

    (2007)
  • R. VanRullen et al.

    Perceptual echoes at 10 Hz in the human brain

    Curr. Biol.

    (2012)
  • V. von Pföstl et al.

    Motion sensitivity of human V6: a magnetoencephalography study

    NeuroImage

    (2009)
  • M.B. Wall et al.

    The representation of egomotion in the human brain

    Curr. Biol.

    (2008)
  • J.D. Warren et al.

    Perception of sound-source motion by the human brain

    Neuron

    (2002)
  • J. Ahveninen et al.

    Task-modulated “what” and “where” pathways in human auditory cortex

    Proc. Natl. Acad. Sci. U. S. A.

    (2006)
  • J.M. Ales et al.

    On determining the intracranial sources of visual evoked potentials from scalp topography: a reply to Kelly et al.

    NeuroImage

    (2014)
  • A. Bartels et al.

    Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain

    Cereb. Cortex

    (2008)
  • P. Belin et al.

    ‘What’, ‘where’ and ‘how’ in auditory cortex

    Nat. Neurosci.

    (2000)
  • J.R. Binder et al.

    Human temporal lobe activation by speech and nonspeech sounds

    Cereb. Cortex

    (2000)
  • V. Cardin et al.

    Sensitivity of human visual and vestibular cortical regions to egomotion-compatible visual stimulation

    Cereb. Cortex

    (2010)
Funded by: The Irish Research Council for Science, Engineering and Technology through the EMBARK initiative.