Speech-specific auditory processing: where is it?

https://doi.org/10.1016/j.tics.2005.03.009

Are speech-specific processes localized in dedicated cortical regions or do they emerge from developmental plasticity in the connections among non-dedicated regions? Here we claim that all the brain regions activated by the processing of auditory speech can be re-classified according to whether they respond to non-verbal environmental sounds, pitch changes, unfamiliar melodies, or conceptual processes. We therefore argue that speech-specific processing emerges from differential demands on auditory and conceptual processes that are shared by speech and non-speech stimuli. This has implications for domain- vs. process-specific cognitive models, and for the relative importance of segregation and integration in functional anatomy.

Introduction

An increasingly popular trend in cognitive neuroscience is to label macro-anatomical brain structures with functional labels that correspond to those generated by classical behavioural paradigms. For example, in the domain of reading, an area in the left occipito-temporal sulcus has been labelled the visual word form area (VWFA) [1]. These labels generate the impression that there are brain structures dedicated to the cognitive functions they describe. Moreover, this controversial approach allows the same brain areas to have multiple functional labels that depend on the investigators' field of interest. For example, the visual word form area [1] has also been labelled the ‘lateral occipital tactile-visual area’ [2].

The alternative, but more difficult, approach is to identify functional labels that describe the process driving activation. This requires a two-way line of investigation: in addition to the conventional delineation of brain areas that respond to a pre-specified function, we also need to identify the functions that activate a pre-specified brain area. In this article, we adopt this alternative approach by considering the non-verbal stimuli and tasks that activate auditory speech processing areas. The speech processing areas we consider have been defined on the basis of group studies. In other words, we are focusing on functional anatomy that is consistent over populations rather than unique to any one individual (see Box 1). Our claim is that, contrary to popular belief, the human brain has not developed macro-anatomical brain structures that are dedicated to speech processing. Instead, we argue that speech-specific processing emerges at the level of functional connectivity among distributed brain regions, each of which participates in processes that are engaged by both speech and non-speech tasks.

Section snippets

Left hemisphere auditory processing of speech

Auditory processing of meaningful speech involves several auditory and linguistic processes that include, at least, (i) on-line analysis of the spectrotemporal structure of the acoustic stimulus; (ii) recognition of phonetic cues and features; (iii) syntactic analysis; and (iv) access to and retrieval of semantic information. Functional neuroimaging allows us to test whether these processes are linked to specific brain regions, and if so, what the response properties of these regions are. Many

Right hemisphere processing and laterality issues

Speech comprehension is classically associated with the left temporal lobe [31] but the right hemisphere also contributes to speech processing. Figure 2 illustrates that a right anterior temporal area (A) is activated by speech and melodies; right mid-temporal regions (C and E) are activated by speech and environmental sounds. In contrast to these areas, a right posterior temporal area (B) is activated by environmental sounds and melodies but not speech. Direct comparison of speech with

Implications for functional and physiological models

At a functional level, recognition of speech relative to other acoustic signals: (i) is more dependent on the precise temporal arrangement of spectro-temporal features, such as noise bursts, silent gaps and formant transitions; (ii) is most intrinsically linked to speech production processes 35, 36; and (iii) has the richest, most subject-independent, conceptual associations. Our point is that none of these characteristics necessitate brain regions dedicated to speech processing. The neural

Conclusions

In this article, we have argued that there are no macro-anatomical structures dedicated to speech in the human brain. This does not exclude the possibility that other neurophysiological techniques such as depth electrode recordings could, in the future, reveal local functional specialization at a smaller, micro-anatomical scale (see Box 1). Nevertheless, we can accommodate the absence of speech-specific brain regions by hypothesizing that speech processing emerges from differential demands on

Acknowledgements

C.P. and T.G. are supported by the Wellcome Trust. G.T. is supported by the BBSRC and the ESRC.

References (54)

  • K. Grill-Spector et al. fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychol. (Amst.) (2001)
  • R.N. Henson et al. Neural response suppression, haemodynamic repetition effects, and behavioural priming. Neuropsychologia (2003)
  • L. Cohen. The visual word form area: spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain (2000)
  • A. Amedi. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cereb. Cortex (2002)
  • S.K. Scott. Identification of a pathway for intelligible speech in the left temporal lobe. Brain (2000)
  • M.H. Davis et al. Hierarchical processing in spoken language comprehension. J. Neurosci. (2003)
  • J.R. Binder. Human temporal lobe activation by speech and nonspeech sounds. Cereb. Cortex (2000)
  • J.T. Crinion. Temporal lobe regions engaged during normal speech comprehension. Brain (2003)
  • A.L. Giraud et al. The constraints functional neuroimaging places on classical models of word processing. J. Cogn. Neurosci. (2001)
  • A.L. Giraud. Contributions of sensory input, auditory search and verbal comprehension to cortical activity during speech processing. Cereb. Cortex (2004)
  • C.J. Mummery. Functional neuroimaging of speech perception in six normal and two aphasic subjects. J. Acoust. Soc. Am. (1999)
  • A. Vouloumanos. Detection of sounds in the auditory stream: event-related fMRI evidence for differential activation to speech and nonspeech. J. Cogn. Neurosci. (2001)
  • C. Narain. Defining a left-lateralized response specific to intelligible speech using fMRI. Cereb. Cortex (2003)
  • G. Hickok. Auditory-motor interaction revealed by fMRI: speech, music, and working memory in area Spt. J. Cogn. Neurosci. (2003)
  • M. Tervaniemi. Lateralized automatic auditory processing of phonetic versus musical information: a PET study. Hum. Brain Mapp. (2000)
  • R.J. Zatorre. Lateralization of phonetic and pitch discrimination in speech processing. Science (1992)
  • S. Koelsch. Music, language and meaning: brain signatures of semantic processing. Nat. Neurosci. (2004)