Reconstructing speech from human auditory cortex

PLoS Biol. 2012 Jan;10(1):e1001251. doi: 10.1371/journal.pbio.1001251. Epub 2012 Jan 31.

Abstract

How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single-trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher-order human auditory cortex.
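To make the linear decoding approach concrete, the sketch below illustrates stimulus reconstruction in the general form described in the abstract: a ridge-regularized linear map from time-lagged, multichannel neural activity to the frequency channels of an auditory spectrogram, fit on training data and evaluated on held-out time points. This is a minimal illustration on synthetic data, not the authors' implementation; all array shapes, the lag window, and the regularization strength are illustrative assumptions.

    # Linear stimulus reconstruction sketch (synthetic data).
    import numpy as np

    rng = np.random.default_rng(0)

    n_times = 2000              # time samples (assumed)
    n_electrodes = 32           # recording channels (assumed)
    n_freqs = 16                # spectrogram frequency bands (assumed)
    lags = np.arange(-10, 11)   # neural time lags relative to the stimulus (assumed)

    # Synthetic stand-ins for the real stimulus spectrogram and neural responses.
    spectrogram = rng.standard_normal((n_times, n_freqs))   # S(t, f)
    neural = rng.standard_normal((n_times, n_electrodes))   # R(t, e)

    def lagged_design(responses, lags):
        """Stack time-shifted copies of each channel into one design matrix."""
        n_t, n_ch = responses.shape
        X = np.zeros((n_t, n_ch * len(lags)))
        for i, lag in enumerate(lags):
            shifted = np.roll(responses, lag, axis=0)
            # Zero out samples that wrapped around the edges.
            if lag > 0:
                shifted[:lag] = 0.0
            elif lag < 0:
                shifted[lag:] = 0.0
            X[:, i * n_ch:(i + 1) * n_ch] = shifted
        return X

    X = lagged_design(neural, lags)

    # Split into training and held-out time points.
    split = int(0.8 * n_times)
    X_tr, X_te = X[:split], X[split:]
    S_tr, S_te = spectrogram[:split], spectrogram[split:]

    # Ridge regression: W = (X'X + aI)^-1 X'S.
    alpha = 10.0  # regularization strength (illustrative)
    W = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X.shape[1]), X_tr.T @ S_tr)

    # Reconstruct the held-out spectrogram and score it per frequency band.
    S_hat = X_te @ W
    r = [np.corrcoef(S_hat[:, f], S_te[:, f])[0, 1] for f in range(n_freqs)]
    print("mean reconstruction r:", np.round(np.mean(r), 3))

With real recordings, the correlation between reconstructed and actual spectrograms over held-out segments is the kind of accuracy measure that supports identifying individual words from single trials; with the random data above, the correlations are near zero, as expected.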

Publication types

  • Research Support, N.I.H., Extramural

MeSH terms

  • Algorithms
  • Auditory Cortex / physiology*
  • Brain Mapping*
  • Computer Simulation
  • Electrodes, Implanted
  • Electroencephalography
  • Female
  • Humans
  • Linear Models
  • Male
  • Models, Biological
  • Speech Acoustics*