PT - JOURNAL ARTICLE
AU - Bröhl, Felix
AU - Keitel, Anne
AU - Kayser, Christoph
TI - MEG activity in visual and auditory cortices represents acoustic speech-related information during silent lip reading
AID - 10.1523/ENEURO.0209-22.2022
DP - 2022 Jun 21
TA - eneuro
PG - ENEURO.0209-22.2022
4099 - http://www.eneuro.org/content/early/2022/06/21/ENEURO.0209-22.2022.short
4100 - http://www.eneuro.org/content/early/2022/06/21/ENEURO.0209-22.2022.full
AB - Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source-localized MEG recordings obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework, we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals, as well as unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of multisensory restoration: during lip reading they reflect unheard acoustic features, independent of coexisting representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the speech envelope in temporal cortex, and it is predictive of lip reading performance. These findings suggest that when seeing the speaker's lips, the brain engages both visual and auditory pathways to support comprehension by exploiting multisensory correspondences between lip movements and spectro-temporal acoustic cues. Significance Statement: Lip reading is central to speech comprehension in acoustically impoverished environments. Recent studies show that auditory and visual cortices can represent acoustic speech features during purely visual speech. It remains unclear, however, what information is represented in these cortices and whether this phenomenon is related to lip reading comprehension. Using a comprehensive conditional mutual information analysis applied to magnetoencephalographic data, we demonstrate that signatures of acoustic speech arise in both cortices in parallel, even when discounting the physically presented stimulus. In addition, auditory but not visual cortex activity was related to successful lip reading across participants.
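
The conditional mutual information analysis named in the abstract can be sketched in a few lines. The following is a minimal Python illustration, assuming a Gaussian-copula estimator of the kind commonly used for MEG data; it is not the authors' actual pipeline, and the simulated envelope, pitch, and sensor signals are hypothetical stand-ins for the paper's speech features and source-localized activity.

import numpy as np
from scipy.stats import norm

def copnorm(x):
    """Rank-based inverse normal transform (samples x variables)."""
    x = np.atleast_2d(x.T).T
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    return norm.ppf((ranks + 1) / (x.shape[0] + 1))

def gauss_mi(x, y):
    """MI (bits) between copula-normalized x and y under a Gaussian model."""
    xy = np.hstack([x, y])
    cxy = np.cov(xy, rowvar=False)
    cx = cxy[:x.shape[1], :x.shape[1]]
    cy = cxy[x.shape[1]:, x.shape[1]:]
    return 0.5 * (np.linalg.slogdet(cx)[1] + np.linalg.slogdet(cy)[1]
                  - np.linalg.slogdet(cxy)[1]) / np.log(2)

def gauss_cmi(x, y, z):
    """Conditional MI via the identity I(X;Y|Z) = I(X;[Y,Z]) - I(X;Z)."""
    return gauss_mi(x, np.hstack([y, z])) - gauss_mi(x, z)

# Hypothetical example: how much pitch information a simulated MEG signal
# carries beyond what the speech envelope already explains.
rng = np.random.default_rng(0)
n = 2000
env = rng.standard_normal((n, 1))                      # stand-in speech envelope
pitch = 0.5 * env + rng.standard_normal((n, 1))        # correlated pitch feature
meg = env + 0.3 * pitch + rng.standard_normal((n, 1))  # simulated brain signal

mx, my, mz = (copnorm(v) for v in (meg, pitch, env))
print(gauss_cmi(mx, my, mz))  # > 0: pitch adds information beyond the envelope

Here the conditioning variable plays the role of the abstract's "discounting the physically presented stimulus": the estimate reflects only the feature information not already carried by the signal being conditioned on.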