Research Article: New Research, Cognition and Behavior

Individual Differences in Cognition and Perception Predict Neural Processing of Speech in Noise for Audiometrically Normal Listeners

Sana Shehabi, Daniel C. Comstock, Kelsey Mankel, Brett M. Bormann, Soukhin Das, Hilary Brodie, Doron Sagiv and Lee M. Miller
eNeuro 18 March 2025, 12 (4) ENEURO.0381-24.2025; https://doi.org/10.1523/ENEURO.0381-24.2025
Author affiliations:

1. Center for Mind and Brain, University of California, Davis, Davis, California 95618 (S. Shehabi, D. C. Comstock, K. Mankel, B. M. Bormann, S. Das, L. M. Miller)
2. Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee 38152 (K. Mankel)
3. School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38152 (K. Mankel)
4. Neuroscience Graduate Group, University of California, Davis, Davis, California 95616 (B. M. Bormann)
5. Psychology Graduate Group, University of California, Davis, Davis, California 95616 (S. Das)
6. Department of Otolaryngology – Head and Neck Surgery, University of California, Davis, Davis, California 95616 (H. Brodie, D. Sagiv, L. M. Miller)
7. Neurobiology, Physiology and Behavior, University of California, Davis, Davis, California 95616 (L. M. Miller)

Figures & Data

Figures

    Figure 1.

    Continuous multitalker spatial attention task procedure. The figure illustrates the visual cues presented on screen to participants during the task while they listened to a series of short stories. The target and masker (when present) alternate between spatial locations. Only the target story is present in the mono-talker condition, but the spatial location of the talker still switches sides as shown by the red line. Participants are instructed to focus on the side of the screen indicated by the “+” symbol, which marks the location of the target story. They are also instructed to press the spacebar when they hear a color word spoken by the target speaker (red “X” symbols) and ignore any color words in the masker story (blue “O” symbols).

    Figure 2.

    Histograms illustrating the distribution of performance scores across five cognitive and psychoacoustic predictors. Larger RT difference scores (Flanker, Stroop, and Trail Making) indicate greater response (cognitive) interference in each domain (i.e., slower RTs for the incongruent/alternating trials than the congruent/sequential trials).

    Figure 3.

    Attentional modulation of speech-evoked neural activity in dual-talker conditions at electrode Fz. Error bars indicate standard error. A, Grand average ERP waveforms of the target and masker speech. B, Difference waveforms between target and masker stories.
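    As an informal illustration (not the authors' code), the sketch below shows how a target-minus-masker difference waveform of this kind can be computed from single-trial epochs; the array shapes and variable names are assumptions.

```python
# Hypothetical sketch: grand-average ERPs and a target-minus-masker
# difference wave at one electrode. Shapes and names are assumptions.
import numpy as np

def grand_average(epochs: np.ndarray) -> np.ndarray:
    """Average over trials; epochs has shape (n_trials, n_samples)."""
    return epochs.mean(axis=0)

def difference_wave(target_epochs: np.ndarray,
                    masker_epochs: np.ndarray) -> np.ndarray:
    """Target ERP minus masker ERP, as in Figure 3B. A more negative
    value in the N1 window reflects stronger attentional modulation."""
    return grand_average(target_epochs) - grand_average(masker_epochs)

# Simulated stand-in data: 100 trials x 700 samples at electrode Fz.
rng = np.random.default_rng(0)
diff = difference_wave(rng.normal(size=(100, 700)),
                       rng.normal(size=(100, 700)))
```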

    Figure 4.

    Attentional modulation of speech-evoked neural activity comparing mono- and dual-talker conditions at electrode Fz. Error bars indicate standard error. A, Grand average ERP waveforms comparing the dual-talker target condition to the mono condition. B, Difference waveforms (mono minus target), where positive values indicate larger N1 amplitudes in the dual-talker target condition than in the mono condition. C, Grand average ERP waveforms comparing the dual-talker masker condition to the mono condition. D, Difference waveforms (mono minus masker), where negative values indicate smaller N1 amplitudes in the dual-talker masker condition compared with the mono condition.

    Figure 5.

    Effect plots illustrating the relationship between each predictor and the N1 amplitude difference. All other predictors are held constant. The shaded regions represent 95% confidence intervals for the N1 amplitude differences. N1 amplitude differences are calculated as the N1 amplitude for the target talker minus that of the masker talker; lower values on the y-axis correspond to larger differences in N1 amplitude between the target and masker streams. Arrows on the x-axis denote better performance on the cognitive and psychoacoustic tests.
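    As a rough illustration of how effect plots of this kind can be produced, the sketch below sweeps one z-scored predictor over a grid while holding the others at their mean (0) and draws the 95% confidence band from an ordinary least-squares fit. The predictor names, sample size, and simulated data are all assumptions, not the study's data.

```python
# Hedged sketch of an effect plot in the style of Figure 5.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

names = ["flanker", "stroop", "tmt", "reading_span", "tfs"]  # assumed names
rng = np.random.default_rng(1)
z = pd.DataFrame(rng.normal(size=(24, 5)), columns=names)    # fake z-scores
y = rng.normal(size=24)                                      # fake N1 diffs

fit = sm.OLS(y, sm.add_constant(z)).fit()

# Sweep one predictor; the other columns stay at 0 (their z-scored mean).
grid = pd.DataFrame(0.0, index=range(50), columns=names)
grid["stroop"] = np.linspace(-2, 2, 50)
pred = fit.get_prediction(sm.add_constant(grid, has_constant="add"))
ci = pred.conf_int(alpha=0.05)               # 95% confidence band

plt.plot(grid["stroop"], pred.predicted_mean)
plt.fill_between(grid["stroop"], ci[:, 0], ci[:, 1], alpha=0.3)
plt.xlabel("Stroop (z)")
plt.ylabel("Predicted N1 amplitude difference")
plt.show()
```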

    Figure 6.

    The 95% confidence intervals for the regression coefficients of each predictor.

    Figure 7.

    Cognitive and psychoacoustic abilities predict attentional modulation of speech representations. Results of the leave-one-out cross-validation (LOOCV); the graph shows the predicted and actual N1 amplitude differences for each participant.
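    A minimal sketch of the LOOCV scheme behind this figure is given below, using scikit-learn; the sample size and data are simulated placeholders, not the study's data.

```python
# Leave-one-out cross-validation sketch: each participant's N1 amplitude
# difference is predicted by a model fit on all remaining participants.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(size=(24, 5))   # five z-scored predictors (simulated)
y = rng.normal(size=24)        # N1 amplitude differences (simulated)

y_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
print(np.corrcoef(y, y_pred)[0, 1])  # predicted-vs-actual agreement
```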

    Figure 8.

    Bivariate correlations between each predictor and the N1 amplitude differences. Arrows on the x-axis denote better performance on the cognitive and psychoacoustic tests.

    Figure 9.

    Histogram illustrating the distribution of the proportion of color word hits in the continuous multitalker spatial attention task. The arrow on the x-axis denotes better task performance.

    Figure 10.

    Effect plots illustrating the relationship between each predictor and the proportion of color word hits. All other predictors are held constant. The shaded regions represent 95% confidence intervals for the proportion of color word hits.

    Figure 11.

    A scatterplot showing the relationship between the proportion of color word hits and N1 amplitude differences. The arrow on the x-axis denotes better task performance.

Tables

    Table 1.

    Results of the multiple regression analysis

    Predictor      β        SE       t        p        GDW
    Intercept      0.000    0.149    0.000    1.000    –
    Flanker        0.047    0.168    0.277    0.784    0.005
    Stroop         0.598    0.178    3.370    0.003    0.166
    TMT            −0.009   0.182    −0.518   0.611    0.026
    Reading Span   0.649    0.167    3.894    0.001    0.278
    TFS            −0.359   0.182    −1.971   0.064    0.108
    Criterion variable: N1 amplitude difference between target and masker stories in the continuous multitalker spatial attention task. Predictors: performance on (1) Flanker, (2) Stroop, (3) TMT, (4) TFS, and (5) Reading Span. All variables were z-standardized. GDW indicates the relative importance of each predictor in the model. Overall adjusted R2 = 0.467; p = 0.005.
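    The model in Table 1 is an ordinary least-squares regression on z-standardized variables; general dominance weights (GDW) are obtained by averaging each predictor's incremental R² over all subset models. Below is a hedged sketch on simulated data; the column names and sample size are assumptions, not the authors' code.

```python
# Sketch of the Table 1 analysis: standardized OLS plus general dominance
# weights (mean incremental R^2 over all subset models). Simulated data.
from itertools import combinations
import numpy as np
import pandas as pd
import statsmodels.api as sm

preds = ["flanker", "stroop", "tmt", "reading_span", "tfs"]
rng = np.random.default_rng(3)
df = pd.DataFrame(rng.normal(size=(24, 6)), columns=preds + ["n1_diff"])
z = (df - df.mean()) / df.std(ddof=0)        # z-standardize everything

fit = sm.OLS(z["n1_diff"], sm.add_constant(z[preds])).fit()
print(fit.params, fit.rsquared_adj)          # standardized betas, adj. R^2

def r2(subset):
    """R^2 of the submodel using only the given predictors."""
    if not subset:
        return 0.0                           # intercept-only model
    X = sm.add_constant(z[list(subset)])
    return sm.OLS(z["n1_diff"], X).fit().rsquared

def general_dominance(p):
    """Mean incremental R^2 of p, averaged within and across subset sizes."""
    others = [q for q in preds if q != p]
    means = []
    for k in range(len(others) + 1):
        incs = [r2(s + (p,)) - r2(s) for s in combinations(others, k)]
        means.append(np.mean(incs))
    return np.mean(means)

gdw = {p: general_dominance(p) for p in preds}  # sums to the full-model R^2
```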

    Table 2.

    Pairwise Pearson correlation coefficients for performance on the cognitive and psychoacoustic tasks

    Entries are r, with p in parentheses (upper triangle shown).

                  Flanker   Stroop            TMT               Reading Span       TFS
    Flanker       1.000     −0.369 (p=0.076)  0.065 (p=0.763)   0.245 (p=0.248)    −0.127 (p=0.555)
    Stroop                  1.000             0.156 (p=0.467)   −0.393 (p=0.057)   0.196 (p=0.358)
    TMT                                       1.000             −0.023 (p=0.917)   0.520 (p=0.009)
    Reading Span                                                1.000              −0.075 (p=0.727)
    TFS                                                                            1.000
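    A correlation matrix like Table 2 can be sketched as follows; the data frame and column names are simulated assumptions.

```python
# Pairwise Pearson correlations with p-values, as in Table 2 (simulated data).
from itertools import combinations
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

measures = ["flanker", "stroop", "tmt", "reading_span", "tfs"]
rng = np.random.default_rng(4)
df = pd.DataFrame(rng.normal(size=(24, 5)), columns=measures)

for a, b in combinations(measures, 2):
    r, p = pearsonr(df[a], df[b])
    print(f"{a} vs {b}: r = {r:.3f}, p = {p:.3f}")
```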
    Table 3.

    Results of the second multiple regression analysis

    Predictor      β        SE       t        p        GDW
    Intercept      0.000    0.218    0.000    1.000    0.000
    Flanker        −0.020   0.246    −0.082   0.935    0.003
    Stroop         0.070    0.260    0.269    0.791    0.002
    TMT            −0.348   0.267    −1.305   0.208    0.067
    Reading Span   0.135    0.244    0.551    0.589    0.009
    TFS            0.230    0.267    0.861    0.400    0.020
    This model uses the same five cognitive and psychoacoustic predictors to predict the proportion of color word hits in the continuous multitalker spatial attention task. All variables were z-standardized. Overall adjusted R2 = −0.144; p = 0.828.


Keywords

  • auditory attention
  • auditory processing
  • cognition
  • event-related potential
  • N1 component
  • speech-in-noise

