Research Article: New Research, Cognition and Behavior

Exploring Relevant Features for EEG-Based Investigation of Sound Perception in Naturalistic Soundscapes

Thorge Haupt, Marc Rosenkranz and Martin G. Bleichner
eNeuro 3 January 2025, 12 (1) ENEURO.0287-24.2024; https://doi.org/10.1523/ENEURO.0287-24.2024
Thorge Haupt
1Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26129, Germany
Marc Rosenkranz
1Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26129, Germany
Martin G. Bleichner
1Neurophysiology of Everyday Life Group, Department of Psychology, Carl von Ossietzky Universität Oldenburg, Oldenburg 26129, Germany
2Research Center for Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg 26129, Germany

Figures & Data

Figures

Figure 1.

A, This panel illustrates the levels of abstraction at which acoustic features describe different tone sequences, and highlights how much acoustic detail the acoustic onsets, the envelope, and the mel-spectrogram each capture. In the first tone sequence, onsets reveal the timing of sound occurrence, while the envelope extends this information by showing both the timing and the duration of the sound; the mel-spectrogram further adds detail about the frequency content. For the second tone sequence, the limitation of onsets becomes apparent: they convey no information about amplitude changes. In the third sequence, the frequency changes are captured exclusively by the mel-spectrogram; the onset and envelope representations remain indifferent to these frequency variations. The fourth sequence underscores a similar limitation of the envelope, which fails to distinguish frequency variations within a continuous tone. The lower section of the panel demonstrates the crucial role of meta-information: for instance, which sound in an experiment requires a behavioral response cannot be discerned from the acoustic features alone. Meta-information thus complements the acoustic analysis by providing essential contextual and functional information. B, This panel shows the acoustic feature representations (onsets, envelope, and mel-spectrogram) for a complex naturalistic soundscape and highlights the diversity of meta-information that can describe it. The upper row shows continuous classification from YAMNet, providing labels for individual sound segments. The lower row presents manual annotations, categorizing sounds into different conditions (narrow and wide) and distinct types (beep, irrelevant, alarm, and speech sounds). These labels, informed by task-specific knowledge, offer deeper contextual and descriptive insight into the soundscape, illustrating the multilayered nature of acoustic analysis in naturalistic settings.
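The onset and envelope representations contrasted in this caption can be sketched in a few lines. This is an illustrative reconstruction (Hilbert-transform envelope; half-wave-rectified envelope derivative as onsets), not the authors' actual extraction pipeline, and the mel-spectrogram step is omitted:

```python
# Illustrative sketch (not the authors' pipeline): two of the acoustic
# features contrasted above, computed from a mono audio signal.
import numpy as np
from scipy.signal import hilbert

def amplitude_envelope(audio):
    """Broadband amplitude envelope via the magnitude of the analytic signal."""
    return np.abs(hilbert(audio))

def acoustic_onsets(envelope):
    """Half-wave-rectified envelope derivative: positive jumps mark onsets."""
    diff = np.diff(envelope, prepend=envelope[0])
    return np.maximum(diff, 0.0)

# Toy input: a 1 kHz tone burst between 0.25 s and 0.5 s in 1 s of silence.
fs = 8000
t = np.arange(fs) / fs
audio = np.where((t > 0.25) & (t < 0.5), np.sin(2 * np.pi * 1000 * t), 0.0)

env = amplitude_envelope(audio)   # reflects timing *and* duration of the burst
ons = acoustic_onsets(env)        # peaks near the start of the burst
```

As the caption notes, `ons` marks only where the burst begins, `env` additionally tracks its duration, and neither would distinguish a frequency change within the tone.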

Figure 2.

This figure depicts the forward-modeling process. Several features are extracted from the audio. Using the EEG data and these features, a temporal response function is derived for each EEG channel. The temporal response function is then used to predict unseen neural data from the corresponding feature information, and the resulting prediction is correlated with the actual signal. If discrete features are used, not every sample can be predicted; this is visualized in the pie charts, which show the proportion of samples that can be predicted. AC = acoustic features; SI = sound identity.
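The forward-modeling loop described in this caption (lagged features, ridge-estimated temporal response function, prediction of unseen data, correlation) can be sketched as follows. This mirrors mTRF-style toolboxes in spirit only; the function names, lag count, and regularization value are illustrative choices for the toy example:

```python
# Illustrative sketch of a forward (encoding) model: a temporal response
# function (TRF) fit by ridge regression on time-lagged feature copies,
# then used to predict held-out data.
import numpy as np

def lag_matrix(feature, n_lags):
    """Stack time-shifted copies of a 1-D feature (lags 0 .. n_lags-1)."""
    n = len(feature)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = feature[: n - k]
    return X

def fit_trf(feature, eeg, n_lags, lam=1.0):
    """Ridge solution w = (X'X + lam I)^-1 X'y for a single EEG channel."""
    X = lag_matrix(feature, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

def predict(feature, w):
    return lag_matrix(feature, len(w)) @ w

# Toy check: simulate one channel as feature * known kernel + noise,
# fit on the first part, predict the held-out part, and correlate.
rng = np.random.default_rng(0)
feat = rng.standard_normal(5000)
kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])      # the "true" TRF
eeg = np.convolve(feat, kernel)[:5000] + 0.1 * rng.standard_normal(5000)

w = fit_trf(feat[:4000], eeg[:4000], n_lags=5)
pred = predict(feat[4000:], w)
r = np.corrcoef(pred, eeg[4000:])[0, 1]           # prediction accuracy
```

The recovered weights `w` approximate the simulated kernel, and `r` plays the role of the prediction accuracy reported throughout the figures.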

Figure 3.

Exemplary model building: combining acoustic onset information with the alarm tones and the respective cognitive priors (CPs). A, Statistical comparison of prediction distributions based on acoustic features (AC) combined with either sound identity (SI) or cognitive prior (CP) information. *p < 0.05, **p < 0.01, ***p < 0.001, N.S. = not significant. B, Example model weights for acoustic onsets alone, extended by SI, or extended by CP information. A comparison of the condition effect with the results of Rosenkranz et al. (2023) can be found in Extended Data Figure 3-1. The x-axis displays the time lags for the different weights. In the last plot, the weights for the narrow (nw) and wide (we) conditions are displayed for the alarm tone.

Figure 4.

A, A comparison of the prediction distributions of the acoustic features and their combinations. *p < 0.05, **p < 0.01, ***p < 0.001, N.S. = not significant. A detailed comparison between the envelope and the mTRF envelope can be found in Extended Data Figure 4-1. B, The result of the variance partitioning, showing the unique contributions and the shared explained variance. The letters correspond to Table 1, which displays the values of the variance partitioning analysis.

Figure 5.

A, Distributions of prediction accuracies for a subset of features and feature combinations. Each dot represents one participant's correlation score, averaged over conditions and channels. These distributions are the results collapsed over the proportion of explainable data presented in B. Results of the Wilcoxon signed-rank test are shown at *p < 0.05, **p < 0.01, ***p < 0.001, N.S. = not significant. B, The prediction distributions share the y-axis of A. Additionally, we computed the proportion of data that is explainable using the different features. For continuous features such as the envelope or the mel-spectrogram, the variance was added for visual purposes.

Figure 6.

The left panel shows the results of the simulation study. The x-axis depicts the simulated SNR levels in dB; the y-axis shows the correlation values. The two graphs show the mean correlation as a function of SNR for either the whole segment (blue) or only the samples where data was predicted (red). The vertical lines show the estimated SNR levels for the onset, beep, irrelevant, and alarm tones, respectively. The right panel shows the prediction scores of the actual data.
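The mapping from a simulated SNR level to an expected correlation, as in the left panel, can be sketched as below. This collapses the figure's whole-segment versus predicted-samples distinction into a single correlation, and all parameters are illustrative rather than the study's exact setup:

```python
# Illustrative SNR-to-correlation simulation: a noisy channel is built from
# a unit-variance "predicted response" at a given SNR in dB, and accuracy
# is the correlation between the noise-free and noisy versions.
import numpy as np

def simulate_correlation(snr_db, n=20000, seed=0):
    rng = np.random.default_rng(seed)
    signal = rng.standard_normal(n)          # stands in for the model prediction
    noise_sd = 10.0 ** (-snr_db / 20.0)      # unit-variance signal -> SNR in dB
    noisy = signal + noise_sd * rng.standard_normal(n)
    return np.corrcoef(signal, noisy)[0, 1]

r_0db = simulate_correlation(0.0)            # theory predicts ~ 1/sqrt(2)
```

In expectation the correlation is 1/sqrt(1 + sigma_noise^2), which produces the monotonic SNR-versus-correlation curves of the kind shown in the left panel.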

Tables

Table 1.

Variance explained for the acoustic (AC) and sound identity (SI) features

                    r²      r²unq   A∩B     A∩C     B∩C     A∩B∩C
    SI  Beep (A)    0.0009  0.0007  0.0003  0.0002  0.0001  0.0003
        Irr (B)     0.0028  0.0027
        Alarm (C)   0.0041  0.0041
    AC  Mel (A)     0.0072  0.0031  0.0023  0.0039  0.0020  0.0020
        Ons (B)     0.0023  0
        mEnv (C)    0.0052  0.0012
    • Specifically, the first two columns display the total (r²) and unique (r²unq) variance explained by each feature. The last four columns give the shared variance (∩), where A refers to the first, B to the second, and C to the third feature within each group (SI and AC, respectively; Fig. 4).
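The unique and shared components of the kind summarized in Table 1 follow from the r² of the full model and of all reduced models by inclusion-exclusion. A minimal sketch, using placeholder r² values rather than the study's numbers:

```python
# Three-way variance partitioning by inclusion-exclusion. The r^2 values
# below are synthetic placeholders, not the values from Table 1.
def partition_three(r2):
    """r2 maps frozensets of feature names to that model's explained
    variance, e.g. r2[frozenset('AB')] for the model using A and B.
    Returns the unique components and all shared intersections."""
    A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
    full = r2[A | B | C]
    return {
        "A": full - r2[B | C],                       # unique to A
        "B": full - r2[A | C],                       # unique to B
        "C": full - r2[A | B],                       # unique to C
        "A∩B": r2[A | C] + r2[B | C] - full - r2[C],
        "A∩C": r2[A | B] + r2[B | C] - full - r2[B],
        "B∩C": r2[A | B] + r2[A | C] - full - r2[A],
        "A∩B∩C": (r2[A] + r2[B] + r2[C]
                  - r2[A | B] - r2[A | C] - r2[B | C] + full),
    }

# Consistency check: build model r^2 values from known variance regions,
# then verify the partition recovers exactly those regions.
regions = {"A": 0.003, "B": 0.002, "C": 0.001,
           "A∩B": 0.0005, "A∩C": 0.0004, "B∩C": 0.0003, "A∩B∩C": 0.0002}

def model_r2(feats):
    # A model explains every region involving at least one of its features.
    return sum(v for k, v in regions.items() if set(k.split("∩")) & set(feats))

r2 = {frozenset(s): model_r2(s) for s in ("A", "B", "C", "AB", "AC", "BC", "ABC")}
parts = partition_three(r2)
```

The same bookkeeping, applied per group (SI: beep/irrelevant/alarm; AC: mel/onsets/envelope), yields a table with the structure of Table 1.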

    Table 2.

Correlational values for the different cross-prediction pairs

                          base
                  AC                      SI
    Cross         onset   mTRF envelope   alarm   irrelevant   beep
    onset         1       0.76            0.72    0.81         0.49
    mTRF envelope 0.79    1               0.70    0.69         0.38
    alarm         0.69    0.28            1       0.85         0.51
    irrelevant    0.72    0.44            0.81    1            0.63
    beep          0.48    0.18            0.63    0.72         1
    • The table is read from row label (left) to column label (top). The feature labels on the left denote the models trained on those features; the labels at the top denote the feature information used, together with the model on the left, to predict the segments. A visualization of the results can be found in Extended Data Tables 2-1 and 2-2. All correlations are significant at p < 0.01.

Extended Data

  • Extended Data 1

    Download Extended Data 1, ZIP file.

  • Figure 3-1

The upper panel shows the TRF weights of the alarm tone for the two conditions. The grey shaded area marks the window of interest as reported by Rosenkranz et al. (2023). The black lines are the clusters detected by the permutation testing, with the corresponding topographies. The lower panel shows the same for the beep tone. Download Figure 3-1, TIF file.

  • Figure 4-1

The left panel shows the distribution of the prediction accuracy for the envelope and mTRF envelope models. Significance is tested at *p < 0.05, **p < 0.01, ***p < 0.001. The middle panel shows the model weights averaged over participants, conditions, and channels. The right panel highlights the frequency decomposition of the difference curve between the mTRF envelope and the envelope. Download Figure 4-1, TIF file.

  • Table 2-1

This figure shows the results of the cross-prediction analysis. On the x-axis are the correlational scores of the testing data segment with the prediction based on the feature information that the model was initially trained on. On the y-axis are the correlational scores for the same segment and feature information as on the x-axis, but using model weights derived from the depicted feature. Download Table 2-1, TIF file.

  • Table 2-2

    This figure shows the results of the cross-prediction analysis for the sound identity marker. On the x-axis are the correlational scores of the testing data segment with the prediction based on feature information that the model was initially trained on. On the y-axis are the correlational scores for the same segment and feature information as on the x-axis, but using model weights derived from the depicted feature. Download Table 2-2, TIF file.


Keywords

  • acoustic representations
  • electroencephalography (EEG)
  • naturalistic sound perception
  • neural encoding
