
Research Article: New Research, Novel Tools and Methods

Improving Tracking of Selective Attention in Hearing Aid Users: The Role of Noise Reduction and Nonlinearity Compensation

Johanna Wilroth1, Emina Alickovic1,2, Martin A. Skoglund1,2, Carine Signoret3, Jerker Rönnberg3 and Martin Enqvist1
eNeuro 29 January 2025, 12 (2) ENEURO.0275-24.2025; https://doi.org/10.1523/ENEURO.0275-24.2025
1Automatic Control, Department of Electrical Engineering, Linköping University, Linköping 581 83, Sweden
2Eriksholm Research Centre, Snekkersten DK-3070, Denmark
3Disability Research Division, Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping 581 83, Sweden

Visual Abstract

Abstract

Hearing impairment (HI) disrupts social interaction by hindering the ability to follow conversations in noisy environments. While hearing aids (HAs) with noise reduction (NR) partially address this, the “cocktail-party problem” persists: individuals struggle to attend to specific voices amidst background noise. This study investigated how NR and an advanced signal processing method for compensating for nonlinearities in electroencephalography (EEG) signals can improve neural speech processing in HI listeners. Participants wore HAs with NR either activated or deactivated while focusing on target speech amidst competing masker speech and background noise. Analysis focused on temporal response functions to assess neural tracking of relevant target and masker speech. Results revealed enhanced neural responses (N1 and P2) to target speech, particularly in frontal and central scalp regions, when NR was activated. Additionally, a novel method compensated for nonlinearities in the EEG data, improving the signal-to-noise ratio (SNR) and potentially revealing more precise neural tracking of relevant speech; this effect was most prominent in the left-frontal scalp region. Importantly, NR activation significantly improved the effectiveness of this method, leading to stronger responses and reduced variance in the EEG data. This study provides valuable insights into the neural mechanisms underlying NR benefits and introduces a promising EEG analysis approach sensitive to NR effects, paving the way for potential improvements in HAs.

  • EEG
  • hearing aids
  • noise
  • noise reduction algorithms
  • nonlinearity compensation
  • temporal response functions

Significance Statement

Understanding how hearing aids (HAs) with noise reduction (NR) improve selective auditory attention in noisy environments is crucial for future advancements. This study investigated the neural effects of NR in hearing-impaired listeners using electroencephalography (EEG). We observed significantly enhanced neural responses (N1 and P2 peaks) to target speech with NR activated, suggesting improved speech tracking in frontal and central scalp regions. An advanced signal processing method also compensated for nonlinearities in the EEG data, improving the signal-to-noise ratio (SNR) and revealing more precise neural tracking, particularly in the left-frontal scalp region. This study sheds light on the neural mechanisms behind NR benefits and introduces a promising EEG analysis method sensitive to NR effects, paving the way for optimizing future HAs.

Introduction

Electroencephalography (EEG)-based hearing research has grown rapidly in recent years. This focus is well-founded, considering that hearing loss ranks as the third most prevalent chronic health condition among the elderly population (Meneses-Barriviera et al., 2013). Studies have shown that hearing loss greatly affects social interactions and even contributes to depression (Boi et al., 2012; Cosh et al., 2019; Lawrence et al., 2019). While hearing aids (HAs) demonstrably improve the quality of life for hearing-impaired people (Cohen et al., 2004; Chisolm et al., 2007; Tsakiropoulou et al., 2007; Lotfi et al., 2009; Lacerda et al., 2012), challenges remain, especially in noisy listening environments. Modern HAs partially address this challenge by incorporating noise reduction (NR) algorithms that aim to attenuate unwanted background noise (Fiedler et al., 2021).

Another major challenge for HAs is the “cocktail party problem” (Cherry, 1953): the ability to selectively amplify the attended stimuli while suppressing unattended stimuli. Evidence suggests this ability deteriorates with hearing loss (Bronkhorst, 2000; Marrone et al., 2008; Mcdermott, 2009). Consequently, HA users often experience both a louder environment due to sound amplification, and increased difficulty distinguishing between attended and unattended speakers compared to those with normal hearing.

This study investigates two crucial factors affecting neural tracking of relevant speech: the influence of NR algorithms on enhancing attention to relevant speech and reducing interference from irrelevant environmental noise, and the impact of physiological noise in EEG data. We assessed these factors in participants with hearing impairment who were instructed to attend to one of two simultaneously presented speech streams (target) while ignoring the other (masker) and additional environmental noise. Environmental “noise” refers to unwanted background sounds that NR algorithms in HAs aim to diminish. However, the term “noise” also applies to the inherent limitations of EEG data. Here, physiological noise refers to electrical activity unrelated to auditory processes that contaminates the signal recorded by the EEG electrodes on the scalp. This type of noise results in a low signal-to-noise ratio (SNR) and requires specific processing techniques for effective analysis.

Neural tracking can be assessed using temporal response functions (TRFs), which are linear filters estimated at the sensor level in response to a stimulus such as continuous speech (Alickovic et al., 2019; Di Liberto et al., 2020; Brodbeck et al., 2023). Unlike event-related potentials (ERPs), TRFs are not time-locked to specific events. Yet, their peaks align with similar stages of speech processing observed in ERPs, revealing how different aspects of the stimulus drive neural activity over time (Crosse et al., 2016). Responses to an attended stimulus typically exhibit three main components: P1 around 50–70 ms (related to early detection of auditory stimuli), N1 around 80–120 ms, and P2 around 150–275 ms (higher-order auditory processing, modulated by attention) (Key et al., 2005; Lightfoot, 2016; Brodbeck and Simon, 2020; Smith and Ferguson, 2024).

Prior research has used backward TRF models to evaluate the effect of different NR algorithms on neural tracking of relevant speech (Alickovic et al., 2020, 2021; Andersen et al., 2021). These studies found that active NR enhanced the neural representation of relevant speech while suppressing the representation of background noise. However, such models are limited by their anti-causal nature, making it difficult to interpret their results in terms of temporal and spatial dynamics. To overcome this limitation, the first contribution of this study is to use a forward TRF model to predict EEG activity from the speech stimuli, an approach that has not previously been investigated using data collected under different NR settings. The effectiveness of the NR algorithms is evaluated using established metrics: amplitude and latency of the N1 and P2 peaks (reflecting speech processing) and TRF variance (consistency of EEG responses).

The second contribution of this study involves investigating noisy EEG data using the binning-based nonlinearity detection and compensation method presented in Wilroth et al. (2024). Applied to EEG data from 30 participants (Andersen et al., 2021), our results replicated the compensation patterns observed previously in a one-subject analysis (Wilroth et al., 2024), particularly in the left-frontal scalp region. Finally, an SNR analysis of the EEG data showed a significant improvement after the nonlinearity compensation.

Materials and Methods

Experimental dataset

EEG data were recorded from 32 subjects (M = 19) with mild-to-moderate hearing loss using a 64-channel BioSemi ActiveTwo system at a sampling rate of 1024 Hz. Due to incomplete data, two subjects were excluded from our analysis. The experiment took place in a soundproof room. The subjects sat facing two loudspeakers positioned in front of them (at ±30°), playing different Danish news clips of neutral content. In each trial, the subject was instructed to focus on one loudspeaker and ignore the other. Four additional loudspeakers were located behind the subject, playing background noise consisting of 16-talker babble at an SNR of +3 dB. Each trial started with 5 s of background noise, followed by 33 s of news clips and concluded with a two-choice question regarding the content of the attended speech. Subjects performed 40 trials for each hearing aid condition, with NR either activated (NRon) or deactivated (NRoff), leading to a total of 80 trials per subject. This dataset has been used for various analyses, and a complete description of the experimental setup can be found in Andersen et al. (2021). The study was reviewed and approved by the Swedish Ethics Review Authority (DNR: 2022-05129-01).

The two speech streams (target, masker) and the two hearing aid noise reduction settings (on, off) result in four different analysis conditions. These will henceforth be denoted “T-NRon,” “M-NRon,” “T-NRoff,” and “M-NRoff.”

Data preprocessing

To preprocess the speech streams, we first extracted the envelopes as the absolute value of the Hilbert transform. We then applied a 6th-order Butterworth filter for band-pass filtering between 1 and 8 Hz, followed by downsampling to 100 Hz. To mitigate edge effects, we removed the first and last second of the data. A delay of 49.1 ms was observed between the actual stimuli presented over the audio signal path (RME stimulus soundcard → ProTools → Dante system → speaker 2.2 m from microphone → DPA microphone → RME recording soundcard) and the event markers fed directly into the EEG recordings (RME stimulus soundcard → RME recording soundcard). This delay was accounted for in the analysis.
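
As an illustration, this envelope pipeline can be sketched in a few lines of Python with SciPy (a minimal sketch; the function and parameter choices below are ours, not the authors' released code):

    import numpy as np
    from scipy.signal import butter, hilbert, resample_poly, sosfiltfilt

    def speech_envelope(audio, fs_in, fs_out=100):
        """Band-limited speech envelope (sketch of the steps described above)."""
        env = np.abs(hilbert(audio))                  # envelope via Hilbert transform
        # butter(3, ...) yields a 6th-order band-pass in SciPy's convention
        sos = butter(3, (1.0, 8.0), btype="bandpass", fs=fs_in, output="sos")
        env = sosfiltfilt(sos, env)                   # zero-phase filtering
        env = resample_poly(env, fs_out, int(fs_in))  # downsample to 100 Hz
        return env[fs_out:-fs_out]                    # drop first/last second (edge effects)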

For EEG data preprocessing, we re-referenced the data to the two mastoid channels. Next, we filtered out the 50 Hz line noise and applied a 6th-order Butterworth filter for band-pass filtering at 0.5–70 Hz. Channels with artifacts (on average 19 per subject) were identified through manual inspection and compensated for by interpolating neighboring channels. Independent component analysis was performed to identify and remove artifacts such as eye blinks, heartbeats, and muscle movements (Oostenveld et al., 2011). On average, 17.2 components per subject were removed. Final preprocessing steps included band-pass filtering at 1–8 Hz using a 6th-order Butterworth filter, downsampling to 100 Hz, and matching the data to the speech envelopes. More detailed information can be found in Andersen et al. (2021) and Alickovic et al. (2020, 2021).
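
The published pipeline builds on FieldTrip (Oostenveld et al., 2011). Purely as an illustration, the main filtering steps could be approximated in Python with MNE (our choice of library; the file name, mastoid channel names, and component indices below are placeholders):

    import mne

    raw = mne.io.read_raw_bdf("subject01.bdf", preload=True)  # placeholder file name
    raw.set_eeg_reference(["M1", "M2"])   # re-reference to the two mastoid channels
    raw.notch_filter(50.0)                # suppress 50 Hz line noise
    raw.filter(l_freq=0.5, h_freq=70.0)   # broad band-pass before artifact removal
    ica = mne.preprocessing.ICA(n_components=40).fit(raw)
    ica.exclude = [0, 3]                  # placeholder: blink/heartbeat/muscle components
    ica.apply(raw)
    raw.filter(l_freq=1.0, h_freq=8.0)    # final band of interest
    raw.resample(100)                     # match the 100 Hz speech envelopes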

Workflow overview

The workflow used in this article, presented in Figure 1, is divided into six steps:

Figure 1.

Workflow overview. The steps are as follows: (1) Calculate the SNR of the original (measured) EEG data Y_orig. (2) Estimate the TRFs β_orig from the original EEG data Y_orig and the speech envelopes X_T (target) and X_M (masker). (3) Compute the predicted EEG data Ŷ_orig from the TRFs β_orig and the speech envelopes X_T and X_M. (4) Compute the compensated EEG data Y_comp from Y_orig and Ŷ_orig with the nonlinearity detection and compensation method. (5) Compute the SNR of the compensated EEG data Y_comp. (6) Estimate the TRFs β_comp from the compensated EEG data Y_comp and the speech envelopes X_T and X_M.

  1. Compute the SNR of the measured EEG data Y_orig.

  2. Estimate the TRFs β_orig from the original EEG data Y_orig and the speech envelopes X_T (target) and X_M (masker).

  3. Compute the predicted EEG data Ŷ_orig from the TRFs β_orig and the speech envelopes X_T and X_M.

  4. Compute the compensated EEG data Y_comp from Y_orig and Ŷ_orig with the nonlinearity detection and compensation method.

  5. Compute the SNR of the compensated EEG data Y_comp.

  6. Estimate the TRFs β_comp from the compensated EEG data Y_comp and the speech envelopes X_T and X_M.

All notation is collected in Table 1.
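
In code form, the workflow amounts to the following outline (the helper names are hypothetical stand-ins for the procedures detailed in the remainder of this section):

    # Hypothetical helpers standing in for the six steps above.
    snr_orig = compute_snr(Y_orig)                       # step 1
    beta_orig = estimate_trfs(Y_orig, X_T, X_M)          # step 2: boosting
    Y_pred = predict_eeg(beta_orig, X_T, X_M)            # step 3: Equation 2
    Y_comp = compensate_nonlinearities(Y_orig, Y_pred)   # step 4: binning method
    snr_comp = compute_snr(Y_comp)                       # step 5
    beta_comp = estimate_trfs(Y_comp, X_T, X_M)          # step 6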

Table 1. Table of Notations

Temporal response functions (TRFs)

A forward linear model, also known as an encoding model or a TRF model (Alickovic et al., 2019), is used to predict the EEG response to a specific speech stimulus. The model estimates the relationship between a known stimulus input x(t) (e.g., the speech envelope) and a measured output y(t, n) (e.g., EEG data) at each time point t (from 1 to T) and for each channel n (from 1 to N). The TRF model is given by

    y(t, n) = ∑_τ β(τ, n) x(t − τ) + ϵ(t, n),    (1)

where β(τ, n) is the channel-specific TRF across a range of time lags τ and ϵ(t, n) is the channel-specific residual response not accounted for by the model. The number of time lags K, the lag range τ = [−100, 400] ms, and the sampling frequency f_s are chosen to include the N1 and P2 components.

A common approach to estimating the linear filters is a coordinate descent technique referred to as the boosting algorithm (David et al., 2007; Brodbeck et al., 2023). This sparse estimation method starts by initializing the TRF at each channel with zeros. During each iteration, small fixed values are incrementally added until the mean square error (MSE) no longer decreases. The incremental additions to the TRF are drawn from a dictionary of basis elements, such as Hamming windows (Kulasingham and Simon, 2023). We use the boosting algorithm implemented in the Python toolkit Eelbrain (Brodbeck et al., 2023) to estimate the TRFs. The predicted EEG ŷ(t, n) for channel n is then given by

    ŷ(t, n) = ∑_τ β(τ, n) x(t − τ),    (2)

and can be computed using the convolve function available in the Eelbrain toolkit (Brodbeck et al., 2023).
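
In Eelbrain, estimation and prediction along these lines might look as follows (a sketch assuming eeg and envelope are NDVar objects sampled at 100 Hz; the basis-window width is our choice, not a value reported by the authors):

    import eelbrain

    res = eelbrain.boosting(eeg, envelope, tstart=-0.1, tstop=0.4, basis=0.05)
    trf = res.h                                  # TRF over lags -100 to 400 ms
    eeg_pred = eelbrain.convolve(trf, envelope)  # predicted EEG as in Equation 2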

The TRF model in Equation 1 can analogously be parameterized and solved for all channels simultaneously using matrix notation:

    Y = βX + E,    (3)

where Y ∈ R^(N×T), β ∈ R^(N×K), X ∈ R^(K×T), and E ∈ R^(N×T).

Binning-based nonlinear compensation

Figure 2 illustrates our binning-based nonlinear compensation method. We first compute TRFs using the boosting algorithm with the measured EEG data Y_orig and the speech envelopes X. The predicted EEG data Ŷ_orig are then computed using Equation 2, which takes the TRF model and the speech envelopes as input. The top left subplot of Figure 2 shows all EEG samples at time t for subject s, trial i, and channel n, in the coordinate system (ŷ(t, n), y(t, n)). Here, the 20 smallest and largest predicted samples are considered outliers and excluded from further analysis. The remaining samples are then divided into three equally spaced bins. Next, the average of the measured EEG in each bin is calculated. A line is then fitted between the average values in the two outer bins, as shown in the top right subplot of Figure 2. The nonlinear compensation is applied to all samples within the middle bin (bottom left) such that the new average of the middle bin falls on the fitted line (bottom right). This process is repeated for all channels, trials, and subjects, resulting in a compensated EEG dataset denoted Y_comp. A more detailed explanation of the algorithm can be found in Wilroth et al. (2024).
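
The following NumPy sketch re-implements the described steps for a single channel and trial (our illustrative reconstruction, not the authors' released code; their scripts are linked in the Code accessibility section below):

    import numpy as np

    def compensate_channel(y_pred, y_meas, n_out=20, n_bins=3):
        """Binning-based nonlinearity compensation for one channel/trial (sketch)."""
        keep = np.argsort(y_pred)[n_out:-n_out]       # drop 20 smallest/largest predictions
        edges = np.linspace(y_pred[keep].min(), y_pred[keep].max(), n_bins + 1)
        bins = np.clip(np.digitize(y_pred[keep], edges) - 1, 0, n_bins - 1)
        mx = [y_pred[keep][bins == b].mean() for b in range(n_bins)]  # bin means, predicted
        my = [y_meas[keep][bins == b].mean() for b in range(n_bins)]  # bin means, measured
        slope = (my[-1] - my[0]) / (mx[-1] - mx[0])   # line through the outer-bin averages
        target = my[0] + slope * (mx[1] - mx[0])      # where the middle-bin mean should fall
        y_comp = y_meas.astype(float)
        y_comp[keep[bins == 1]] += target - my[1]     # shift middle-bin samples onto the line
        return y_comp

In this sketch, the residual my[1] − target corresponds to the middle-bin compensation that serves as the nonlinearity measure in the next subsection.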

Figure 2.

Illustration of our binning-based nonlinearity detection and compensation method applied to channel Cz. Top-left: Each blue dot represents one EEG sample at time t in the coordinate system (predicted EEG, measured EEG) = (ŷ(t, n), y(t, n)). The 20 smallest and largest predicted samples are considered outliers and excluded from further analysis. The remaining samples are then divided into three equally spaced bins. Top-right: The average value in each bin is computed and a line is fitted between the two values in the outer bins. Bottom-left: All samples in the middle bin (red) are shifted such that the new middle-bin average falls on the fitted line. Bottom-right: This adjustment results in a more accurate representation of the relationship between the predicted and measured EEG data.

Nonlinearity analysis

The difference between the original and the compensated mean values in the middle bin (Fig. 2, bottom right) represents the residual resulting from the assumed linear trend. Figure 2 illustrates a scenario where the original mean is higher than the compensated mean. This is categorized as a positive, or concave, nonlinearity. Conversely, a scenario where the original mean is lower than the compensated mean signifies a negative, or convex, nonlinearity. These residuals serve as a measure of the nonlinearities detected in the EEG data. In this study, we focus on the magnitude of the nonlinearities, leaving the exploration of specific concave and convex patterns of the nonlinearities for future work.

We obtained a channel-specific measure of the nonlinearity by averaging across all subjects and trials for each of the four experimental conditions. Visualization through topoplots revealed interesting differences between the four conditions. We conducted paired t-tests to determine statistically significant differences in nonlinearity magnitude between corresponding channels across the four conditions; here, the paired observations are the differences in nonlinearity values between corresponding channels. To account for multiple comparisons and reduce the risk of false positives, we applied the Bonferroni correction (see, e.g., Haynes, 2013), multiplying each p-value by the total number of channels N. Channels with corrected p-values lower than the significance level (α = 0.05) are considered to have statistically significant differences in the magnitude of nonlinearity. The significance level α = 0.05 is used in all our statistical tests, unless otherwise stated.
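
As a sketch with SciPy (the array shapes are our assumption: one row per channel, one column per paired observation):

    import numpy as np
    from scipy.stats import ttest_rel

    # nl_a, nl_b: nonlinearity magnitudes for two conditions, shape (n_channels, n_obs)
    t_vals, p_vals = ttest_rel(nl_a, nl_b, axis=1)     # channel-wise paired t-tests
    p_corr = np.minimum(p_vals * nl_a.shape[0], 1.0)   # Bonferroni: multiply by N channels
    significant = p_corr < 0.05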

Evaluation methods

While the grand average TRF, computed for each condition by averaging the TRFs across all subjects, trials, and channels, is a common approach, it can mask important spatial information by neglecting the contribution of individual channels. To address this, we also analyzed TRFs for six anatomically distinct channel clusters based on their location on the scalp (illustrated in Fig. 3): “left temporal,” “frontal,” “right temporal,” “central,” “parietal,” and “occipital.” Most analyses in this study involve averaging the results across channels within each group.

Figure 3.

Illustration of the EEG channels included in each of the six channel groups denoted: left temporal, frontal, right temporal, central, parietal and occipital.

Two evaluation methods are employed to assess the performance of our model and the impact of NR: SNR analysis (see SNR analysis section) and TRF analysis, which includes the N1 and P2 components, variance, and statistical significance compared to a noise TRF (see TRF analysis section).

SNR analysis

EEG data are inherently noisy, and improving the SNR could potentially lead to more accurate models. Therefore, we analyzed the difference in SNR between the compensated and original EEG data. A positive difference indicates an improvement in SNR after applying the compensation method. To calculate the SNR difference, we first computed the noise level in the EEG data: a noise vector of the same size as the EEG data was initialized, squared element-wise, and averaged over the number of samples, yielding the noise power κ that appears in Equation 4. Similarly, the signal power for both the original and the compensated EEG data was calculated by taking the absolute values of the original (Y_orig) and compensated (Y_comp) EEG signals, squaring them element-wise, and averaging. Finally, the difference SNR_diff was calculated as

    SNR_diff = 10 log10(P_comp / κ) − 10 log10(P_orig / κ),    (4)

where P_orig and P_comp denote the signal powers of Y_orig and Y_comp, respectively.
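
A direct transcription of Equation 4 (a sketch; the noise vector is constructed as described above):

    import numpy as np

    def snr_diff_db(y_orig, y_comp, noise):
        """SNR improvement in dB after compensation (Equation 4, sketch)."""
        kappa = np.mean(noise ** 2)              # noise power
        p_orig = np.mean(np.abs(y_orig) ** 2)    # signal power of the original EEG
        p_comp = np.mean(np.abs(y_comp) ** 2)    # signal power of the compensated EEG
        return 10 * np.log10(p_comp / kappa) - 10 * np.log10(p_orig / kappa)

Note that the noise power κ cancels in the difference, so SNR_diff reduces to 10 log10(P_comp / P_orig).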

TRF analysis

We analyzed the TRFs averaged across 30 subjects and all trials for each of the four conditions (T-NRon, M-NRon, T-NRoff and M-NRoff). First, we compared the grand average TRFs. Then, we evaluated the averaged TRFs within each channel group, as defined in Figure 3. The evaluation focused on three aspects of TRFs:

  1. Amplitude and latency of the N1 and P2 peaks: A larger amplitude indicates a stronger neural response, while a shorter latency suggests a faster response. Typically, the N1 peak occurs between 80 and 120 ms and the P2 peak between 150 and 275 ms after stimulus onset (Key et al., 2005; Smith and Ferguson, 2024). A small sketch of the peak extraction follows this list.

  2. Variance: Ideally, the variance of the TRFs should be low, indicating a consistent response across the trials. We compared the variance of the original and the compensated TRFs for each channel group to assess the impact of compensation on response consistency.

  3. Comparison to noise level: To ensure the observed effects were not due to noise, we compared each of the four conditions with a noise TRF. The TRF_noise was computed by mismatching the target speech trials with the EEG trials and was evaluated with the permutation test described in the following section.
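
The peak extraction mentioned in item 1 can be sketched as follows (search windows as used in Fig. 6; the function and variable names are ours):

    import numpy as np

    def peak_measures(trf, times):
        """N1/P2 amplitude and latency from a single TRF (sketch)."""
        n1 = (times >= 0.08) & (times <= 0.12)   # N1 search window
        p2 = (times >= 0.18) & (times <= 0.25)   # P2 search window
        i_n1 = np.argmin(trf[n1])                # most negative deflection
        i_p2 = np.argmax(trf[p2])                # most positive deflection
        return (trf[n1][i_n1], times[n1][i_n1],  # N1 amplitude, latency
                trf[p2][i_p2], times[p2][i_p2])  # P2 amplitude, latency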

Permutation test based on cluster statistics

Permutation tests, implemented in the FieldTrip toolbox (Oostenveld et al., 2011; Petersen et al., 2016; FieldTrip, 2024), were conducted between the TRF_noise condition and each of the four conditions (T-NRon, M-NRon, T-NRoff, and M-NRoff). This analysis was performed on both the grand average TRFs and the TRFs from each of the six channel groups. It was performed as follows (a rough Python analogue is sketched after the list):

  1. For each time lag and channel, dependent-samples t-tests were carried out between TRF_noise and each of the four conditions. This resulted in a matrix of t-values.

  2. Next, adjacent time samples and neighboring channels whose p-values (based on the obtained t-values) were less than 0.05 were grouped together to form clusters. In the case of grand average TRFs, all channels were used in the dependent-samples t-tests, and at least three neighboring channels were required to form a cluster. For the channel-group analysis, the channels within each group were used, and at least one neighboring channel was needed to form a cluster.

  3. The sum of the single-sample t-values within each cluster was computed and compared to the sum of t-values from permuted clusters. The permuted clusters were generated by randomly assigning time-electrode samples to one of the two compared conditions in 1,000 iterations (Maris and Oostenveld, 2007).

  4. To be considered significant, a cluster's summed t-values must exceed the 95th percentile of the permutation distribution. This corresponds to a one-sided p-value < 0.05.
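
The analysis was run in FieldTrip (MATLAB). Purely as an illustration, a comparable paired spatio-temporal cluster test in Python could use MNE (our choice of library; adjacency encodes the neighboring-channel structure):

    import mne

    # trf_cond, trf_noise: arrays of shape (n_subjects, n_times, n_channels)
    t_obs, clusters, cluster_pv, H0 = mne.stats.spatio_temporal_cluster_1samp_test(
        trf_cond - trf_noise,   # paired design: condition-minus-noise differences
        n_permutations=1000,
        tail=1,                 # one-sided test, as in step 4
        adjacency=adjacency)    # channel neighborhood structure
    significant_clusters = cluster_pv < 0.05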

Behavioral performances

The responses to the two-choice questions presented to subjects after each trial reflect how well they followed the target speech. For these two-choice questions (c = 2), the theoretical chance level is 50%. The percentage of correct answers was computed both as a trial-averaged individual accuracy and as a grand average for each NR condition. The empirical chance level (statistically significant threshold) for a sample size of n = 30 subjects at α = 0.05 was calculated in MATLAB as St(α) = binoinv(1 − α, n, 1/c) × 100/n = 63.33%, as in previous studies (Combrisson and Jerbi, 2015; Alickovic et al., 2021). To assess the effect of NR on behavioral performance, a paired t-test across subjects was performed between the NRoff and NRon conditions.
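
In Python terms (scipy.stats.binom.ppf is the analogue of MATLAB's binoinv; acc_on and acc_off stand for hypothetical per-subject accuracy arrays):

    from scipy import stats

    n, c, alpha = 30, 2, 0.05
    chance = stats.binom.ppf(1 - alpha, n, 1 / c) * 100 / n   # ≈ 63.33%

    t_val, p_val = stats.ttest_rel(acc_on, acc_off)           # paired t-test across subjects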

Code accessibility and data availability

There are ethical restrictions on sharing the dataset. The consent given by participants at the outset of this study did not explicitly cover sharing of the data in any format; this limitation is in keeping with the EU General Data Protection Regulation and is imposed by the Research Ethics Committees of the Capital Region in the country where the data were collected. Because of this regulation, and because the data were collected from a small number of participants and cannot be fully anonymized, we are unable to share the dataset. Data requests can be sent to a non-author contact point: Claus Nielsen, clni{at}eriksholm.com.


The TRF data and code to replicate Figures 4–10 in this study, and the script to run the binning-based nonlinearity compensation method, are available in a GitHub repository (https://github.com/JohannaWil/NonlinearCompensationEEG) and as Extended Data.

Figure 4.

Top: Grand average TRFs across 30 subjects and 64 EEG channels for target and masker envelopes with NR algorithms either switched on or off. The variance of the TRFs is presented as a lighter shade of the corresponding color. Horizontal lines indicate time intervals where the TRF from each condition significantly differs from TRF_noise, obtained from the cluster-based permutation test from latency t = 0 s (p < 0.05). Bottom: Topoplots from T-NRon for the N1 peak at t = 0.127 s and the P2 peak at t = 0.22 s. EEG channels marked with an asterisk (*) show statistically significant differences compared to TRF_noise.

Figure 5.

Grand average TRFs across 30 participants and the channel groups (see Fig. 3). The noise reduction (NR) algorithms were either switched on or off. Horizontal lines indicate a significant difference between the TRF from each condition and TRF_noise, obtained from the cluster-based permutation test from latency t = 0 s (p < α). A, Grand average TRFs for the four conditions T-NRon, M-NRon, T-NRoff, and M-NRoff. The N1 and P2 peaks are visible for the target speech in all channel groups, with the largest amplitudes observed in the frontal and central EEG channels. B, Grand average TRFs for the masker speech envelope for NRon and NRoff, along with TRF_noise. M-NRon shows more significant parts of the TRF compared to M-NRoff, indicating that NR affects the processing of masker speech.

Figure 6.

Analysis of the N1 (most negative deflection within t = [0.08, 0.12] s) and P2 (most positive deflection within t = [0.18, 0.25] s) peaks obtained from the TRFs presented in Figure 5A. The four conditions (T-NRon, M-NRon, T-NRoff, M-NRoff) across six scalp regions are evaluated. Top: The amplitude of the peaks. Bottom: The latency of the peaks.

Figure 7.

Measure of nonlinearity obtained from the middle-bin residuals in the nonlinearity detection and compensation method. A, Average residuals (compensation) [μV] over 30 participants for target and masker envelopes with NRon (top) and NRoff (bottom). The largest compensation for each condition was obtained with the target speech, and NRon acquired a larger compensation compared to NRoff. B, t-tests for statistical significance (p < α) between the combinations T-NRon-M-NRon (difference between target and masker), T-NRon-T-NRoff, and M-NRon-M-NRoff (difference between NRon and NRoff). Yellow channels (value 1): statistically significant; blue channels (value 0): not statistically significant.

Figure 8.

TRF variance for target and masker speech envelopes. Solid lines represent original EEG data, and dashed lines represent compensated EEG data. Variance amplitudes differ between target and masker speech streams, as indicated on the y-axis. A, TRF variance for the target envelope. The largest variances align with the N1 and P2 peaks around 120 ms and 220 ms, respectively, where a large peak amplitude (Fig. 6) results in a large peak variance. The compensated EEG data show reduced TRF variance (dashed lines) for both NRon and NRoff conditions across all channel groups. B, TRF variance for the masker envelope. The variance amplitude is lower compared to the target envelope, consistent with the TRF amplitudes in Figure 6. The largest variances are observed in the frontal and central channel groups at the P2 peak.

Figure 9.

SNR differences between the compensated and the original EEG data for the four conditions (T-NRon, M-NRon, T-NRoff, M-NRoff) across six channel groups (Fig. 3). A positive difference indicates an improvement in SNR after applying the nonlinearity compensation method. A Bonferroni corrected t-test revealed that most SNR differences (marked with a red cross) for T-NRon, M-NRon, and T-NRoff were statistically significant (p < α).

Figure 10.

Behavioral performance for the conditions NRoff and NRon. The black dot and error bar for each condition represent the grand average ± standard deviation over all subjects and all trials. A paired-sample t-test revealed a significant difference (p = 0.0003) between the two conditions, indicating that activating NR improves behavioral performance. Each gray dot represents the trial-averaged individual percentage of correct answers in the two-choice question. The diagonal lines connect the two points from the same subject under the two NR conditions, and the colors represent increased (green) or decreased (red) accuracy when NR was activated. The gray dashed line represents the threshold for statistical significance (63.33%).

Results

TRF analysis

The grand average TRFs for all participants (n = 30), trials, and channels, computed using the original EEG data Y_orig, are presented in Figure 4. The data are separated into four conditions based on target and masker speech, with NR either on or off, denoted T-NRon, M-NRon, T-NRoff, and M-NRoff. Horizontal lines mark significant differences (p < α) between the TRFs of each condition and a noise-based TRF (denoted TRF_noise), obtained through the cluster-based permutation test from latency t = 0 s. The significant time intervals include [0–0.17, 0.18–0.44] s (T-NRon), [0–0.08, 0.24–0.33, 0.36–0.49] s (M-NRon), [0–0.06, 0.08–0.16, 0.18–0.28, 0.29–0.42] s (T-NRoff), and [0.25–0.33] s (M-NRoff). The bottom part of the figure shows topoplots of the N1 peak at t = 0.12 s and the P2 peak at t = 0.22 s for the T-NRon condition. The channels marked with an asterisk (*) are statistically different from the corresponding TRF_noise channels (p < α), showing widespread activity across the scalp.

To investigate how TRFs vary across scalp regions, we analyzed TRFs averaged over six channel groups: frontal, left temporal, right temporal, central, parietal, and occipital (presented in Fig. 3). Figure 5A shows the average TRFs for each group. Horizontal lines indicate significant differences between the TRF from each condition and TRF_noise, obtained from the cluster-based permutation test from latency t = 0 s (p < α). Significant differences were observed for the target speech in the time intervals of [0–0.06, 0.09–0.16, 0.18–0.27, 0.33–0.38] s for the frontal and central groups, and in the intervals of [0.10–0.16, 0.19–0.26, 0.29–0.39] s for the parietal group. Notably, for T-NRon and T-NRoff, the intervals of [0.12–0.14] s and [0.19–0.25] s overlapped across all groups except occipital, highlighting consistent patterns in TRF variations in response to the conditions.

Figure 5B compares the masker speech TRFs for the two NR conditions against TRF_noise, generated by mismatching the trials for the EEG data with the trials for the target speech. The results show that the peaks for both NR conditions are larger compared to the noise level across all channel groups, with the NRon condition yielding larger responses than NRoff. In particular, the frontal and central channel groups show distinct N1 and P2 peaks above the noise level. Cluster-based permutation tests revealed more significant differences for M-NRon compared to M-NRoff.

We further analyzed the N1 (most negative deflection within t = [0.08, 0.12] s) and P2 (most positive deflection within t = [0.18, 0.25] s) peaks for each condition across different scalp regions (channel groups; Fig. 6). The top panels show peak amplitudes, with the largest target speech responses observed in the frontal and central regions, consistent with Figure 4. Both N1 and P2 amplitudes were reduced in the NRoff conditions, suggesting that the NR algorithms influence these responses.

The bottom panels of Figure 6 show N1 and P2 peak latencies. For N1 (bottom left), the latency is notably shorter in the left temporal region for the M-NRon and T-NRoff conditions compared to T-NRon (t-test, p = 0.0489). For P2 (bottom right), the latency is longer for target speech in the left and right temporal regions compared to other scalp regions (p = 0.0182). No significant latency differences were found between the two NR conditions for masker speech.

Nonlinearity compensation

The difference between the original (blue) and the compensated (red) mean values in the middle bin (Fig. 2, bottom right) represents the residual resulting from the assumed linear trend. The magnitude of this residual (compensation) can be viewed as a measure of the nonlinearities detected in the EEG data. The average nonlinearity compensations [μV] for 30 participants in the four conditions (target and masker speech with NR either on or off, denoted T-NRon, M-NRon, T-NRoff, and M-NRoff) are presented in Figure 7A. The largest compensation is observed in the left-frontal scalp region for the condition T-NRon (top left). In contrast, the M-NRon condition (top right) exhibits a smaller compensation without a clear dominance in the left-frontal area. The two bottom plots show the conditions with NRoff, where the magnitude of the averaged nonlinearity compensation is smaller compared to T-NRon (note that the scales for NRon and NRoff differ). However, the pattern for the target speech is similar for both NRon and NRoff, with the largest compensation located in the left region.

Three specific condition combinations deserve closer analysis: T-NRon-M-NRon (target vs masker), and T-NRon-T-NRoff and M-NRon-M-NRoff (comparing NRon and NRoff). Channel-specific t-tests were conducted for these combinations, with the Bonferroni correction applied to control for multiple comparisons and reduce the risk of false positives. The results are shown in Figure 7B, where yellow channels indicate statistically significant differences between the two evaluated conditions (p < α). A majority of channels showed statistically significant differences when comparing the T-NRon condition with both M-NRon (left, 53/64) and T-NRoff (middle, 34/64). However, fewer channels were statistically different when comparing M-NRon and M-NRoff (right, 29/64), primarily in the left scalp region.

Variance analysis

The variance of the TRFs for the T-NRon/T-NRoff (top) and M-NRon/M-NRoff (bottom) speech envelopes is displayed in Figure 8. The solid lines indicate the variance from the original EEG data, while the dashed lines indicate the variance from the compensated EEG data. The highest variances were obtained around the N1 and P2 peaks across all channel groups, with the frontal and central channel groups showing higher variance compared to the other groups. This indicates that larger peak amplitudes, as shown in Figure 6, correspond to larger variances. A similar trend appears when comparing target and masker speech, with the latter having smaller variance amplitudes.

SNR analysis

An SNR analysis of the four conditions (T-NRon, T-NRoff, M-NRon, M-NRoff) and six channel groups is presented in Figure 9. Each bar shows the difference between the SNR of the compensated EEG data (after nonlinearity compensation) and the SNR of the measured EEG data (before nonlinearity compensation). All conditions in all channel groups show a positive SNR difference, indicating an improvement in SNR after applying the nonlinearity compensation method. A red cross signifies that the improvement is statistically significant (p < α) under a Bonferroni-corrected t-test. Although large, especially for the left and right temporal channel groups, the SNR differences for M-NRoff were not statistically significant. Hence, there is a disparity when switching on the NR, since most channel groups for M-NRon resulted in statistically significant SNR differences. The left and right temporal channel groups also produced the largest SNR differences for the target speech in both NR scenarios. The larger improvements in these two channel groups for NRoff indicate that the nonlinearity compensation method might have a larger impact on the SNR when NR is off.

Impact of NR on behavioral performance

The percentage of correct answers on the two-choice questions, shown in Figure 10, demonstrates that the participants were able to focus on the target speech in both the NRon and NRoff conditions. The gray dashed line marks the statistically significant threshold, calculated to be St(α) = 63.33%. The grand average ± standard deviation (%) for each NR condition was 77.33 ± 1.89 (NRoff) and 84.42 ± 2.05 (NRon). A paired-sample t-test revealed a significant difference between the NRoff and NRon conditions, t(29) = −4.06, p = 0.0003 (two-tailed, α = 0.05), indicating that activating NR improved behavioral performance.

Discussion

Our results highlight two key findings. First, when NR is activated, the TRF analysis in Figure 5A showed stronger neural responses in the frontal and central scalp regions, suggesting improved encoding of target speech. This aligns with previous research on the effects of hearing aid signal processing on neural speech tracking, suggesting that NR improves the neural representation of target speech while reducing the representation of background noise during selective attention tasks (Alickovic et al., 2020, 2021; Andersen et al., 2021). Interestingly, these same scalp regions also exhibited enhanced responses to masker speech when NR was activated, consistent with Alickovic et al. (2020, 2021) and Andersen et al. (2021). This aligns with perceptual load theory (Lavie and Tsal, 1994; Lavie, 1995), which proposes that when attentional resources are plentiful (low load), some processing of irrelevant stimuli (distractors) such as masker speech can occur. This reasoning may also be supported by the notion of cognitive spare capacity: since processing the target requires fewer cognitive resources with NRon, some resources remain to process the masker (Rudner and Lunner, 2014). In this context, by reducing cognitive load, NR enables the brain to allocate more resources to process even unattended speech, thereby explaining the observed activity for masker speech. In other words, the stronger the masker representation the better, as long as the target representation remains the strongest. For higher cognitive load, on the other hand, a study found depressed brainstem activity in a visual, letter-based, n-back odd-ball listening task (Sörqvist et al., 2012). This kind of modulation was also revealed in a later study with the same experimental setup (Sörqvist et al., 2016), but now examining cortical activity. The authors found that, especially under high cognitive load, activity in primary auditory cortical areas was reduced. Thus, for cross-modal competition between target and masker, masker activity is reduced when load is high. This does not necessarily imply that within-modality competition follows the same pattern. Individual working memory capacity would presumably determine the parametric level of distraction at which working memory capacity per se enhances both target and masker processing, with high loads shielding the processing of the focal task and spending relatively less processing on the masker.

Second, the nonlinear compensation method introduced in this study indicates that NR and attention have an impact on the amount of nonlinear components in the EEG data. Our results yielded the largest compensation for T-NRon in the left-frontal region, indicating that nonlinearities were most prominent for electrodes in this specific scalp area. Notably, the NR and subsequent nonlinear compensation led to an overall improvement in the SNR of the EEG data.

TRF analysis

The grand average TRFs (Fig. 4) reveal three characteristic neural components (P1, N1, and P2), typically observed in response to auditory stimuli, with latencies corresponding to previous studies. In both target conditions (T-NRon and T-NRoff), the peaks appeared at the expected latencies, with larger amplitudes in the T-NRon condition, suggesting enhanced neural tracking of target speech when NR is activated (Alickovic et al., 2020, 2021; Andersen et al., 2021; Fiedler et al., 2021).

Our findings show that the target speech consistently elicited stronger responses than masker speech, aligning with previous research demonstrating that cortical responses to a mixture of speakers are predominantly driven by target speech (Mesgarani and Chang, 2012; Golumbic et al., 2013; O'Sullivan et al., 2015; Fuglsang et al., 2017; Brodbeck et al., 2020; Kurthen et al., 2021; Commuri et al., 2023; Orf et al., 2023; Carta et al., 2024). Across all groups (Fig. 5A), particularly with NR activated, responses to target speech were most pronounced in the frontal and central EEG channels. This aligns with previous studies reporting dominant frontal scalp activity for envelope TRFs (Crosse et al., 2016), centro-frontal enhancements (Fiedler et al., 2019; Carta et al., 2023, 2024), and enhancement in the central region during speech-in-noise tasks (Muncke et al., 2022).

Interestingly, masker speech exhibited a prominent positive component (P2 peak) in frontal EEG channels at a latency of around 180 ms for M-NRon, accompanied by a second negative deflection (N2 peak) around 280 ms, observed across all scalp regions except occipital (Fig. 5B). The N2 peak was observed in both NR conditions. This is consistent with prior EEG studies in competing-talker environments, which found that masker and neutral speech elicited smaller, earlier P2 peaks (around 180 ms) and N2 peaks (around 250 ms), while target speech elicited a stronger P2 peak around 200 ms (Orf et al., 2023). This antipolar relationship, where the masker speech shows an N2 deflection at latencies similar to the target P2 peak, was also observed in Fiedler et al. (2019). In their study, the EEG responses showed P1-N1-P2 components for target speech in the fronto-central region, while masker speech, especially at difficult SNRs, led to an additional N2 component. While both the target and masker speech streams were presented at the same sound pressure level in our study, the presence of 16-talker babble noise at an SNR of +3 dB created a challenging listening environment that impeded the perception of target speech, likely contributing to the observed N2 component (Fig. 5B). This is consistent with previous studies, such as Fiedler et al. (2019), which have shown that under challenging listening conditions, the brain actively suppresses these irrelevant sounds. However, the scalp distribution of this N2 component differs: Fiedler et al. (2019) reported a fronto-parietal distribution, while our findings are statistically significant also in the central, left temporal, and right temporal regions.

Furthermore, a recent study (Carta et al., 2024) investigated the neural encoding of phonological information (that is, phonological categories and phonetic boundaries) in HI listeners during a competing-talker scenario using an experimental setup similar to ours, demonstrating stronger neural responses to target speech compared to masker speech, consistent with our findings. However, that study also showed significant neural encoding of phoneme onsets for both target and masker speech, suggesting that HI individuals may be more susceptible to distraction by irrelevant sounds due to increased processing of masker speech details; their difficulty in focusing on a specific speaker may thus arise from an overly robust neural encoding of phonological details for both attended and ignored sounds. Given that normal-hearing individuals typically do not process the masker speech envelope at a linguistic level (Aydelott et al., 2015; Brodbeck et al., 2018), future studies should investigate how different NR algorithms and hearing aid settings influence the higher-order processing of masker speech (e.g., envelopes, phonemes, words). These findings can inform the development of objective measures of speech comprehension that can be used in clinical settings to assess hearing and in HAs to measure speech understanding.

In summary, the N1 and P2 peak amplitudes and latencies (Fig. 6) indicate that the amplitudes for both peaks were larger for the target speech than for the masker speech, consistent with previous studies (Mesgarani and Chang, 2012; Golumbic et al., 2013; O'Sullivan et al., 2015; Fuglsang et al., 2017; Brodbeck et al., 2020; Kurthen et al., 2021; Commuri et al., 2023; Orf et al., 2023). An amplitude difference was also observed between the two NR conditions, with larger responses for NRon than NRoff, indicating a positive effect of the noise reduction algorithm.

Nonlinearity compensation

The average nonlinearity compensations were assessed (Fig. 7A), revealing the largest compensation in the left-frontal scalp region, particularly in channels AF3, F1, and F3 during the T-NRon condition. This finding is noteworthy, as these channels are positioned over the left side of the prefrontal cortex (PFC), which is critical for executive functions such as working memory, attention control, and decision-making (Fuster, 2015). In contrast, the M-NRon condition exhibited a smaller compensation without a clear dominance in the left-frontal area. Similarly, the NRoff conditions exhibited smaller compensations compared to T-NRon, though T-NRoff also showed the largest compensation in the left region.

Statistical analysis (Fig. 7B) revealed significant differences in nonlinearity compensations between the target and masker speech in most channels, including the left frontal channels that could be associated with working memory functions (Meyer et al., 2015; Gehrig et al., 2019; Signoret et al., 2020). Comparing the two NR conditions for the target speech revealed a similar pattern of nonlinearity compensation, though with a more left-central distribution and smaller amplitudes in the T-NRoff condition. Notably, the channels F1 and F3, near the left PFC, did not show significant differences between the two NR conditions. For the masker speech, both conditions (M-NRon and M-NRoff) showed the greatest compensation in the left-frontal scalp region, with a statistically significant difference between the two conditions for channels AF3 and F1.

Interestingly, the largest compensation, indicating the greatest magnitude of nonlinearities in the EEG data, was observed for the T-NRon condition, where the sound clarity was most favorable for the attention task. This finding aligns with the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013, 2019), which posits that listeners allocate additional cognitive resources to process less clear signals. With the clearer target speech in the NRon condition (Fig. 4), attending to the target speech required less effort. In contrast, the NRoff and masker conditions involved more signal degradation, making it harder to match the incoming signal with the phonological representations in semantic long-term memory. This mismatch necessitates a postdictive mechanism in working memory that compensates for the degradation and reconstructs the degraded signal into a coherent representation for the listener. This process is slow and effortful, likely leading to increased fatigue (Pichora-Fuller et al., 2016; Rönnberg et al., 2019, 2022). This is reflected in the TRFs (Fig. 4) for the T-NRoff condition, which have a lower amplitude compared to T-NRon. Our results (Fig. 7A) also show that T-NRoff has a smaller and more centrally located compensation compared to T-NRon. Likewise, the masker speech for both NR conditions has smaller TRF amplitudes and nonlinear compensations.

The ELU model is further supported by the behavioral data (Fig. 10), which indicate how well the participants understood the target speech. Participants performed better when NR was enabled (NRon) than when it was disabled (NRoff), indicating that NR facilitated the processing of the target speech. When NR was disabled, participants likely faced greater challenges attending to the intended auditory information, which could have increased listening effort and fatigue. This interpretation aligns with the ELU model, indicating that NRon improves behavioral performance. Furthermore, existing research on the effectiveness of NR schemes in HAs supports these findings (Lunner et al., 2020), with studies suggesting that NR can positively influence the neural representation of speech stimuli (Alickovic et al., 2020, 2021; Andersen et al., 2021), improve memory for target speech (Ng et al., 2015), and reduce listening effort and fatigue (Wendt et al., 2017; Fiedler et al., 2021; Shahsavari Baboukani et al., 2022). Notably, the observed improvement in behavioral performance with NRon across most trials suggests the potential for using neural data to predict behavioral responses on a trial-by-trial basis. Although this was not investigated in the present study, it could be an interesting direction for future research.

Variance and SNR analysis

The analysis of EEG signals, characterized by weak amplitudes and inherent nonlinearity (Puthankattil et al., 2010; Crosse et al., 2016; Mesik and Wojtczak, 2023), revealed significant insights into the effectiveness of nonlinearity compensation methods. Our findings indicated that applying this method improved the signal-to-noise ratio (SNR) across conditions, as evidenced by lower variance in the compensated data (Fig. 8) and positive SNR differences (Fig. 9), particularly in the left and right temporal regions, which initially exhibited the lowest SNR. This improvement supports previous research aimed at optimizing or enhancing the SNR of EEG data (James et al., 1997; Hauk et al., 2002; Moran and Soriano, 2018). The M-NRoff condition did not show a statistically significant SNR enhancement, suggesting that the original EEG signals were too disrupted to benefit from compensation. This highlights the critical role of maintaining signal clarity for effective auditory processing and underscores the challenges inherent in analyzing EEG data under less-than-ideal conditions.

Concluding remarks

This study investigated two dimensions of noise: “environmental” noise related to background sounds and “physiological” noise referring to electrical activity not related to auditory processes that contaminates the signal recorded by EEG sensors on the scalp. We compared how different levels of environmental noise, manipulated through different NR settings in HAs (NRoff vs NRon), influenced a selective auditory attention task. TRF analysis showed larger N1 and P2 peaks when NR was activated, indicating that the NR algorithm effectively reduced environmental noise, with the most prominent effects observed in the frontal and central channel groups.

The physiological noise aspect was addressed using the nonlinearity detection and compensation method. This approach addressed some of the nonlinear components in the measured EEG data, enhancing the SNR and reducing variance. The improvements observed across most channel groups in the T-NRon, M-NRon, and T-NRoff conditions indicate that compensating for physiological noise can lead to clearer and more reliable EEG signals. The most evident compensation was concentrated in the left frontal scalp region, reflecting increased nonlinearity activity there. Furthermore, the observed variability in compensation between the two NR settings (NRoff vs NRon) underscores the importance of activating NR for optimizing auditory attention tasks in challenging listening environments. Overall, these findings help us understand how different types of noise affect auditory processing and point to ways to improve hearing aid technologies for real-world listening situations.

Footnotes

  • The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: The commercial affiliation of authors EA and MAS does not alter our adherence to eNeuro policies on sharing data and materials.

  • We would like to thank Oskar Keding and Professor Maria Sandsten at the Centre for Mathematical Sciences at Lunds Tekniska Högskola for their valuable feedback and discussions. This work received financial support from the Excellence Center at Linköping – Lund in Information Technology (ELLIIT), project Brain-Based Monitoring of Sound.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Alickovic E, Lunner T, Gustafsson F, Ljung L (2019) A tutorial on auditory attention identification methods. Front Neurosci 13:153. https://doi.org/10.3389/fnins.2019.00153 pmid:30941002
  2. Alickovic E, Lunner T, Wendt D, Fiedler L, Hietkamp R, Graversen C (2020) Neural representation enhanced for speech and reduced for background noise with a hearing aid noise reduction scheme during a selective attention task. Front Neurosci 14:846. https://doi.org/10.3389/fnins.2020.00846 pmid:33071722
  3. Alickovic E, Ng E, Ng N, Fiedler L, Santurette S, Innes-Brown H, Graversen C (2021) Effects of hearing aid noise reduction on early and late cortical representations of competing talkers in noise. Front Neurosci 15:636060. https://doi.org/10.3389/fnins.2021.636060 pmid:33841081
  4. Andersen A, Santurette S, Pedersen M, Alickovic E, Fiedler L, Jensen J, Behrens T (2021) Creating clarity in noisy environments by using deep learning in hearing aids. Semin Hear 42:260–281. https://doi.org/10.1055/s-0041-1735134 pmid:34594089
  5. Aydelott J, Jamaluddin Z, Pearce SN (2015) Semantic processing of unattended speech in dichotic listening. J Acoust Soc Am 138:964–975. https://doi.org/10.1121/1.4927410
  6. Boi R, Racca L, Cavallero A, Carpaneto V, Racca M, Dall’Acqua F, Ricchetti M, Santelli A, Odetti P (2012) Hearing loss and depressive symptoms in elderly patients. Geriatr Gerontol Int 12:440–445. https://doi.org/10.1111/j.1447-0594.2011.00789.x
  7. Brodbeck C, Das P, Gillis M, Kulasingham JP, Bhattasali S, Gaston P, Resnik P, Simon JZ (2023) Eelbrain, a Python toolkit for time-continuous analysis with temporal response functions. eLife 12:e85012. https://doi.org/10.7554/eLife.85012 pmid:38018501
  8. Brodbeck C, Hong LE, Simon JZ (2018) Rapid transformation from auditory to linguistic representations of continuous speech. Curr Biol 28:3976–3983. https://doi.org/10.1016/j.cub.2018.10.042 pmid:30503620
  9. Brodbeck C, Jiao A, Hong L, Simon J (2020) Neural speech restoration at the cocktail party: auditory cortex recovers masked speech of both attended and ignored speakers. PLoS Biol 18:e3000883. https://doi.org/10.1371/journal.pbio.3000883 pmid:33091003
  10. Brodbeck C, Simon J (2020) Continuous speech processing. Curr Opin Physiol 18:25–31. https://doi.org/10.1016/j.cophys.2020.07.014 pmid:33225119
  11. Bronkhorst A (2000) The cocktail party phenomenon: a review of research on speech intelligibility in multiple-talker conditions. Acta Acust United Acust 86:117–128.
  12. Carta S, Alickovic E, Zaar J, Valdés A, Di Liberto G (2024) Cortical encoding of phonetic onsets of both attended and ignored speech in hearing impaired individuals. PLoS One 19:e0308554. https://doi.org/10.1371/journal.pone.0308554 pmid:39576775
  13. Carta S, Mangiacotti AM, Valdes AL, Reilly RB, Franco F, Di Liberto GM (2023) The impact of temporal synchronisation imprecision on TRF analyses. J Neurosci Methods 385:109765. https://doi.org/10.1016/j.jneumeth.2022.109765
  14. Cherry EC (1953) Some experiments on the recognition of speech, with one and with two ears. J Acoust Soc Am 25:975–979. https://doi.org/10.1121/1.1907229
  15. Chisolm TH, Johnson CE, Danhauer JL, Portz LJ, Abrams HB, Lesner S, McCarthy PA, Newman CW (2007) A systematic review of health-related quality of life and hearing aids: final report of the American Academy of Audiology task force on the health-related quality of life benefits of amplification in adults. J Am Acad Audiol 18:151–183. https://doi.org/10.3766/jaaa.18.2.7
  16. Cohen SM, Labadie RF, Dietrich MS, Haynes DS (2004) Quality of life in hearing-impaired adults: the role of cochlear implants and hearing aids. Otolaryngol Head Neck Surg 131:413–422. https://doi.org/10.1016/j.otohns.2004.03.026
  17. Combrisson E, Jerbi K (2015) Exceeding chance level by chance: the caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy. J Neurosci Methods 250:126–136. https://doi.org/10.1016/j.jneumeth.2015.01.010
  18. Commuri V, Kulasingham J, Simon J (2023) Cortical responses time-locked to continuous speech in the high-gamma band depend on selective attention. Front Neurosci 17:1–7. https://doi.org/10.3389/fnins.2023.1264453 pmid:38156264
  19. Cosh S, Helmer C, Delcourt C, Robins TG, Tully PJ (2019) Depression in elderly patients with hearing loss: current perspectives. Clin Interv Aging 14:1471–1480. https://doi.org/10.2147/CIA.S195824 pmid:31616138
  20. Crosse MJ, Di Liberto GM, Bednar A, Lalor EC (2016) The multivariate temporal response function (mTRF) toolbox: a MATLAB toolbox for relating neural signals to continuous stimuli. Front Hum Neurosci 10:604. https://doi.org/10.3389/fnhum.2016.00604 pmid:27965557
  21. David S, Mesgarani N, Shamma S (2007) Estimating sparse spectro-temporal receptive fields with natural stimuli. Network 18:191–212. https://doi.org/10.1080/09548980701609235
  22. Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N (2020) Cortical encoding of melodic expectations in human temporal cortex. eLife 9:e51784. https://doi.org/10.7554/eLife.51784 pmid:32122465
  23. Fiedler L, Ala TS, Graversen C, Alickovic E, Lunner T, Wendt D (2021) Hearing aid noise reduction lowers the sustained listening effort during continuous speech in noise—a combined pupillometry and EEG study. Ear Hear 42:1590–1601. https://doi.org/10.1097/AUD.0000000000001050
  24. Fiedler L, Wöstmann M, Herbst SK, Obleser J (2019) Late cortical tracking of ignored speech facilitates neural selectivity in acoustically challenging conditions. NeuroImage 186:33–42. https://doi.org/10.1016/j.neuroimage.2018.10.057
  25. Fuglsang SA, Dau T, Hjortkjær J (2017) Noise-robust cortical tracking of attended speech in real-world acoustic scenes. NeuroImage 156:435–444. https://doi.org/10.1016/j.neuroimage.2017.04.026
  26. Fuster JM (2015) Chapter 2: anatomy of the prefrontal cortex. In: The prefrontal cortex (Fuster JM, ed), Ed 5, pp 9–62. San Diego: Academic Press. ISBN: 978-0-12-407815-4.
  27. Gehrig J, et al. (2019) Low-frequency oscillations code speech during verbal working memory. J Neurosci 39:6498–6512. https://doi.org/10.1523/JNEUROSCI.0018-19.2019 pmid:31196933
  28. Golumbic EMZ, et al. (2013) Mechanisms underlying selective neuronal tracking of attended speech at a “cocktail party”. Neuron 77:980–991. https://doi.org/10.1016/j.neuron.2012.12.037 pmid:23473326
  29. Hauk O, Keil A, Elbert T, Müller M (2002) Comparison of data transformation procedures to enhance topographical accuracy in time-series analysis of the human EEG. J Neurosci Methods 113:111–122. https://doi.org/10.1016/S0165-0270(01)00484-8
  30. Haynes W (2013) Bonferroni correction. In: Encyclopedia of systems biology (Dubitzky W, et al., eds), p 154. New York, NY: Springer. ISBN: 978-1-4419-9863-7.
  31. James C, Hagan M, Jones R, Bones P, Carroll G (1997) Multireference adaptive noise canceling applied to the EEG. IEEE Trans Biomed Eng 44:775–779. https://doi.org/10.1109/10.605438
  32. Key A, Dove GO, Maguire M (2005) Linking brainwaves to the brain: an ERP primer. Dev Neuropsychol 27:183–215. https://doi.org/10.1207/s15326942dn2702_1
  33. Kulasingham JP, Simon JZ (2023) Algorithms for estimating time-locked neural response components in cortical processing of continuous speech. IEEE Trans Biomed Eng 70:88–96. https://doi.org/10.1109/TBME.2022.3185005 pmid:35727788
  34. Kurthen I, Galbier J, Jagoda L, Neuschwander P, Giroud N, Meyer M (2021) Selective attention modulates neural envelope tracking of informationally masked speech in healthy older adults. Hum Brain Mapp 42:3042–3057. https://doi.org/10.1002/hbm.25415 pmid:33783913
  35. Lacerda CF, de Tavares Canto RS, Cheik NC (2012) Effects of hearing aids in the balance, quality of life and fear to fall in elderly people with sensorineural hearing loss. Int Arch Otorhinolaryngol 14:156–162. https://doi.org/10.7162/S1809-97772012000200002 pmid:25991930
  36. Lavie N (1995) Perceptual load as a necessary condition for selective attention. J Exp Psychol Hum Percept Perform 21:451. https://doi.org/10.1037/0096-1523.21.3.451
  37. Lavie N, Tsal Y (1994) Perceptual load as a major determinant of the locus of selection in visual attention. Percept Psychophys 56:183–197. https://doi.org/10.3758/BF03213897
  38. Lawrence BJ, Jayakody DMP, Bennett RJ, Eikelboom RH, Gasson N, Friedland PL (2019) Hearing loss and depression in older adults: a systematic review and meta-analysis. Gerontologist 60:e137–e154. https://doi.org/10.1093/geront/gnz009
  39. Lightfoot G (2016) Summary of the N1-P2 cortical auditory evoked potential to estimate the auditory threshold in adults. Semin Hear 31:1–8. https://doi.org/10.1055/s-0035-1570334 pmid:27587918
  40. Lotfi Y, Mehrkian S, Moossavi A, Faghih-Zadeh S (2009) Quality of life improvement in hearing-impaired elderly people after wearing a hearing aid. Arch Iran Med 12:365–370.
  41. Lunner T, Alickovic E, Graversen C, Ng EHN, Wendt D, Keidser G (2020) Three new outcome measures that tap into cognitive processes required for real-life communication. Ear Hear 41:39S–47S. https://doi.org/10.1097/AUD.0000000000000941 pmid:33105258
  42. Maris E, Oostenveld R (2007) Nonparametric statistical testing of EEG- and MEG-data. J Neurosci Methods 164:177–190. https://doi.org/10.1016/j.jneumeth.2007.03.024
  43. Marrone N, Mason CR, Kidd G Jr (2008) Evaluating the benefit of hearing aids in solving the cocktail party problem. Trends Amplif 12:300–315. https://doi.org/10.1177/1084713808325880 pmid:19010794
  44. McDermott J (2009) The cocktail party problem. Curr Biol 19:R1024–R1027. https://doi.org/10.1016/j.cub.2009.09.005
  45. Meneses-Barriviera CL, Melo JJ, Marchiori LL (2013) Hearing loss in the elderly: history of occupational noise exposure. Int Arch Otorhinolaryngol 17:179–183. https://doi.org/10.7162/S1809-97772013000200010 pmid:25992010
  46. Mesgarani N, Chang EF (2012) Selective cortical representation of attended speaker in multi-talker speech perception. Nature 485:233–236. https://doi.org/10.1038/nature11020 pmid:22522927
  47. Mesik J, Wojtczak M (2023) The effects of data quantity on performance of temporal response function analyses of natural speech processing. Front Neurosci 16:1–27. https://doi.org/10.3389/fnins.2022.963629 pmid:36711133
  48. Meyer L, Grigutsch M, Schmuck N, Gaston P, Friederici AD (2015) Frontal–posterior theta oscillations reflect memory retrieval during sentence comprehension. Cortex 71:205–218. https://doi.org/10.1016/j.cortex.2015.06.027
  49. Moran A, Soriano M (2018) Improving the quality of a collective signal in a consumer EEG headset. PLoS One 13:e0197597. https://doi.org/10.1371/journal.pone.0197597 pmid:29795611
  50. Muncke J, Kuruvila I, Hoppe U (2022) Prediction of speech intelligibility by means of EEG responses to sentences in noise. Front Neurosci 16:876421. https://doi.org/10.3389/fnins.2022.876421 pmid:35720724
  51. Ng E, Rudner M, Lunner T, Rönnberg J (2015) Noise reduction improves memory for target language speech in competing native but not foreign language speech. Ear Hear 36:82–91. https://doi.org/10.1097/AUD.0000000000000080
  52. Oostenveld R, Fries P, Maris E, Schoffelen J-M (2011) FieldTrip: open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput Intell Neurosci 2011:156869. https://doi.org/10.1155/2011/156869 pmid:21253357
  53. Orf M, Wöstmann M, Hannemann R, Obleser J (2023) Target enhancement but not distractor suppression in auditory neural tracking during continuous speech. iScience 26:106849. https://doi.org/10.1016/j.isci.2023.106849 pmid:37305701
  54. O’Sullivan JA, Power AJ, Mesgarani N, Rajaram S, Foxe JJ, Shinn-Cunningham BG, Slaney M, Shamma SA, Lalor EC (2015) Attentional selection in a cocktail party environment can be decoded from single-trial EEG. Cereb Cortex 25:1697–1706. https://doi.org/10.1093/cercor/bht355
  55. FieldTrip (2024) Parametric and non-parametric statistics on event-related fields. https://www.fieldtriptoolbox.org [Online; accessed June 6, 2024].
  56. Petersen E, Wöstmann M, Obleser J, Lunner T (2016) Neural tracking of attended versus ignored speech is differentially affected by hearing loss. J Neurophysiol 117:18–27. https://doi.org/10.1152/jn.00527.2016 pmid:27707813
  57. Pichora-Fuller MK, et al. (2016) Hearing impairment and cognitive energy: the framework for understanding effortful listening (FUEL). Ear Hear 37:5S–27S. https://doi.org/10.1097/AUD.0000000000000312
  58. Rönnberg J, et al. (2013) The ease of language understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 7:31. https://doi.org/10.3389/fnsys.2013.00031 pmid:23874273
  59. Rönnberg J, Holmer E, Rudner M (2019) Cognitive hearing science and ease of language understanding. Int J Audiol 58:247–261. https://doi.org/10.1080/14992027.2018.1551631
  60. Rönnberg J, Signoret C, Andin J, Holmer E (2022) The cognitive hearing science perspective on perceiving, understanding, and remembering language: the ELU model. Front Psychol 13:1–17. https://doi.org/10.3389/fpsyg.2022.967260 pmid:36118435
  61. Rudner M, Lunner T (2014) Cognitive spare capacity and speech communication: a narrative overview. Biomed Res Int 2014:869726. https://doi.org/10.1155/2014/869726 pmid:24971355
  62. Shahsavari Baboukani P, Graversen C, Alickovic E, Østergaard J (2022) Speech to noise ratio improvement induces nonlinear parietal phase synchrony in hearing aid users. Front Neurosci 16:932959. https://doi.org/10.3389/fnins.2022.932959 pmid:36017182
  63. Signoret C, Andersen L, Dahlstrom O, Blomberg R, Lundqvist D, Rudner M, Rönnberg J (2020) The influence of form- and meaning-based predictions on cortical speech processing under challenging listening conditions: a MEG study. Front Neurosci 14:573254. https://doi.org/10.3389/fnins.2020.573254 pmid:33100961
  64. Smith M, Ferguson HJ (2024) Indistinguishable behavioural and neural correlates of perceptual self-other distinction in autistic and neurotypical adults. Cortex 176:242–259. https://doi.org/10.1016/j.cortex.2024.03.012
  65. Sörqvist P, Dahlstrom O, Karlsson T, Rönnberg J (2016) Concentration: the neural underpinnings of how cognitive load shields against distraction. Front Hum Neurosci 10:221. https://doi.org/10.3389/fnhum.2016.00221 pmid:27242485
  66. Sörqvist P, Stenfelt S, Rönnberg J (2012) Working memory capacity and visual–verbal cognitive load modulate auditory–sensory gating in the brainstem: toward a unified view of attention. J Cogn Neurosci 24:2147–2154. https://doi.org/10.1162/jocn_a_00275
  67. Subha DP, Joseph PK, Acharya UR, Lim CM (2010) EEG signal analysis: a survey. J Med Syst 34:195–212. https://doi.org/10.1007/s10916-008-9231-z
  68. Tsakiropoulou E, Konstantinidis I, Vital I, Konstantinidou S, Kotsani A (2007) Hearing aids: quality of life and socio-economic aspects. Hippokratia 11:183–186.
  69. Wendt D, Hietkamp RK, Lunner T (2017) Impact of noise and noise reduction on processing effort: a pupillometry study. Ear Hear 38:690–700. https://doi.org/10.1097/AUD.0000000000000454
  70. Wilroth J, Alickovic E, Skoglund MA, Enqvist M (2024) Nonlinearity detection and compensation for EEG-based speech tracking. In: ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 1811–1815. IEEE.

Synthesis

Reviewing Editor: Nicholas J. Priebe, The University of Texas at Austin

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: NONE.

SYNTHESIS

Both reviewers agree that the authors have improved the manuscript substantially, including positive changes to the manuscript structure, improved figure legends, and the inclusion of additional behavioral data. Three minor issues were raised in review.

Issue 1

The behavioral data raise the possibility that the neural data could be used to predict behavioral responses on a trial-by-trial basis. Is this possible? Such an analysis is not necessary in the present manuscript, but other readers are likely to raise the same question, so it may be worth mentioning in the discussion.

Issue 2

In the context of load theory, the manuscript argues that the stronger representation of the masker is due to reduced cognitive load, the assumption being that the brain has more capacity left to process the masker. While this interpretation is consistent with Lavie's load theory, which has predominantly been investigated in the visual domain, there might be additional considerations specific to the auditory domain. Extending the load argument further, would this imply that the more strongly the masker is represented, the better?

The problem with this conclusion is that, in attentional terms, one might expect that a weaker representation of the masker would be a better indicator of successful attention or suppression of the distractor. If you split the dataset into two groups based on the median strength of the masker representation under noise reduction conditions, would you expect better performance for stronger masker representations?

Perhaps clarification in the discussion would address this issue.

Issue 3

As you refer to the phonological character of the masker and target, it might be worthwhile to include and briefly discuss the recent paper by Carta et al., Cortical encoding of phonetic onsets of both attended and ignored speech in hearing impaired individuals (PLOS ONE, November 2024). This paper explores cortical encoding of phonetic onsets in both attended and ignored speech, which could provide additional context for your findings.

Keywords

  • EEG
  • hearing aids
  • noise
  • noise reduction algorithms
  • nonlinearity compensation
  • temporal response functions
