Research Article: New Research, Cognition and Behavior

θ-Band Cortical Tracking of the Speech Envelope Shows the Linear Phase Property

Jiajie Zou, Chuan Xu, Cheng Luo, Peiqing Jin, Jiaxin Gao, Jingqi Li, Jian Gao, Nai Ding and Benyan Luo
eNeuro 11 August 2021, 8 (4) ENEURO.0058-21.2021; https://doi.org/10.1523/ENEURO.0058-21.2021
Author affiliations:
1. Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou 310027, China (Jiajie Zou, Cheng Luo, Peiqing Jin, Jiaxin Gao, Nai Ding)
2. Research Center for Advanced Artificial Intelligence Theory, Zhejiang Lab, Hangzhou 311121, China (Jiajie Zou, Cheng Luo, Peiqing Jin, Nai Ding)
3. Department of Neurology, First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310003, China (Chuan Xu, Benyan Luo)
4. Department of Rehabilitation, Hangzhou Mingzhou Brain Rehabilitation Hospital, Hangzhou 311215, China (Jingqi Li, Jian Gao)

Abstract

When listening to speech, low-frequency cortical activity tracks the speech envelope. It remains controversial, however, whether such envelope-tracking neural activity reflects entrainment of neural oscillations or the superposition of transient responses evoked by sound features. Recently, it has been suggested that the phase of envelope-tracking activity can potentially distinguish entrained oscillations from evoked responses. Here, we analyze the phase of envelope-tracking activity in humans during passive listening and observe that the phase lag between cortical activity and the speech envelope tends to change linearly across frequency in the θ band (4–8 Hz), suggesting that θ-band envelope-tracking activity can be readily modeled by evoked responses.

  • neural entrainment
  • phase resetting
  • EEG

Significance Statement

During speech listening, cortical activity tracks the speech envelope, which is a critical cue for speech recognition. It is debated, however, which neural mechanism generates the envelope-tracking response. Previous work has shown that δ-band envelope-tracking responses recorded during music listening cannot be explained by a simple linear-system model. Here, however, we demonstrate that θ-band envelope-tracking responses recorded during speech listening show the linear phase property and can therefore be well explained by a linear-system model.

Introduction

The speech envelope, i.e., temporal modulations below 20 Hz, is critical for speech recognition (Drullman et al., 1994; Shannon et al., 1995; Shamma, 2001; Elliott and Theunissen, 2009; Ding et al., 2017), and large-scale cortical activity measured by MEG and EEG can track the speech envelope (Lalor et al., 2009; Ding and Simon, 2012b; Wang et al., 2012; Peelle et al., 2013; Doelling et al., 2014; Harding et al., 2019). Since the slow temporal modulations in speech are highly related to its ∼5-Hz syllabic rhythm, it has been hypothesized that θ-band neural synchronization to temporal modulations provides a plausible mechanism to segment continuous speech into the perceptual units of syllables (Giraud and Poeppel, 2012; Poeppel and Assaneo, 2020).

Although low-frequency neural synchronization to slow temporal modulations has been extensively studied and is hypothesized to play a critical role in auditory perception, there is considerable debate about how it is generated (Ding and Simon, 2013; Doelling et al., 2014; Ding et al., 2016b; Haegens and Zion Golumbic, 2018; Zoefel et al., 2018; Alexandrou et al., 2020). On the one hand, it has been hypothesized that the low-frequency neural response to speech is generated by resetting the phase of intrinsic neural oscillations (Kayser et al., 2008; Lakatos et al., 2008, 2009; Schroeder et al., 2008; Kayser, 2009). On the other hand, it has been hypothesized that it is a sequence of transient responses evoked by sound features in speech (Lalor et al., 2009; Ding and Simon, 2012a,b). Distinguishing these two hypotheses, however, has turned out to be extremely difficult. For example, early studies showed that the phase, but not the power, of θ-band cortical activity is synchronized to speech (Luo and Poeppel, 2007; Howard and Poeppel, 2010), which supports the phase resetting hypothesis. It has been argued, however, that the same phenomenon can be observed for evoked responses, attributable to the different statistical sensitivity of response phase and power (Shah et al., 2004; Yeung et al., 2004; Ding and Simon, 2013). Furthermore, later studies observed consistent power and phase changes in the θ band during speech listening (Howard and Poeppel, 2012).

The phase resetting hypothesis and the evoked response hypothesis motivate different computational models of the neural response to speech. Under the evoked response hypothesis, the speech response can be simulated by a linear time-invariant system, in which the phase lag between stimulus and response varies substantially across frequency. A recent study, however, shows that neural synchronization to music violates the phase lag property predicted by the evoked response model when listeners perform a pitch judgment task (Doelling et al., 2019). Instead, the response phase is more consistent with the prediction of a nonlinear oscillator model. This result suggests that cortical synchronization to music is potentially generated by nonlinear mechanisms more complicated than the superposition of evoked responses. Since the evoked response model and the nonlinear oscillator model in Doelling et al. (2019) are computationally explicit, here we focus on these two models and test which better describes the neural response to speech.

The study by Doelling et al. (2019) questions the validity of using evoked response models to analyze neural activity synchronized to sound rhythms, since such models fail to predict the neural response phase during music listening. It remains possible, however, that the neural encoding scheme depends on the properties of sound. For example, the nonlinear oscillator models may be more appropriate for music, which is highly rhythmic, while the evoked response models may be sufficient to model the response to less rhythmic sound such as speech. It is also possible that the neural encoding scheme depends on the modulation frequency and appears to be different for music, which contains strong temporal modulations below 2 Hz, and speech, which contains strong temporal modulations around 5 Hz (Ding et al., 2017). Finally, it is also possible that active listening engages phase resetting mechanisms more than passive listening. Therefore, the primary goal of the current study is to quantify the phase lag property of the cortical response to speech during passive listening, and test whether it is more consistent with the prediction of the evoked response model or the nonlinear oscillator model in Doelling et al. (2019).

Materials and Methods

Participants

This study involved 15 healthy individuals (five males; 54.6 ± 10.12 years), all right-handed with no history of neurologic disease. All participants provided written informed consent.

Stimuli

The natural speech stimulus consisted of two chapters from a novel, The Supernova Era by Cixin Liu (chapter 16, Fun country, and chapter 18, Sweet dream period). The story was narrated in Mandarin by a female speaker and digitized at 48 kHz. The speech was clear and highly intelligible. The two chapters were 34 and 25 min in duration, respectively, and their recordings were concatenated.

Procedures

All participants listened to the speech while their EEG was recorded. Speech was presented binaurally through headphones at a comfortable sound level. The experiment took place on two separate days, and the spoken narrative was presented once on each day. The 59-min speech stimulus was therefore presented twice, yielding almost 2 h of stimulation in total, longer than the stimulus duration in most studies; the purpose of the long stimulus was to estimate the response phase reliably. No other task was given, so the participants listened passively.

EEG recording and signal preprocessing

EEG signals were recorded using a 64-electrode BrainCap (Brain Products GmbH) arranged according to the international 10–20 system; one of the 64 electrodes was placed under the right eye to record the electrooculogram (EOG). EEG signals were referenced online to FCz and re-referenced offline to the common average (Poulsen et al., 2007). The signals were filtered online with a 50-Hz notch filter to remove line noise (12th-order zero-phase Butterworth filter), a low-pass antialiasing filter (70-Hz cutoff, 8th-order zero-phase Butterworth filter), and a high-pass filter to prevent slow drifts (0.3-Hz cutoff, 8th-order zero-phase Butterworth filter), and were sampled at 1 kHz. The EEG was processed following the procedure in Zou et al. (2019). All preprocessing and analyses in this study were conducted in MATLAB (The MathWorks).

EEG recordings were low-pass filtered below 50 Hz with a zero-phase antialiasing FIR filter (implemented using a 200-ms Kaiser window) and down-sampled to 100 Hz. EOG artifacts were regressed out using the least-squares method. As in previous studies (Ding and Simon, 2012a,b), the speech response was averaged over the two presentations on the two recording days to increase the signal-to-noise ratio.
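For illustration, the EOG regression step can be written as a short least-squares sketch. This is a minimal NumPy version, not the authors' code (they worked in MATLAB); the array shapes and single-EOG-channel setup are our assumptions:

    import numpy as np

    def regress_out_eog(eeg, eog):
        # eeg: (n_channels, n_samples); eog: (n_samples,)
        X = np.column_stack([eog, np.ones_like(eog)])     # EOG regressor plus intercept
        beta, *_ = np.linalg.lstsq(X, eeg.T, rcond=None)  # least-squares fit per channel
        return eeg - (X @ beta).T                         # subtract the EOG-predicted part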

The envelopes of the stimuli reflected how sound intensity fluctuated over time and were extracted by applying full-wave rectification to the stimulus waveform. Matching the preprocessing of the EEG recordings, the envelopes were then low-pass filtered below 50 Hz with a zero-phase antialiasing FIR filter (implemented using a 200-ms Kaiser window) and down-sampled to 100 Hz.
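As a sketch, the envelope pipeline could look as follows in Python/SciPy; the Kaiser β and exact tap count are illustrative assumptions, since the paper specifies only a 200-ms Kaiser window and a 50-Hz cutoff:

    import numpy as np
    from scipy.signal import firwin, filtfilt

    def extract_envelope(x, fs_in=48000, fs_out=100, cutoff=50.0):
        env = np.abs(x)                                    # full-wave rectification
        taps = firwin(int(0.2 * fs_in) + 1, cutoff,        # 200-ms FIR window
                      window=('kaiser', 8.6), fs=fs_in)
        env = filtfilt(taps, [1.0], env)                   # zero-phase low-pass at 50 Hz
        return env[::fs_in // fs_out]                      # down-sample to 100 Hz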

Phase coherence analysis

To characterize the stimulus-response phase lag, the stimulus and response were both transformed into the frequency domain. Specifically, the acoustic envelope and the EEG response were segmented into non-overlapping 2-s time bins, and all segments were converted into the frequency domain using the fast Fourier transform (FFT). The response phase and stimulus phase were denoted as αft and βft, respectively, for frequency bin f and time bin t, and the stimulus-response phase lag was calculated as θft = αft − βft. The coherence of the phase lag across time bins, also known as the cerebro-acoustic phase coherence (Peelle et al., 2013), was calculated using the following equation:

C(f) = √[(∑_{t=1}^{T} cos θft)² + (∑_{t=1}^{T} sin θft)²] / T,

where C(f) is the phase coherence in frequency bin f and T is the total number of time bins. The phase coherence was calculated independently for each electrode and then averaged using the arithmetic mean. The phase coherence lies in the range of 0–1, and higher phase coherence indicates that the response phase is more precisely synchronized to the stimulus phase.
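The same computation in a compact NumPy sketch (bin length and sampling rate follow the paper; the function and variable names are ours). Note that the magnitude of the mean unit phasor equals the expression above:

    import numpy as np

    def phase_coherence(env, eeg, fs=100, win_s=2.0):
        # cerebro-acoustic phase coherence for one electrode
        n = int(fs * win_s)                       # 2-s bins -> 0.5-Hz resolution
        T = min(len(env), len(eeg)) // n
        alpha = np.angle(np.fft.rfft(eeg[:T*n].reshape(T, n), axis=1))  # response phase
        beta = np.angle(np.fft.rfft(env[:T*n].reshape(T, n), axis=1))   # stimulus phase
        C = np.abs(np.exp(1j * (alpha - beta)).mean(axis=0))  # resultant length over bins
        return np.fft.rfftfreq(n, 1.0 / fs), C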

In the response topography analysis, we considered a signed phase coherence, choosing electrode Fz as the reference. For each electrode, if the phase difference between that electrode and Fz was larger than 90°, the phase coherence was negated; otherwise it was kept positive. The signed phase coherence thus illustrates the phase relationship between electrodes on top of the phase coherence itself. Since the phase coherence was strongest over centro-frontal electrodes, 14 centro-frontal electrodes, i.e., Fz, F1, F2, F3, F4, FC1, FC2, FC3, FC4, Cz, C1, C2, C3, and C4, were used to characterize the phase-frequency relationship.
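A short sketch of this sign convention (assumed implementation; testing |Δφ| > 90° is equivalent to testing cos Δφ < 0):

    import numpy as np

    # mean_vec: complex mean of exp(1j*theta) across time bins,
    # one row per electrode and one column per frequency bin
    phase = np.angle(mean_vec)                    # mean phase lag per electrode
    dphi = phase - phase[fz_index]                # phase difference from electrode Fz
    signed_C = np.where(np.cos(dphi) < 0, -1.0, 1.0) * np.abs(mean_vec)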

Phase-frequency relationship

The stimulus-response phase lag at frequency f, i.e., θ(f), was calculated by averaging θft over electrodes and all 2-s time bins using the circular average (Fisher, 1993). The group delay is defined by the first-order derivative of the stimulus-response phase lag over frequency, i.e., d(f) = (θ(f) − θ(f + Δf))/(2πΔf), and reflects how quickly a change in the stimulus is reflected in the response (Oppenheim et al., 1997). To calculate the group delay, we unwrapped the phase lag, fitted the unwrapped phase lag with a straight line, and divided the slope of the line by 2π.

To evaluate the linearity of the phase-frequency curve, we defined a linearity measure as L(f) = 1/|θ(f) + θ(f + 2Δf) − 2θ(f + Δf)|. This measure is the reciprocal of the absolute value of the second-order difference of the phase-frequency curve, and values from different electrodes were pooled by averaging. If the phase lag changed linearly with frequency, the second-order difference was 0 and the linearity measure was positive infinity; in general, a large L(f) indicated an approximately linear phase-frequency curve.

Since the phase-frequency curve was approximately linear in the θ band, we fitted the actual phase-frequency curve in this frequency range with a linear function, θL(f) = kf + b, 4 ≤ f < 8. The slope parameter k and the intercept parameter b were fitted using the least-squares method, and the mean group delay between 4 and 8 Hz was obtained from the slope k (divided by 2π, as above).
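A minimal sketch of the group-delay and linearity computations (assumed NumPy implementation; the sign convention assumes the phase lag decreases with frequency, as in Fig. 2B):

    import numpy as np

    # theta: circular-mean phase lag per frequency bin; freqs: bin centers in Hz
    theta_u = np.unwrap(theta)                             # remove 2*pi jumps
    L = 1.0 / np.abs(theta_u[:-2] + theta_u[2:] - 2 * theta_u[1:-1])  # linearity measure
    band = (freqs >= 4) & (freqs < 8)
    k, b = np.polyfit(freqs[band], theta_u[band], 1)       # straight-line fit in theta band
    group_delay = -k / (2 * np.pi)                         # mean group delay in seconds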

Statistics

Phase coherence

To evaluate whether the phase coherence at a given frequency was significantly higher than chance, we estimated the chance-level phase coherence with a permutation strategy (Peelle et al., 2013; Harding et al., 2019). After the speech envelope and the EEG response were segmented into 2-s time bins, we shuffled the time bins of the speech envelope so that envelope and response were randomly paired, and calculated the phase coherence of the phase lag between the EEG response and the randomly paired speech envelope. This procedure was repeated 5000 times, creating 5000 chance-level phase coherence values. The phase coherence was averaged over electrodes and participants, for both the actual value and the 5000 chance-level values. The significance level of the phase coherence at a frequency was (N + 1)/5001 if the actual value was exceeded by N of the 5000 chance-level coherence values at that frequency (one-sided comparison).
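A sketch of this permutation test, assuming a helper phase_coherence_segments that returns a coherence spectrum from pre-segmented data (the helper name and array shapes are hypothetical):

    import numpy as np

    # env_segs, eeg_segs: arrays of shape (T, n) holding the 2-s bins
    rng = np.random.default_rng(0)
    obs = phase_coherence_segments(env_segs, eeg_segs)   # actual coherence per frequency
    null = np.stack([phase_coherence_segments(env_segs[rng.permutation(len(env_segs))],
                                              eeg_segs)
                     for _ in range(5000)])              # 5000 chance-level spectra
    N = (null >= obs).sum(axis=0)                        # chance values exceeding the data
    p = (N + 1) / 5001                                   # one-sided significance level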

Linearity

The chance-level linearity measure of the phase-frequency curve was estimated using a similar procedure. The linearity measure was significantly larger than chance, with significance level (N + 1)/5001, if it was exceeded by N of the 5000 chance-level values (one-sided comparison).

For comparisons of the linearity measure across frequency bands, statistical tests were performed using the bias-corrected and accelerated bootstrap (Efron and Tibshirani, 1994). In the bootstrap procedure, the differences between two frequency bands were resampled with replacement 5000 times, and each resample was averaged across participants, yielding 5000 mean values. If N of the 5000 mean values were greater (or smaller) than 0, the significance level was 2(N + 1)/5001 (two-sided comparison).
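For illustration, a plain resampling sketch of this step (the authors used the bias-corrected and accelerated variant, which additionally adjusts the bootstrap percentiles; that correction is omitted here):

    import numpy as np

    def bootstrap_p(diffs, n_boot=5000, seed=0):
        # two-sided bootstrap p-value for the mean of per-participant differences
        rng = np.random.default_rng(seed)
        means = np.array([rng.choice(diffs, size=len(diffs), replace=True).mean()
                          for _ in range(n_boot)])
        N = min((means > 0).sum(), (means < 0).sum())    # count in the smaller tail
        return 2 * (N + 1) / (n_boot + 1)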

Correlation between phase coherence and phase linearity

For the significance test of the correlation between phase coherence and phase linearity, we used a two-tailed Student’s t test. When multiple comparisons were performed, p values were adjusted using the false discovery rate (FDR) correction (Benjamini and Hochberg, 1995).

Model simulation

We simulated the neural response to speech using two models. One was the evoked response model, in which the simulated neural response was simply the speech envelope convolved with the response evoked by a unit change in the speech envelope. This model was formulated as

r(t) = ∫_0^T h(τ) A(t − τ) dτ,

where r(t) and A(t) were the simulated neural response and the speech envelope, respectively, and h(t) described the neural response evoked by a unit change in the stimulus. In the illustration in Figure 1A, h(t) was a unit impulse with a 150-ms latency, i.e., h(t) = δ(t − 150 ms), in which case r(t) = A(t − 150 ms). The simulation results would not be affected as long as h(t) had a symmetric waveform centered at 150 ms.
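In the impulse-response case the model reduces to a pure delay; a short sketch at the 100-Hz analysis rate used in the paper (the placeholder envelope A is ours):

    import numpy as np

    fs = 100
    A = np.random.default_rng(0).random(59 * 60 * fs)  # placeholder 59-min envelope
    h = np.zeros(int(0.150 * fs) + 1)
    h[-1] = 1.0                                        # h(t) = delta(t - 150 ms)
    r = np.convolve(A, h)[:len(A)]                     # r(t) = A(t - 150 ms)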

Figure 1.

Simulated phase-frequency curve. The curve shows the stimulus-response phase lag as a function of response frequency. Panels A, B separately show the results simulated based on the evoked response model and the nonlinear oscillator model proposed in Doelling et al. (2019). These two models are based on the evoked response hypothesis and the phase resetting hypothesis, respectively. In the current evoked response model, the response evoked by a unit change in stimulus is a delayed impulse and the function of the model is to delay the stimulus by 150 ms. Such a model predicts that the stimulus-response phase lag changes linearly across frequency. Consequently, the phase-frequency curve appears to have a sawtooth shape. The nonlinear oscillator model, in contrast, predicts that the stimulus-response phase lag only changes in a very limited range across frequency.

The other model was the nonlinear oscillator model proposed by Doelling et al. (2019), formulated as

τ dE(t)/dt = −E(t) + S(ρE + cE(t) − aI(t) + κA(t)),
τ dI(t)/dt = −I(t) + S(ρI + bE(t) − dI(t)),

where A(t) was the speech envelope, and E(t) and I(t) simulated the responses of an excitatory and an inhibitory neural population, respectively. The output of this model was the difference between the excitatory and inhibitory populations, i.e., E(t) − I(t), and S denoted the sigmoid function. All parameters were the same as in Doelling et al. (2019), i.e., a = b = c = 10, d = −2, ρE = 2.3, ρI = −3.2, κ = 1.5, and τ = 66 ms; interpretations of the parameters can be found in Doelling et al. (2019). The model was simulated using the ode3 solver in MATLAB Simulink (The MathWorks) with a 1-ms time step.
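A sketch of the same dynamics with a simple forward-Euler step (the authors used the fixed-step ode3 solver in Simulink; the logistic form of S is our assumption):

    import numpy as np

    def oscillator_response(A, dt=1e-3, tau=0.066,
                            a=10.0, b=10.0, c=10.0, d=-2.0,
                            rho_e=2.3, rho_i=-3.2, kappa=1.5):
        # simulate E(t) - I(t) driven by the speech envelope A, sampled at 1/dt Hz
        S = lambda x: 1.0 / (1.0 + np.exp(-x))   # sigmoid (assumed logistic)
        E = np.zeros(len(A))
        I = np.zeros(len(A))
        for t in range(len(A) - 1):
            E[t+1] = E[t] + dt / tau * (-E[t] + S(rho_e + c*E[t] - a*I[t] + kappa*A[t]))
            I[t+1] = I[t] + dt / tau * (-I[t] + S(rho_i + b*E[t] - d*I[t]))
        return E - I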

In the simulations, the input was the envelope of the entire 59-min speech stimulus, and the input-output phase lag was calculated in the same way as in the EEG analysis.

Results

We first analyzed in which frequency bands and at which EEG electrodes reliable cortical synchronization to speech was observed. The coherence of the stimulus-response phase lag was calculated separately in each frequency bin. Significant phase coherence was most reliably observed below 9 Hz, and the topography of the low-frequency (<9 Hz) response showed a centro-frontal distribution (Fig. 2A, upper right corner).

Figure 2.

Phase analysis of the envelope-tracking response. A, The phase coherence spectrum shows how precisely the response phase is locked to the stimulus phase. The dashed black line shows the 99% confidence interval of the chance-level phase coherence. The pink line on top denotes the frequency bins in which phase coherence is significantly higher than chance (p < 0.01, permutation test, FDR corrected). The topography shows the signed phase coherence averaged between 0.5 and 9 Hz. The dark dots denote the 14 centro-frontal electrodes selected for further phase analysis. B, The phase-frequency curve. The phase lag appears to decrease linearly over frequency in the band where the phase coherence is higher than chance. The black dotted lines are fitted based on the phase lag in the θ band. Each red dot denotes a participant.

We next analyzed how the stimulus-response phase lag varied across frequency. The phase lag appeared to change linearly over frequency in the range where the phase coherence was higher than chance (Fig. 2B). We then evaluated the linearity of the phase-frequency curve (see Materials and Methods). As shown in Figure 3A, the linearity measure was significantly higher than chance in the low-frequency bands (p < 0.01, permutation test, FDR corrected) and peaked in the θ band. We compared the average phase linearity across the δ (1–4 Hz), θ, α (8–13 Hz), β (13–30 Hz), and γ (30–45 Hz) bands. As shown in Figure 3B, phase linearity was significantly higher in the θ band than in the other bands (θ vs δ: p = 0.043; θ vs α, β, or γ: p = 4 × 10−4, bootstrap, FDR corrected). To estimate the group delay in the θ band, we fitted the linear trend with a straight line, shown by the dotted line in Figure 2B. The mean group delay in the θ band, derived from the slope of this linear fit, was 156 ± 50 ms.

Figure 3.

Linearity of the phase-frequency curve. A, Phase linearity as a function of frequency. The dashed black line shows the 99% confidence interval of the chance-level phase linearity. The pink lines on top denote the frequency bins in which the phase linearity is significantly higher than chance (p < 0.01, permutation test, FDR corrected). Each red dot denotes a participant. B, Comparison of phase linearity across frequency bands. Error bars represent 1 standard error of the mean across participants. Significant differences between frequency bands are indicated by stars; *p < 0.05, ***p < 0.001 (bootstrap, FDR corrected). C, The relationship between phase coherence and phase linearity, both averaged within the θ band. Each red marker denotes a participant. Participants with higher phase coherence generally show better linearity (R = 0.805, p = 3 × 10−4, two-tailed Student’s t test).

Additionally, we investigated whether the phase-frequency curve tended to be more linear for participants who showed higher phase coherence. Since the θ band showed the highest, statistically significant phase linearity (Fig. 3A,B), we compared the average phase linearity and the average phase coherence in the θ band (Fig. 3C). The two measures were significantly correlated across individuals (R = 0.805, p = 3 × 10−4, two-tailed Student’s t test).

Discussion

The current study investigates the phase property of the EEG response to speech during passive listening. It shows that the stimulus-response phase lag is approximately a linear function of frequency in the θ band. This linear phase property can be readily explained by the evoked response model and therefore does not require more sophisticated nonlinear oscillator models.

Based on systems theory (Oppenheim et al., 1997), if the stimulus-response phase lag changes linearly across frequency, the evoked response has a finite duration and a symmetric waveform centered at the group delay (see Fig. 1A for an example of a delay system). The current results suggest that the EEG response resembles a delayed version of the speech envelope, which supports the evoked response hypothesis. It is possible that cortical activity tracks the speech envelope or related features (Ding and Simon, 2014), and it is also possible that discrete acoustic landmarks extracted using nonlinear mechanisms drive the evoked responses (Doelling et al., 2014).

The group delay observed here is around 150 ms. Above 8 Hz, neural activity is not precisely synchronized to the speech envelope, and below 4 Hz the phase linearity is weaker, suggesting more complex generation mechanisms. A previous MEG study found a similar group delay for the response to amplitude-modulated tones: 131 and 147 ms in the left and right hemispheres, respectively, in the frequency range between 1.5 and 8 Hz (Wang et al., 2012). The 150- to 200-ms group delay also corresponds to the latency of the N1 and P2 responses in the temporal response function derived from the envelope-tracking response (Aiken and Picton, 2008; Lalor et al., 2009; Ding and Simon, 2012a; Horton et al., 2013).

The current study finds that the stimulus-response phase lag changes approximately linearly across frequency (Figs. 2B, 3A), and participants with higher stimulus-response phase coherence generally show better phase linearity (Fig. 3C). A previous MEG study, however, has shown that the stimulus-response phase lag cannot be explained by simple evoked responses (Doelling et al., 2019) and is more consistent with the prediction of a nonlinear oscillator model (illustrated in Fig. 1B). These two studies, however, focus on neural responses in different frequency bands and during different tasks. The study by Doelling et al. (2019) analyzes cortical activity synchronized to auditory rhythms at 0.5, 0.7, 1, 1.5, 5, and 8 Hz. Four of the six frequencies considered in that study are below the θ band, and the current study also finds that the stimulus-response relationship is complicated below the θ band. Therefore, the results of the two studies do not conflict but reveal different neural mechanisms in different frequency ranges.

During speech processing, the neural response below the θ band can encode higher-level linguistic structures, e.g., phrases and sentences, on top of slow acoustic modulations, even if these linguistic structures are mentally constructed based on syntactic rules instead of prosodic information (Ding et al., 2016a). These results suggest that multiple factors could drive very low-frequency neural synchronization to speech. The analysis in this study characterizes neural synchronization to the speech envelope and cannot capture purely syntax-driven response components. In other words, the neural response shown here is the response to acoustic modulations in speech, not the response to linguistic structures. The slow acoustic modulations below the θ band, however, could serve as a prosodic cue for the mental construction of phrasal-level linguistic structures (Frazier et al., 2006). It is possible that distinct mechanisms are employed to encode syllabic-level and higher-level speech information: a roughly linear code is employed to encode syllabic-level speech features, while more complex neural mechanisms are employed to encode prosodic features, allowing for frequent interactions with the syntactic and semantic processing systems.

In sum, by analyzing the stimulus-response phase lag, we show that the speech response in the θ band was approximately a delayed version of the speech envelope in the same frequency range. A time-delay system can be readily implemented as a linear time-invariant system, which is consistent with the evoked response hypothesis. Future studies, however, are needed to examine whether the response phase property is modulated by attention and whether similar results can be obtained when listening to other sounds.

Acknowledgments

We thank Yuhan Lu for thoughtful comments on a previous version of this manuscript.

Footnotes

  • The authors declare no competing financial interests.

  • This work was supported by the National Natural Science Foundation of China Grant 31771248 (to N.D.), the National Natural Science Foundation of China Grant 81870817 (to B.L.), the National Key R&D Program of China Grant 2019YFC0118200 (to N.D.), the Major Scientific Research Project of Zhejiang Lab Grant 2019KB0AC02 (to N.D.), the Zhejiang Provincial Natural Science Foundation of China Grant LY20C090008 (to P.J.), and the Ministry of Education Frontier Science Center for Brain Science and Brain-Machine Integration, Zhejiang University.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Aiken SJ, Picton TW (2008) Human cortical responses to the speech envelope. Ear Hear 29:139–157.
  2. Alexandrou AM, Saarinen T, Kujala J, Salmelin R (2020) Cortical entrainment: what we can learn from studying naturalistic speech perception. Lang Cogn Neurosci 35:681–693.
  3. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B 57:289–300.
  4. Ding N, Simon JZ (2012a) Emergence of neural encoding of auditory objects while listening to competing speakers. Proc Natl Acad Sci USA 109:11854–11859.
  5. Ding N, Simon JZ (2012b) Neural coding of continuous speech in auditory cortex during monaural and dichotic listening. J Neurophysiol 107:78–89.
  6. Ding N, Simon JZ (2013) Power and phase properties of oscillatory neural responses in the presence of background activity. J Comput Neurosci 34:337–343.
  7. Ding N, Simon JZ (2014) Cortical entrainment to continuous speech: functional roles and interpretations. Front Hum Neurosci 8:311.
  8. Ding N, Melloni L, Zhang H, Tian X, Poeppel D (2016a) Cortical tracking of hierarchical linguistic structures in connected speech. Nat Neurosci 19:158–164.
  9. Ding N, Simon JZ, Shamma SA, David SV (2016b) Encoding of natural sounds by variance of the cortical local field potential. J Neurophysiol 115:2389–2398.
  10. Ding N, Patel AD, Chen L, Butler H, Luo C, Poeppel D (2017) Temporal modulations in speech and music. Neurosci Biobehav Rev 81:181–187.
  11. Doelling KB, Arnal LH, Ghitza O, Poeppel D (2014) Acoustic landmarks drive delta-theta oscillations to enable speech comprehension by facilitating perceptual parsing. Neuroimage 85:761–768.
  12. Doelling KB, Assaneo MF, Bevilacqua D, Pesaran B, Poeppel D (2019) An oscillator model better predicts cortical entrainment to music. Proc Natl Acad Sci USA 116:10113–10121.
  13. Drullman R, Festen JM, Plomp R (1994) Effect of temporal envelope smearing on speech reception. J Acoust Soc Am 95:1053–1064.
  14. Efron B, Tibshirani RJ (1994) An introduction to the bootstrap. London: Chapman and Hall/CRC.
  15. Elliott TM, Theunissen FE (2009) The modulation transfer function for speech intelligibility. PLoS Comput Biol 5:e1000302.
  16. Fisher NI (1993) Statistical analysis of circular data. Cambridge: Cambridge University Press.
  17. Frazier L, Carlson K, Clifton C Jr (2006) Prosodic phrasing is central to language comprehension. Trends Cogn Sci 10:244–249.
  18. Giraud AL, Poeppel D (2012) Speech perception from a neurophysiological perspective. In: The human auditory cortex. New York: Springer.
  19. Haegens S, Zion Golumbic E (2018) Rhythmic facilitation of sensory processing: a critical review. Neurosci Biobehav Rev 86:150–165.
  20. Harding EE, Sammler D, Henry MJ, Large EW, Kotz SA (2019) Cortical tracking of rhythm in music and speech. Neuroimage 185:96–101.
  21. Horton C, D’Zmura M, Srinivasan R (2013) Suppression of competing speech through entrainment of cortical oscillations. J Neurophysiol 109:3082–3093.
  22. Howard MF, Poeppel D (2010) Discrimination of speech stimuli based on neuronal response phase patterns depends on acoustics but not comprehension. J Neurophysiol 104:2500–2511.
  23. Howard MF, Poeppel D (2012) The neuromagnetic response to spoken sentences: co-modulation of theta band amplitude and phase. Neuroimage 60:2118–2127.
  24. Kayser C (2009) Phase resetting as a mechanism for supramodal attentional control. Neuron 64:300–302.
  25. Kayser C, Petkov CI, Logothetis NK (2008) Visual modulation of neurons in auditory cortex. Cereb Cortex 18:1560–1574.
  26. Lakatos P, Karmos G, Mehta AD, Ulbert I, Schroeder CE (2008) Entrainment of neuronal oscillations as a mechanism of attentional selection. Science 320:110–113.
  27. Lakatos P, O’Connell MN, Barczak A, Mills A, Javitt DC, Schroeder CE (2009) The leading sense: supramodal control of neurophysiological context by attention. Neuron 64:419–430.
  28. Lalor EC, Power AJ, Reilly RB, Foxe JJ (2009) Resolving precise temporal processing properties of the auditory system using continuous stimuli. J Neurophysiol 102:349–359.
  29. Luo H, Poeppel D (2007) Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron 54:1001–1010.
  30. Oppenheim AV, Willsky AS, Nawab SH (1997) Signals and systems. Hoboken: Prentice Hall.
  31. Peelle JE, Gross J, Davis MH (2013) Phase-locked responses to speech in human auditory cortex are enhanced during comprehension. Cereb Cortex 23:1378–1387.
  32. Poeppel D, Assaneo MF (2020) Speech rhythms and their neural foundations. Nat Rev Neurosci 21:322–334.
  33. Poulsen C, Picton TW, Paus T (2007) Age-related changes in transient and oscillatory brain responses to auditory stimulation in healthy adults 19-45 years old. Cereb Cortex 17:1454–1467.
  34. Schroeder CE, Lakatos P, Kajikawa Y, Partan S, Puce A (2008) Neuronal oscillations and visual amplification of speech. Trends Cogn Sci 12:106–113.
  35. Shah AS, Bressler SL, Knuth KH, Ding M, Mehta AD, Ulbert I, Schroeder CE (2004) Neural dynamics and the fundamental mechanisms of event-related brain potentials. Cereb Cortex 14:476–483.
  36. Shamma S (2001) On the role of space and time in auditory processing. Trends Cogn Sci 5:340–348.
  37. Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M (1995) Speech recognition with primarily temporal cues. Science 270:303–304.
  38. Wang Y, Ding N, Ahmar N, Xiang J, Poeppel D, Simon JZ (2012) Sensitivity to temporal modulation rate and spectral bandwidth in the human auditory system: MEG evidence. J Neurophysiol 107:2033–2041.
  39. Yeung N, Bogacz R, Holroyd CB, Cohen JD (2004) Detection of synchronized oscillations in the electroencephalogram: an evaluation of methods. Psychophysiology 41:822–832.
  40. Zoefel B, Ten Oever S, Sack AT (2018) The involvement of endogenous neural oscillations in the processing of rhythmic input: more than a regular repetition of evoked neural responses. Front Neurosci 12:95.
  41. Zou J, Feng J, Xu T, Jin P, Luo C, Zhang J, Pan X, Chen F, Zheng J, Ding N (2019) Auditory and language contributions to neural encoding of speech features in noisy environments. Neuroimage 192:66–75.

Synthesis

Reviewing Editor: Christine Portfors, Washington State University

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Bernhard Ross, Alexander Billig.

The reviewers agree that the manuscript provides an important contribution to understanding cortical tracking mechanisms for speech. The methods and analyses are appropriate, although some technical details could be better explained in the manuscript. The results are straightforward and well described. In general, improving the readability of the manuscript and the generalization of the results would be helpful.

Reviewer 1:

Several studies have reported an association between the envelope of speech sound and low-frequency brain responses recorded with EEG or MEG while participants listened to ongoing speech. The significance of such observations and the underlying neural mechanisms are a matter of current research. The authors investigated the phase relationship between the stimulus speech envelope and the EEG signal. Their hypothesis was that a linear system would be characterized by a constant time delay between sound envelope and EEG, equivalent to a linearly increasing phase difference with increasing frequency. In contrast, a non-linear oscillatory system would predict a constant phase difference over a certain frequency region. Participants in the study listened to a total of two hours of ongoing speech. The main result was significant phase locking between the speech envelope and the EEG signal in the 3-Hz to 8-Hz frequency range and a constant group delay within this frequency band. Based on these findings, the authors concluded that a linear system, rather than a system of non-linear oscillators, underlies the generation of brain rhythms following the speech envelope.

The research question is relevant and timely. The authors developed a clearly defined hypothesis and conducted an experiment that could potentially support it. The experimental work and the data analysis were performed according to the state of the art in this area of research. The manuscript is organized according to the requirements of the journal and is well written. The discussion is based on the authors’ findings.

The innovative highlight of the study is the analysis of very long EEG time series, which allowed the authors to obtain a high signal-to-noise ratio for the phase measures.

The study design, the description of the methods, and the report of the results are pretty straightforward. However, the manuscript contains redundancies and could be shortened substantially, which would improve readability and access to the manuscript.

There are a number of details, mostly of a technical nature, which should be addressed to improve the manuscript. The details are outlined in order of appearance in the manuscript:

Details:

Pg 3, 2nd paragraph: The authors introduce the linear time-invariant (LTI) system and a reference to the temporal response function (TRF) framework. The latter is not necessary. The work by Ding and Simon (2012) and Lalor et al. (2009) is about de-convolution techniques to estimate a TRF. However, this has not been applied in the current study. The authors should stay with the LTI system, especially because concepts like the group delay are defined within the LTI framework.

Pg 3: the references to Figure 1 and the related discussion should be moved from the introduction section to the results or discussion section.

Pg 6: The last paragraph of section 2.4 and section 2.5 could be combined to emphasize that both the EEG signal and the speech envelope signal were treated in the same way by low-pass filtering and down sampling.

Pg 6, sect. 2.6: remove the redundancy ‘the Discrete Fourier Transform (DFT) implemented using’

Pg 6, sect. 2.6: revise the sentence starting with ‘If’ to something like: The response and stimulus phases were denoted as αft and βft in the frequency bin f and time bin t, and the stimulus-response phase lag θft was calculated as αft − βft.

Pg 7: The definition of the phase coherence C(f) is not consistent with the reference to Peelle et al. (2013), who used a coherence measure bound to the interval [0, 1]. However, C(f) increases with the number of samples in time. You may want to divide the right side of the equation by N. You refer to this issue in the last paragraph of section 2.6. However, the definition of C(f) is still not consistent with the critical value for the Rayleigh test (approx. 3.0 for p = 0.05). Please see page 70, Eqn 4.18, in the book by Fisher (1993), cited in section 2.7.

Pg 7: A set of 14 fronto-central electrodes was selected ‘to characterize’ ... Please be more specific about how the signals from multiple electrodes were combined. Specifically, was the mean of the positive definite coherence measure calculated (which constitutes a bias), or was the mean calculated in the complex domain?

Pg 8: Section 2.8 is an example of highly redundant writing. E.g., ‘The phase coherence was always positive’ is not necessary because that is how C(f) was defined. The whole section could be reduced to a single sentence stating that the signed coherence was defined by multiplying C(f) with the sign of the cosine part. However, a topographic map of the signed coherence has not been shown in the results section. Thus, section 2.8 should be eliminated.

Pg 9, section 2.9.1 describes a nonparametric permutation test for the phase coherence. However, section 2.6 explains that the parametric Rayleigh test had been used. Thus, the nonparametric test is not necessary and should be removed.

Pg 9, section 2.10: The authors introduce the definition of the LTI system. Again, for better clarity, the authors should stay within the LTI framework and remove the references to TRF.

Pg 10: A main concern about modelling a non-linear oscillatory system is that it is not clear what is different and new compared to the cited study by Doelling et al. (2019) (besides that their study is about music).

Pg 11, 1st paragraph: grammar: ‘were averaged across all electrodes’. Was this an average across all electrodes or across the selected 14 fronto-central electrodes?

Pg 11, 1st paragraph: Remove redundancies: if the selection of 14 fronto-central electrodes was a planned analysis, describe this in the methods section; if it was data driven, describe it in the results section, but not both.

Pg 11 and Figure 2A: What is the meaning of the y-axis scale: Where is the significance threshold? For what alpha level?

Pg 11 and Figure 2: Figure 2B illustrates that the phase lag could not be measured reliably above about 17 Hz. Thus, the x-axis scale could be limited to 0–17 Hz (or less), which would improve the illustration of the main effects. There are no group statistics available. Commonly, these types of data show noticeable between-participant variability; some descriptive statistics could document such variability. Moreover, the graphs in 2A–2D should be completed with confidence intervals for the group means.

Pg 11, 3rd paragraph: Remove redundancies: the definition of the group delay does not need to be repeated here. Instead, provide statistical justification for the claim that the group delay was consistent across participants between 3.5 Hz and 8 Hz.

Pg 12, 2nd paragraph: The description of the LTI system is very simplistic and repeats the characteristic of a constant group delay that had already been introduced. Alternatively, the authors could discuss limitations of a strictly linear model. It could be that the evoked response is triggered by a time-discrete nonlinear process. For example, preprocessing in the auditory system could identify a series of discrete events (e.g., associated with envelope transitions), which in turn elicit a series of (partly overlapping) transient evoked responses.

Reviewer 2:

The authors analyze the relationship between phase and frequency in cortical envelope-tracking of natural speech. They look to distinguish between entrainment and transient response superposition as accounts for envelope-tracking. They find a relatively linear relationship between phase and frequency between 3.5 and 8 Hz, suggesting that evoked responses may account for tracking in this frequency range during passive listening to speech.

The method and analyses seem appropriate, and the manuscript is concise and generally clear. I believe the paper makes a useful contribution in advancing understanding of the circumstances under which different tracking mechanisms are dominant. I have a couple of questions about the strength and generality of the linearity result.

The authors make the reasonable definition of linearity as the reciprocal of the absolute value of the second derivative. Under this measure, a perfectly linear phase-frequency relationship gives an infinite value of linearity. The plot in 2B - which presumably reflects an average over the group of participants - indeed looks close to linear in the relevant range, and the statistical testing reveals more linearity than expected by chance. However, I question how reasonable it is to summarise this result as “a linear relationship”. 2C suggests the peak linearity is below 1.5, suggesting that the mean absolute second derivative is above 2/3, rather than zero. Would the authors have described the relationship as linear regardless of the value, so long as the relationship was more linear than expected by chance? Is a safer claim that there is “some linearity” in the relationship, and that evoked responses therefore make a non-zero contribution to tracking in this frequency range? Perhaps the authors can also directly compare linearity in this frequency range to that in the lower and higher frequency ranges, to argue that the relationship is significantly more linear in the theta range than in the sub- and super-theta ranges?

The clustering of the individual participant dots in the frequency range of 2D was striking and helpful to the reader. To further understand the strength of the linearity result, it could be helpful to see plots of the phase-frequency relationship in different individuals. How similar do they look to what is presumably the group plot in 2B?

Minor English comments

Pg 1 Sig statement should read “the delta-band envelope-tracking response” or “delta-band envelope tracking responses"

Pg 2 Intro “i.e. show” does not make sense

Pg 3 should read “computationally explicit"

Pg 3 should read “validity of using"

Pg 6 should read “The envelopes of the stimuli were"

Pg 11 should read “results in Fig. 2A were"

Pg 11 should read “Based on systems theory” (twice)

Pg 13 should read “are below 3.5 Hz"

Pg 13 should read “but reveal"

Pg 13 should read “The analysis in this study characterizes"

Fig 1 “Since the phase lag ... “ sentence doesn’t make sense on its own
