Research Article: Methods/New Tools, Novel Tools and Methods

Different Methods to Estimate the Phase of Neural Rhythms Agree But Only During Times of Low Uncertainty

Anirudh Wodeyar, François A. Marshall, Catherine J. Chu, Uri T. Eden and Mark A. Kramer
eNeuro 13 October 2023, 10 (11) ENEURO.0507-22.2023; https://doi.org/10.1523/ENEURO.0507-22.2023
Anirudh Wodeyar
1Department of Mathematics & Statistics, Boston University, Boston, MA 02215
François A. Marshall
1Department of Mathematics & Statistics, Boston University, Boston, MA 02215
Catherine J. Chu
2Department of Neurology, Massachusetts General Hospital, Boston, MA 02215
3Harvard Medical School, Boston, MA 02114
Uri T. Eden
1Department of Mathematics & Statistics, Boston University, Boston, MA 02215
4Center for Systems Neuroscience, Boston University, Boston, MA 02215
Mark A. Kramer
1Department of Mathematics & Statistics, Boston University, Boston, MA 02215
4Center for Systems Neuroscience, Boston University, Boston, MA 02215

Abstract

Rhythms are a common feature of brain activity. Across different types of rhythms, the phase has been proposed to have functional consequences, thus requiring its accurate specification from noisy data. Phase is conventionally specified using techniques that presume a frequency band-limited rhythm. However, in practice, observed brain rhythms are typically nonsinusoidal and amplitude modulated. How these features impact methods to estimate phase remains unclear. To address this, we consider three phase estimation methods, each with different underlying assumptions about the rhythm. We apply these methods to rhythms simulated with different generative mechanisms and demonstrate inconsistency in phase estimates across the different methods. We propose two improvements to the practice of phase estimation: (1) estimating confidence in the phase estimate, and (2) examining the consistency of phase estimates between two (or more) methods.

  • EEG
  • Hilbert transform
  • LFP
  • phase
  • rhythm
  • state space model

Significance Statement

Rhythms in the brain can coordinate the activity of individual neurons and communication within brain networks, making rhythms a target for therapeutic interventions. Brain rhythms manifest in diverse ways, appearing with sinusoidal and nonsinusoidal waveforms, multiple peak frequencies, and variable durations. Across this diversity of rhythms, an important feature to characterize is the rhythm’s phase. In this manuscript, we demonstrate the ambiguity inherent in estimating phase from neural data. We propose that estimating uncertainty in the phase and comparing multiple phase estimators improve phase estimation in practice.

Introduction

A rhythm in the local field potential (LFP), magneto/electroencephalogram (M/EEG), or intracranial EEG (iEEG) is a waveform repeating without overlap. A cycle of a rhythm is an expression of the waveform (Cole and Voytek, 2017). Once the waveform is defined the phase can be used as a local time index of a cycle relative to the waveform. The phase of rhythms may have a role to play in perception (Busch et al., 2009; VanRullen, 2016; Fiebelkorn et al., 2018; Helfrich et al., 2018; Herbst et al., 2022), communication across neural populations (Roberts et al., 2013; Fries, 2015; Wodeyar and Srinivasan, 2022), coordination within a neural population (Canolty and Knight, 2010) and coordination between local field potentials and neural spikes (Jarvis and Mitra, 2001; Ray et al., 2008; Lepage et al., 2011; Zanos et al., 2012; Pesaran et al., 2018). Phase has been of critical importance for therapeutic interventions as well (Widge and Miller, 2019), e.g., acoustic stimulation during sleep can induce slow waves (Ngo et al., 2013) and deep-brain stimulation can reduce Parkinsonian tremor (Cagnan et al., 2017).

In practice, phase is estimated with a Fourier analysis on tapered versions of the time series (Lepage et al., 2013), such as by using a Hilbert transform. In this approach, we assume that the time series consists of a stable rhythm accurately described as a harmonic (single frequency) or a narrowband waveform. A truly narrowband signal has spectral power concentrated in a tight frequency band. Any cycle will have an average rate of change of phase equal to the center frequency of the frequency band of maximum spectral power. This assumption determines the waveform and the phase. However, specifying phase for a cycle is difficult when the rhythm is inseparable from noise, the waveform has nonsinusoidal properties (Jones, 2016), or there is distortion of the waveform (such as from filtering a time series; Yael et al., 2018).

Recent evidence suggests that a narrowband model for neural rhythms may be inaccurate. For example, visual cortex γ band activity (30–90 Hz, cf. Buzsaki, 2006) is amplitude modulated and lacks narrowband periodicity (Burns et al., 2010; Xing et al., 2012; Spyropoulos et al., 2022). In this case, stochastically-driven damped harmonic oscillators may better model the γ activity. The discretized version of such a model is an autoregressive process of order two (abbreviated as AR(2); see Kramer, 2022, their Appendix A). Spyropoulos et al. (2022) show that an AR model fits the broadband power spectrum of the γ rhythm better than a narrowband model. AR models for both noninvasive (Franaszczuk and Blinowska, 1985) and invasive (Nozari et al., 2020) electroencephalogram recordings support a broadband representation for brain rhythms. In addition, several studies suggest that brain rhythms can consist of nonsinusoidal waveforms (Ouedraogo, 2016; Cole and Voytek, 2017; Bernardi et al., 2019), exhibit multiple nearby peak frequencies (Tort et al., 2010), and persist for short periods of time (Jones, 2016; van Ede et al., 2018). These observations suggest brain rhythms may be broadband, and in these cases narrowband models lead to phase estimates which do not capture what is intuitively sought by the practitioner.

Phase is relatively defined, and how we estimate phase depends on the interpretation sought by the practitioner. For any simulated time series without an explicit phase parameter, phase must be independently estimated, thus there is no “true phase.” In this manuscript, we consider how different methods define phase when each is applied to simulated signals generated from different stochastic processes, each time series lacking an explicit phase parameter. Through this analysis, we identify moments when phase estimates agree and disagree across methods and examine how each method performs in different simulation scenarios. We consider three distinct phase estimators: (1) the Hilbert transform of a narrowband filtered time series, (2) a state space model of rhythms, and (3) the Poincaré section. We apply each of these methods to three simulated time series: (1) a broadband rhythm in pink noise, (2) an AR(2) process, and (3) a Fitzhugh–Nagumo oscillator. Comparing the cross-method consistency in phase, we show how uncertainty metrics can be used to improve consistency; when any method shows confidence in the phase estimate, the different methods tend to agree. Finally, we illustrate example applications of these methods to estimate coherence and phase-amplitude coupling in in vivo data. We show how estimating uncertainty in the phase helps clarify the link between phase and other quantities of interest. Understanding the role of phase in neural dynamics requires us to be aware that phase is not unique and phase estimates may vary depending on the method employed.

Materials and Methods

Overview of phase estimation approaches

We apply three methods to estimate phase, each of which makes different assumptions about the rhythm. The first method we consider consists of bandpass filtering the data and then estimating the phase via the Hilbert transform. While a common method in neuroscience applications (Canolty and Knight, 2010; Zhong et al., 2017; Navas-Olive et al., 2020; Petersen and Buzsáki, 2020), this approach is sensitive to noise and susceptible to waveform distortion. The second method we consider is a state space model in which the state, representing the amplitude and phase, is modeled by stochastically-driven oscillators. The state space model makes stronger assumptions about the data but is more robust to noise when the model class optimally explains the periodicity in the data. The final method we consider is the application of the Poincaré section, which is the most general method considered but is also the most sensitive to noise and user-defined choices. A complication is that different models which fit a brain rhythm equivalently well can still give phase estimates tracking distinct features of the rhythm. In what follows, we use this ambiguity in specifying phase to explore approaches to validate analyses involving phase.

Finite impulse response filter and Hilbert transform (FIR-Hilbert) method

Fourier-transform based analysis, a wavelet-transform, or a Hilbert transform employ essentially the same procedure (Bruns, 2004) to estimate the phase but with different tapering functions. Critically, all these approaches demarcate a set of frequencies (bandwidth of the narrowband) as relevant to capture the neural signal of interest and a set of frequencies as being irrelevant (for example, filtering to select the α band, defined as 8–13 Hz). This demarcation is intended to separate the signal and noise without relying on a parameter to explicitly model the noise contribution (Donoghue et al., 2020). Given the essential equivalence of the windowed-Fourier, wavelet, and Hilbert transforms (Bruns, 2004), we focus here on the Hilbert transform, i.e., we apply a filter to the data time series to select a frequency band of interest, apply the Hilbert transform, and derive the instantaneous phase (Chavez et al., 2006).

For an interpretable instantaneous phase, where the phase captures the waveform periodicity as opposed to amplitude modulation, the Hilbert transform (or a windowed-Fourier transform or a wavelet-transform) should only be applied when the spectral support (i.e., band for which spectral mass is non-zero) of the amplitude time series does not overlap with the spectral support of the phase time series (Boashash, 1991; Chavez et al., 2006). This is a consequence of the Bedrosian theorem (Bedrosian, 1963; for discussion, see Boashash, 1991), which shows: H(A cos(ϕ)) = A H(cos(ϕ)), (1) where H is the Hilbert transform operator, A is the amplitude time series, and ϕ is the phase time series. The Bedrosian theorem holds when the effective spectral support of A, i.e., the frequencies with maximal spectral mass, is distinct from the effective spectral support of ϕ. Thus, any data time series with multiple rhythms or a rhythm with amplitude modulations may lead to phase estimates under the Hilbert transform that are difficult to interpret as the argument of a cosine function; see Chavez et al. (2006) for extended discussions of this theorem.

Another factor affecting the interpretability of the phase under the Hilbert transform is the parameter set used for the narrowband filter. A practitioner must decide the filter type, filter order, and window duration (for a discussion, see Widmann et al., 2015; for an analysis of different parameter choices, see Zrenner et al., 2020). The practitioner must also choose the filter’s bandwidth; any waveform whose spectral support is wider than the bandwidth of the filter will be altered in the filtered time series.

For all analyses here, unless otherwise stated, we apply a least-squares linear-phase FIR filter of order three times the longest cycle length expected (as in Cohen, 2008; Nadalin et al., 2019). After forward and backward filtering of the data (using the filtfilt function in MATLAB), we apply the Hilbert transform to the data to estimate the quadrature component of the signal. We compute the analytic signal using the hilbert function in MATLAB, which computes the quadrature component (the imaginary term) and combines it with the in-phase component (the real term). Finally, we estimate phase using the four-quadrant arctan function implemented by the angle function in MATLAB.
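The filter-then-Hilbert pipeline described above can be sketched in Python (the paper’s own code is MATLAB; here scipy’s window-design `firwin` stands in for the least-squares FIR design, and the function name and default band are illustrative):

```python
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert

def fir_hilbert_phase(x, fs, band=(4.0, 8.0), order=None):
    """Estimate instantaneous phase via a narrowband FIR filter and the
    Hilbert transform. `firwin` approximates the paper's least-squares
    linear-phase FIR design; filtering is forward-backward (zero-phase)."""
    if order is None:
        # order = 3x the longest expected cycle length, as in the text
        order = int(3 * fs / band[0])
    b = firwin(order + 1, band, pass_zero=False, fs=fs)
    x_filt = filtfilt(b, [1.0], x)   # zero-phase narrowband filtering
    analytic = hilbert(x_filt)       # in-phase + quadrature components
    return np.angle(analytic), np.abs(analytic)

# usage: recover the phase of a noisy 6 Hz sinusoid
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)
x = np.cos(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
phase, amp = fir_hilbert_phase(x, fs)
```

The estimated phase should advance at roughly the rhythm’s frequency: the slope of the unwrapped phase is close to 2π · 6 rad/s.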

We estimate the 99% confidence interval for the FIR-Hilbert phase estimator, assuming stationary noise and complex-valued normality for the additive noise to the analytic signal, using Lepage et al. (2013, Eq. 3.18): ϕ̂_t ∼ N(ϕ_t, Bσ²/(2|A_t|²)), (2) where A_t is the time-t element of the amplitude time series, B is the bandwidth of the filter divided by the sampling frequency, and σ² is the additive noise variance, which is estimated using the residual after removing the filtered data from the unfiltered data. This confidence interval is derived by operationalizing the filter and Hilbert transform as linear operators applied to the signal, and by assuming a simple “signal plus additive noise” model for the analytic signal (see Lepage et al., 2013, Eqs. 3.2–3.11), where the noise is complex-Gaussian. From the inferred confidence bounds, asymptotic expressions for the maximum likelihood estimate of phase can be derived, assuming that the amplitude of the signal is substantially larger than the SD of the noise (for further discussion, see Lepage et al., 2013).

We note that the confidence interval for the phase estimate depends on the filter bandwidth (B): the wider the bandwidth, the greater the variance (i.e., the lower the confidence) in the phase estimate (Eq. 2). Similarly, a filter whose center frequency is misaligned with the center frequency of the true signal will increase variance in the phase estimate by increasing the additive noise variance (σ², the inferred variance of the data after removing the filtered signal) and reducing the estimated amplitude time series (A_t, the amplitude time series of the filtered signal). In this case, increasing the filter bandwidth (B) may reduce variance in the phase estimate by reducing the additive noise variance (σ²) and increasing the estimated amplitude time series (A_t). Thus, characterizing the trade-off between the filter bandwidth (B), the additive noise variance (σ²), and the estimated amplitude time series (A_t) depends on the specific signal of interest.
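The variance expression in Eq. 2 can be turned directly into a confidence half-width; a minimal sketch (function name and the illustrative parameter values are not from the paper) makes the bandwidth dependence explicit:

```python
import numpy as np

def fir_hilbert_phase_ci(amp, noise_var, bandwidth, fs, z=2.576):
    """99% confidence half-width (radians) for the FIR-Hilbert phase,
    following Eq. 2: var(phase) = B * sigma^2 / (2 |A_t|^2), where
    B is the filter bandwidth expressed as a fraction of the sampling
    rate, and z = 2.576 is the two-sided 99% normal quantile."""
    B = bandwidth / fs
    phase_var = B * noise_var / (2.0 * amp**2)
    return z * np.sqrt(phase_var)

# a wider filter bandwidth yields a wider (less confident) interval
hw_narrow = fir_hilbert_phase_ci(amp=1.0, noise_var=1.0, bandwidth=4.0, fs=1000.0)
hw_wide = fir_hilbert_phase_ci(amp=1.0, noise_var=1.0, bandwidth=8.0, fs=1000.0)
```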

State space model

A state space modeling approach to specify phase (Matsuda and Komaki, 2017; Wodeyar et al., 2021; Soulat et al., 2022) expects the random time series to exhibit periodicity similar to that of a vector-valued AR(2) process. We assume that the analytic signal of a rhythm is a latent, unobserved process, that we can estimate from the observed data. We build this assumption into a state space structure, with the state equation (see Eq. 3 below) representing the latent analytic signal and the observation equation (see Eq. 5 below) representing the observed data. On receipt of each new sample of the observed data, we correct the estimate of the state using a Kalman filter, an optimal strategy when the state update equation is linear and Gaussian (see Schiff, 2012, Chapter 2).

To illustrate this approach, we consider the case of tracking a single rhythm whose central baseband frequency is 6 Hz. The state equation of the univariate analytic signal x for this case is: x_t = a O(ω) x_{t−1} + u_t, u_t ∼ N(0, Q), (3) where x_t is a real-valued 2-D signal with real [Re(x_t)] and imaginary [Imag(x_t)] parts. The state equation yields a prediction for the current state (x_t) by rotating realized values of the state (x_{t−1}) in the complex plane and adding the Gaussian innovation u_t. The rotation matrix O(ω) is given by: O(ω) = [cos(ω), −sin(ω); sin(ω), cos(ω)], ω = 2π (6 Hz)/Fs, (4) where Fs is the sampling frequency in Hertz. After rotation of each realized value of x_{t−1}, the resulting output is damped by a factor a ∈ [0, 1), and then a realization of the Gaussian innovation (u_t, with variance Q) is added. We estimate the observable at time t (y_t) as the real part of the state x_t in the presence of additive observation noise: y_t = [1, 0] x_t + ν_t, (5) where ν_t is the tth element of Gaussian IID noise [ν_t ∼ N(0, σ²)].
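The state and observation equations (Eqs. 3–5) can be simulated directly. The Python sketch below is an illustration only; the parameter values (a, Q, observation variance) are arbitrary choices, not values from the paper:

```python
import numpy as np

def simulate_ss_rhythm(n, fs=1000.0, f0=6.0, a=0.99, Q=1.0, obs_var=1.0, seed=0):
    """Simulate Eqs. 3-5: the latent 2-D state is a damped rotation at
    f0 plus Gaussian innovation; the observation is the real part of the
    state plus Gaussian observation noise. The latent phase is the angle
    of the latent analytic signal."""
    rng = np.random.default_rng(seed)
    w = 2 * np.pi * f0 / fs
    O = np.array([[np.cos(w), -np.sin(w)],
                  [np.sin(w),  np.cos(w)]])            # rotation, Eq. 4
    x = np.zeros((n, 2))
    for t in range(1, n):
        x[t] = a * O @ x[t - 1] + rng.normal(0, np.sqrt(Q), 2)  # Eq. 3
    y = x[:, 0] + rng.normal(0, np.sqrt(obs_var), n)            # Eq. 5
    phase = np.arctan2(x[:, 1], x[:, 0])
    return y, x, phase

y, x, phase = simulate_ss_rhythm(10000)
```

With a close to 1, the simulated state behaves like a stochastically-driven oscillator whose power spectrum peaks near 6 Hz.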

When applying this method to identify brain rhythms and track the associated analytic signal, we need to estimate Q, a, σ2 , ω (assumed 6 Hz in illustration above). Estimating these parameters can be done by applying the expectation-maximization algorithm (Shumway and Stoffer, 1982), and depends on good initial estimates for the parameters (for potential initialization strategies, see Matsuda and Komaki, 2017; Soulat et al., 2022). We use the model described in Equations 3–5 under a Kalman filter framework, after estimating parameters, to obtain the optimal state estimates for each observed data sample. Further, we perform a backward smoothing operation (Shumway and Stoffer, 1982; Soulat et al., 2022), i.e., sending a time-reversed version of the data through the filter. Finally, after smoothing, for each data sample we derive the posterior distribution for the state prediction that we use to define the credible interval over the phase (Morey et al., 2016). We sample 10,000 times from the posterior distribution for the state and use this to extract 99% credible intervals (for details, see Wodeyar et al., 2021).
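A minimal Kalman filter for this model can be sketched as follows, assuming the parameters are already known (the paper instead estimates them by expectation-maximization and adds backward smoothing and posterior sampling, all omitted here; names and parameter values are illustrative):

```python
import numpy as np

def kalman_phase(y, fs=1000.0, f0=6.0, a=0.99, Q=0.01, obs_var=0.25):
    """Filter the observations y under the model of Eqs. 3-5 and return
    the filtered phase estimate with the posterior state covariances."""
    w = 2 * np.pi * f0 / fs
    F = a * np.array([[np.cos(w), -np.sin(w)],
                      [np.sin(w),  np.cos(w)]])   # damped rotation
    Qm = Q * np.eye(2)
    H = np.array([[1.0, 0.0]])                    # observe the real part
    x, P = np.zeros(2), np.eye(2)
    phases, covs = [], []
    for yt in y:
        # predict with the state equation
        x = F @ x
        P = F @ P @ F.T + Qm
        # correct with the new observation
        S = (H @ P @ H.T)[0, 0] + obs_var
        K = (P @ H.T / S).ravel()
        x = x + K * (yt - x[0])
        P = P - np.outer(K, H @ P)
        phases.append(np.arctan2(x[1], x[0]))
        covs.append(P.copy())
    return np.array(phases), covs

# usage: track the phase of a noisy 6 Hz sinusoid
rng = np.random.default_rng(2)
t = np.arange(0, 10, 1 / 1000)
y = np.cos(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
ph_hat, _ = kalman_phase(y)
```

After an initial transient, the filtered phase tracks the phase of the underlying sinusoid up to a fixed offset.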

Poincaré section method

For any K-dimensional dynamical system and associated particle trajectory, we can define a Poincaré section as a (K−1)-dimensional plane which intersects the considered trajectory. We specify phase relative to the moment of plane crossing. For example, in a one-dimensional system (i.e., the activity varies along a single axis), negative-to-positive zero-crossings define a Poincaré section, where the origin is the ordinate through which we mark the particle trajectory. As described by Chavez et al. (2006) and others (Rosenblum et al., 1997; Kralemann et al., 2007), phase specified using a Poincaré section depends on the selection of observables (for example, the signal alone vs a time-delayed embedding; Takens, 1981), and on how the section is defined. In this manuscript, we consider K = 2 and select zero-crossings to define the phase. From one zero-crossing (i.e., from the voltage traveling from negative to positive values) to the next, 2π radians are uniformly covered by the evolution of the phase. We denote t_n and t_{n+1} as the times of the nth and (n+1)th zero-crossings of the time-evolving particle. During [t_n, t_{n+1}) the phase-space trajectory of the particle completes one full rotation. The phase ϕ is specified by a linear ramp on [t_n, t_{n+1}) (Rosenblum et al., 1997): ϕ = 2π (t − t_n)/(t_{n+1} − t_n), t_n ≤ t < t_{n+1}. (6)

For a pair of consecutive zero-crossing times, the Poincaré section assigns a phase of 0 at tn and 2π at the sample immediately before tn+1 . We note that, in the other two methods of phase specification, 0 and 2π are assigned to phase at the estimated amplitude maxima. To make the phase mapping isomorphic across methods, we unwrap the phase (unwrap function in MATLAB) assigned by the Poincaré section, add π/2, and wrap (wrapToPi in MATLAB) the phase to [−π,π]. This transformation shifts the phase by π/2 and remaps phase to [−π,π]. Unlike the other two methods considered here, the Poincaré section does not explicitly model a source of noise and thus lacks a measure of uncertainty in the phase estimate. Despite this and as we will see in Results, for nonsinusoidal rhythms, the Poincaré section serves as a useful comparison.
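A Python sketch of this procedure (the paper’s analysis uses MATLAB’s unwrap and wrapToPi for the shift-and-wrap step; the function name here is illustrative):

```python
import numpy as np

def poincare_phase(x):
    """Phase from negative-to-positive zero-crossings: between consecutive
    crossings the phase ramps linearly over 2*pi (Eq. 6), then is shifted
    by pi/2 and wrapped to [-pi, pi] to match the other methods' convention.
    Samples before the first or after the last crossing remain NaN."""
    cross = np.where((x[:-1] < 0) & (x[1:] >= 0))[0] + 1
    phase = np.full(x.size, np.nan)
    for t0, t1 in zip(cross[:-1], cross[1:]):
        phase[t0:t1] = 2 * np.pi * (np.arange(t0, t1) - t0) / (t1 - t0)
    return np.angle(np.exp(1j * (phase + np.pi / 2)))  # shift and wrap

# usage: for a pure cosine, the ramp matches the analytic phase up to a
# constant offset
fs = 1000
t = np.arange(0, 5, 1 / fs)
ph = poincare_phase(np.cos(2 * np.pi * 6 * t))
```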

Signal-to-noise ratio

We define the signal-to-noise ratio (SNR) as the extent to which the power within a limited band of frequencies around the central frequency exceeds the power at other frequencies. We control SNR in our simulations to exceed 2.5 for clear visual identifiability of a rhythm. To infer the SNR for an oscillation in the data with center frequency in F1, we apply: SNR = Power_F1/Power_F2, (7) where F1 is the frequency band of interest and F2 are the frequencies outside the frequency band of interest. Power_F1 is the spectral power across the frequency band F1 as derived from a multitaper spectral estimator with nine tapers using the mtspectrumc function from Chronux (Bokil et al., 2010). In this manuscript, the summation in the numerator includes frequencies from 1 Hz below to 1 Hz above the central frequency, and the summation in the denominator includes frequencies from 2 Hz below to 2 Hz above the central frequency, excluding the numerator band.
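Eq. 7 can be sketched in Python; here a plain periodogram stands in for the Chronux multitaper estimator, and the interpretation of the denominator band (within ±2 Hz of the center but outside the ±1 Hz signal band) is an assumption:

```python
import numpy as np

def band_snr(x, fs, f0, half_sig=1.0, half_noise=2.0):
    """SNR as in Eq. 7: summed power within +/-1 Hz of the center
    frequency f0, divided by summed power at the surrounding frequencies
    (within +/-2 Hz of f0 but outside the signal band)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    sig = np.abs(freqs - f0) <= half_sig
    noise = (np.abs(freqs - f0) <= half_noise) & ~sig
    return psd[sig].sum() / psd[noise].sum()

# usage: a clear 6 Hz rhythm in white noise has SNR well above 2.5
rng = np.random.default_rng(0)
fs = 1000
t = np.arange(0, 10, 1 / fs)
x = np.cos(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
snr = band_snr(x, fs, 6.0)
```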

Estimating phase difference variability

To determine the consistency of phase across methods, we compute the circular SD (Zrenner et al., 2020). This method characterizes the width of the distribution of phase differences between a pair of phase estimators around a mean difference in the phase. To measure the circular SD of the phase differences between phases θ_M and θ_N for estimators M and N, we apply (Zrenner et al., 2020): CircSD = sqrt(−2 log |(1/n) Σ_{l=1}^{n} exp(i θ_{M,l} − i θ_{N,l})|). (8)

We note that this method of assessing phase across methods presumes that any bias (i.e., a phase difference centered at a value other than zero) is consistent across time (Zrenner et al., 2020). We interpret the CircSD as a measure in radians. We note that this interpretation is valid only when the spread of phase-difference angles is nonuniform; closer to a uniform distribution, this measure exceeds 2π . However, for the cases considered here, this situation does not occur.
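Eq. 8 is the standard circular SD, sqrt(−2 log R), applied to phase differences; a minimal Python sketch (function name illustrative):

```python
import numpy as np

def circ_sd(theta_m, theta_n):
    """Circular SD (radians) of the phase differences between two
    estimators (Eq. 8): sqrt(-2 log R), with R the resultant length
    of the phase-difference angles. Invariant to a constant offset."""
    R = np.abs(np.mean(np.exp(1j * (theta_m - theta_n))))
    R = min(R, 1.0)  # guard against rounding R slightly above 1
    return np.sqrt(-2 * np.log(R))

# usage: a constant offset gives 0; Gaussian jitter of SD 0.1 rad gives ~0.1
rng = np.random.default_rng(0)
ph = rng.uniform(-np.pi, np.pi, 10000)
sd_offset = circ_sd(ph, ph + 0.3)
sd_jitter = circ_sd(ph, ph + 0.1 * rng.standard_normal(ph.size))
```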

Analysis of in vivo data

To illustrate application of the phase estimation methods, we consider an open-source dataset consisting of recordings from two invasive EEG electrodes (iEEG, 400 s sampled at 1000 Hz) in a nonhuman primate anesthetized with propofol (available at http://neurotycho.org/expdatalist/listview?task=75). Propofol anesthesia yields a brain state with synchronized brain rhythms (Lewis et al., 2012), resulting in strong cross-cortical coherence and phase-amplitude coupling relationships, phenomena proposed to coordinate neural spiking within and between brain regions (Jarvis and Mitra, 2001; Lepage et al., 2011; Fries, 2015; Pesaran et al., 2018). Here, we apply different phase estimation methods to analyze the coherence in the slow δ band [0.25,1.5 Hz] between the two recordings, and phase-amplitude coupling between the slow δ and β [13,30] Hz (defined from the classical range provided by Engel and Fries, 2010) bands within each recording (Ma et al., 2019).

To estimate the phase of slow δ activity via the FIR-Hilbert method, we apply an FIR filter to isolate the frequency band of interest (slow δ) and calculate the Hilbert transform to estimate the slow δ phase. We apply the same FIR-Hilbert approach to estimate the amplitude time series for the β band (Cohen, 2008; Nadalin et al., 2019). For the state space method, we first estimate model parameters as well as the central frequencies for each band (slow δ and β) using a subset of the data (10 s). We do so for each recording separately, and then use these parameters for state estimation across all 400 s of data (see Wodeyar et al., 2021, Equations 6–10). For the Poincaré method, we first broadband filter the iEEG between 0.1 and 50 Hz using a fourth order Butterworth filter. Filtering before applying the Poincaré method reduces the influence of higher frequency noise and slow drifts on zero-crossings. The Poincaré method does not yield an amplitude time series, so we assume the amplitude is constant and equal to 1. We note that, in this case, coherence values using the Poincaré method are akin to estimating a phase locking value (Lachaux et al., 1999).

We use phase estimates from each method to estimate the coherence of the slow δ [0.25, 1.5 Hz] rhythm between the two recordings. Coherence is an estimate of the consistency in the phase and amplitude at one frequency (or in a single frequency band) between two electrophysiological time series (for a discussion, see Pesaran et al., 2018). We compute the coherence (C) between two complex-valued row vectors over time, E1 and E2, each representing the amplitude and phase in one frequency band, using the function corr in MATLAB.
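A Python analogue of this computation treats the coherence as a complex correlation between the two analytic signals (the magnitude gives the coherence, the angle the mean phase offset); the function name and simulated inputs here are illustrative:

```python
import numpy as np

def complex_coherence(e1, e2):
    """Complex correlation between two complex-valued analytic signals,
    each carrying the amplitude and phase of one frequency band."""
    e1 = e1 - e1.mean()
    e2 = e2 - e2.mean()
    num = np.vdot(e1, e2)  # conjugate dot product
    den = np.sqrt(np.vdot(e1, e1).real * np.vdot(e2, e2).real)
    return num / den

# usage: two signals with identical amplitudes and a fixed 0.5 rad phase
# lag are perfectly coherent with mean offset 0.5
rng = np.random.default_rng(1)
amp = rng.rayleigh(1.0, 5000)
ph = rng.uniform(-np.pi, np.pi, 5000)
e1 = amp * np.exp(1j * ph)
e2 = amp * np.exp(1j * (ph + 0.5))
c = complex_coherence(e1, e2)
```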

Phase-amplitude coupling represents how the phase of a low frequency rhythm (here, the slow δ band) coordinates the amplitude of a high frequency rhythm (here, the β band; Hyafil et al., 2015). We note that we cannot apply the Poincaré method for phase-amplitude coupling estimation, as this method does not estimate an amplitude time series. We therefore analyze the phase-amplitude coupling within each recording using only the FIR-Hilbert and state space methods. To visualize the phase-amplitude coupling, we divide the phase from 0 to 2π into 20 nonoverlapping intervals and plot the average amplitude for each interval of phase (Canolty and Knight, 2010; Tort et al., 2010).
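The binning step can be sketched as follows (a Python illustration of the 20-bin phase-amplitude profile; the synthetic phase-locked amplitude in the usage is an assumption for demonstration):

```python
import numpy as np

def phase_amplitude_profile(low_phase, high_amp, n_bins=20):
    """Mean high-frequency amplitude within each of n_bins nonoverlapping
    intervals of low-frequency phase between 0 and 2*pi."""
    edges = np.linspace(0, 2 * np.pi, n_bins + 1)
    ph = np.mod(low_phase, 2 * np.pi)      # map phase to [0, 2*pi)
    idx = np.clip(np.digitize(ph, edges) - 1, 0, n_bins - 1)
    return np.array([high_amp[idx == k].mean() for k in range(n_bins)])

# usage: amplitude locked to phase peaks in the bins around the
# preferred phase (here, pi)
ph = np.linspace(0, 20 * np.pi, 20000) % (2 * np.pi)
amp = 1 + np.cos(ph - np.pi)
profile = phase_amplitude_profile(ph, amp)
```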

Code accessibility

All code developed for and applied in this paper is available for reuse and further development at https://github.com/Eden-Kramer-Lab/UncertaintyInPhase. All code was run on a Dell XPS 15 7590 running Windows 10 with 32 GB of RAM. A README file included with the code provides a legend for all files.

Results

A pure sinusoid is the only function that has an explicitly specified phase parameter, but it is too restrictive a model for most brain rhythms (Cole and Voytek, 2017). For the simulated time series considered below, an optimal specification of phase does not exist. The generative models used to simulate the time series have no explicit phase parameter; thus, phase estimates must be derived as functions of the generative stochastic processes. This precludes a “true phase,” i.e., a single phase parameter and an associated error metric. Because in practice only observations of a pure sinusoid have a true phase, we instead consider different methods to estimate phase, each imposing different assumptions about the data. We then compare the relative performance of these different methods. In what follows, we show that when any method indicates reduced uncertainty in the phase estimate, all three methods tend to agree, allowing practitioners to limit the set of phase estimates that must be considered.

We analyze simulation data from four generative models. First, we examine a case where the generative model has a phase parameter but is unlikely to be a good model of empirical data. Then, we consider three more empirically valid models: a nonparametric model, a simple stochastic parametric model, and a dynamical systems model. The first empirically valid generative model can fit the broadest range of neural data, while the second and third generative models are relevant for smaller subsets of data. Each generative model serves as a potential exemplar of a range of empirical data. For all generative models, we make three assumptions. First, we assume that a rhythm always exists in the data with high signal-to-noise ratio (SNR; with SNR ≥ 2.5 to allow for an identifiable rhythm, see Materials and Methods). Second, we assume that the data consists of a single rhythm of interest. Third, we assume that any temporal variability in the data is derived from the generative model.

Amplitude modulated sinusoid in pink noise

To consider the scenario of a well-defined, ground truth phase, we simulate a 6 Hz sinusoid with amplitude modulation at a rate of 0.56 Hz, such that every 1.8 s, the 6 Hz sinusoid emerges for 0.9 s (see the spectrum in Fig. 1a for the signal-to-noise level and an example trace in Fig. 1b). In this signal, a meaningful phase parameter exists only during times when the sinusoid is present. To this signal, we add pink noise (power proportional to 1/f^1.5; Fig. 1a) with a random phase at each frequency bin. We analyze 1000 instantiations of these simulated data, each of duration 10 s. This signal represents the burst-like dynamics of rhythms in brain electrophysiological time series.
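This simulation can be sketched in Python (implementation details such as the square-wave on/off modulation and the spectral shaping of the pink noise are illustrative choices; the paper's exact construction may differ):

```python
import numpy as np

def am_sinusoid_in_pink_noise(dur=10.0, fs=1000.0, f0=6.0, seed=0):
    """Simulate a 6 Hz sinusoid switched on for 0.9 s out of every 1.8 s
    (0.56 Hz square-wave amplitude modulation), plus 1/f^1.5 pink noise
    built with a random phase at each frequency bin."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    t = np.arange(n) / fs
    on = (t % 1.8) < 0.9                      # square-wave modulation
    signal = on * np.cos(2 * np.pi * f0 * t)
    # pink noise: shape a white spectrum by 1/f^0.75, so power ~ 1/f^1.5
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    spec[1:] /= freqs[1:] ** 0.75
    spec[0] = 0.0
    pink = np.fft.irfft(spec, n)
    pink *= signal.std() / pink.std()         # comparable scale (a choice)
    return t, signal, signal + pink, on

t, sig, data, on = am_sinusoid_in_pink_noise()
```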

Figure 1.
FIR-Hilbert and state space methods more accurately recover true phase for an amplitude modulated sinusoid in pink noise. a, Power spectrum of the background pink noise (red) and spectral peak at 6 Hz (blue) as well as harmonics from square-wave amplitude modulation. b, Top, Example trace of the 6 Hz amplitude modulated sinusoid (blue) and the summated data with pink noise (orange). Rectangles indicate periods with (green) and without (red) the sinusoid. Bottom, Phase estimates from the Poincaré method (black), FIR-Hilbert method (mean estimate blue, confidence intervals shaded blue), and state space method (mean estimate red, credible intervals shaded red). c, Distributions of differences in the estimated phase across 1000 simulations between each method and the true phase during times when the sinusoid is present. d, Confidence and credible interval distributions across 1000 simulations when the sinusoid is present and absent. e, Distributions of differences across 1000 simulations between methods when using time segments with or without the sinusoid. Estimates from all methods better align during segments of data when the sinusoid is present. In each box plot, the central mark indicates the median, and the bottom and top edges of the box indicate the 25th and 75th percentiles, respectively. The whiskers extend to the most extreme data points not considered outliers, and the outliers are plotted individually using symbols.

We estimate the phase of the simulated 6 Hz rhythm using the Poincaré method, the FIR-Hilbert method, and the state space method (examples in Fig. 1b). Visual inspection suggests that the latter two methods produce similar phase estimates when the sinusoid is present. However, during periods when the sinusoid is not present (Fig. 1b, red rectangular box), the FIR-Hilbert method computes phase estimates whose trajectory completes full cycles (continuous traversal from −π to π), while phase estimates under the state space method tend to concentrate near 0 or π. Visual inspection shows that the phase from the Poincaré method is affected by the sudden zero crossings, which arise from the appreciable spectral power at frequencies above 6 Hz. This leads to rapid evolution of the phase, suggesting cycles that may be undesirable when the goal is to track the phase of the 6 Hz sinusoid. These qualitative observations are confirmed by direct comparison of the phase estimates to the true phase when the 6 Hz sinusoid is present (Fig. 1c); the state space and FIR-Hilbert methods more accurately (median circular SD of 20° and 15°, respectively) track the true phase compared with the Poincaré method (median circular SD of 72°).

The state space and FIR-Hilbert methods both support the calculation of metrics of uncertainty in the phase estimate, credible intervals for the state space method and confidence intervals for the FIR-Hilbert method (see Materials and Methods). To compare these metrics of uncertainty, we calculate the distributions of the widths of the credible and confidence intervals for time points when the sinusoid is, and is not, present (Fig. 1d). We find that both measures of uncertainty unambiguously distinguish when a sinusoid is present (the uncertainty is low, FIR-Hilbert median confidence interval width of 10°; and state space method median credible interval width of 13°) versus when a sinusoid is absent (the uncertainty is high, FIR-Hilbert median confidence interval width of 54° and state space method median credible interval width of 51°).

Finally, because we lack knowledge of the true phase in real data and in more realistic simulations of neural activity, we compare the phase estimates between methods as a proxy for judging when the signal characteristics used to define phase are consistent. Comparisons between all three methods and their metrics of uncertainty reveal when meaningful estimates of phase can be calculated, i.e., when the sinusoid is present. For each pair of methods, we compute the difference between phases and construct the distribution of the circular SD of these phase differences across time points when the sinusoid is present and when it is absent. We find that when the sinusoid is absent there is a greater cross-method difference in phase estimates, consistent with greater discrepancy in how the phase is defined. We conclude that, for an amplitude modulated sinusoid, the state space and FIR-Hilbert methods more accurately recover the true phase compared with the Poincaré method.
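The circular SD used for these cross-method comparisons can be computed from the mean resultant length R of the wrapped phase differences as sqrt(−2 ln R). A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def wrapped_difference(phase_a, phase_b):
    """Difference between two phase series, wrapped to (-pi, pi]."""
    return np.angle(np.exp(1j * (np.asarray(phase_a) - np.asarray(phase_b))))

def circular_sd_deg(angles):
    """Circular standard deviation in degrees, sqrt(-2 ln R), where R is the
    mean resultant length of the angles (in radians)."""
    # clip guards against R rounding slightly above 1 for identical angles
    R = np.clip(np.abs(np.mean(np.exp(1j * np.asarray(angles)))), 0.0, 1.0)
    return np.degrees(np.sqrt(-2.0 * np.log(R)))
```

When the angles are tightly concentrated, the circular SD approaches the ordinary linear SD; as the angles spread toward uniformity on the circle, it grows without bound.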

Broadband rhythm in pink noise

To match empirical observations in neuroscience (Miller et al., 2009a,b), we consider a phenomenological model of rhythms with increased power over a broad range of frequencies, emerging above 1/f^α background activity (i.e., the background activity power decreases with frequency as 1/f^α, where α ∈ [1,3]; Donoghue et al., 2020). We simulate this type of data in the frequency domain as a complex-valued vector. To do so, we add a Gaussian-function shaped broadband amplitude (center frequency 6 Hz, SD 1 Hz) with a zero phase to a background of pink noise (power proportional to 1/f^1.5; Fig. 2a) with random phases at each frequency. The inverse discrete Fourier transform applied to this complex-valued vector yields a time series of 10 s in duration.
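The frequency-domain construction described above can be sketched as follows; the height of the Gaussian-shaped peak (peak_height) is an illustrative choice, as its value is not specified in this excerpt.

```python
import numpy as np

def broadband_in_pink(fs=1000.0, dur=10.0, f0=6.0, sd=1.0, alpha=1.5,
                      peak_height=5.0, seed=0):
    """Build the signal in the frequency domain: a pink background
    (power ~ 1/f^alpha) with random phases at each frequency, plus a
    Gaussian-shaped amplitude bump at f0 with zero phase, then invert
    the DFT to obtain a real-valued time series."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    bg_amp = np.zeros_like(freqs)
    bg_amp[1:] = freqs[1:] ** (-alpha / 2.0)   # amplitude ~ 1/f^(alpha/2)
    phases = rng.uniform(-np.pi, np.pi, freqs.size)
    spectrum = bg_amp * np.exp(1j * phases)
    spectrum += peak_height * np.exp(-((freqs - f0) ** 2) / (2.0 * sd ** 2))
    return np.fft.irfft(spectrum, n)

x = broadband_in_pink()
```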

Figure 2.

The FIR-Hilbert and state space methods produce consistent results for a broadband signal in pink noise. a, Analytic power spectrum of the background pink noise (red) and broadband spectral peak at 6 Hz (blue). b, Top, Example trace of broadband signal with added pink noise. Bottom, Phase estimates from the Poincaré method (black), FIR-Hilbert method (mean estimate blue, confidence intervals shaded blue), and state space method (mean estimate red, credible intervals shaded red). All three methods show periods where they disagree (t ≈ 1.75 s, see orange rectangle) and agree (t ≈ 1.5 s, green rectangle). c, Distributions of differences in the estimated phase between each pair of methods using (upper) all example data or (lower) a subset of data with confident phase estimates. Estimates from all methods better align during segments of data with confident phase estimates. d, Distributions of differences between methods across 1000 simulations. Using samples of data with high confidence in the phase improves consistency between methods. For broadband rhythms, the state space and FIR-Hilbert methods are more similar to each other than to the Poincaré method. Box plots as in Figure 1.

We estimate the phase of the simulated 6 Hz rhythm using the Poincaré method, the FIR-Hilbert method, and the state space method (examples in Fig. 2b). Visual inspection suggests that the latter two methods produce similar phase estimates when the rhythm is visible; at these times the credible intervals of the state space method tend to overlap the confidence bounds of the FIR-Hilbert method (i.e., the two shaded curves tend to overlap in Fig. 2b, see t ≈ 1.5 s). During periods of low amplitude (t ∈ [1.75, 2] s), the FIR-Hilbert method computes phase estimates whose trajectory completes full cycles (continuous traversal from −π to π) while phase estimates under the state space method remain consistently near ±π. Visual inspection of the phase estimated by the Poincaré method reveals a different result than what is seen using the other methods; sudden zero crossings, which arise because of appreciable spectral power at high frequencies, lead to rapid phase-series cycles that may be undesirable if the goal is to track phase for fluctuations at lower frequencies. The distributions of phase differences between all pairs of methods support the similarity between the state space and FIR-Hilbert methods and their contrast with the Poincaré method (Fig. 2c); phase differences between the state space and FIR-Hilbert methods concentrate near 0, while phase differences with the Poincaré method appear at all angles. Repeating this analysis for 1000 realizations of the generating model and considering the variability of phase differences using circular SD (see Materials and Methods, Estimating phase difference variability), we find consistent results (Fig. 2d, left boxplots).

We infer moments of consistent phase using the confidence bounds determined from the FIR-Hilbert method and the credible intervals from the state space method. We designate time segments with both smaller confidence bounds and smaller credible intervals as “high confidence” time segments. These are times when the values are jointly in the bottom quartile (as empirically determined) for both uncertainty metrics. Across high confidence time segments, the phase difference between the FIR-Hilbert and state space methods shows a circular SD of 14.22 ± 0.07° (mean ± SEM), reduced from 30.02 ± 0.13° when considering all data (i.e., times with and without high confidence in the phase estimates). Similarly, the circular SD between the FIR-Hilbert/Poincaré (44.8 ± 0.3°) and state space/Poincaré (45.8 ± 0.3°) methods decreases in high confidence time segments, compared with all data (FIR-Hilbert/Poincaré: 69.5 ± 0.2°; state space/Poincaré: 68.6 ± 0.2°; Fig. 2d, right boxplots). However, we note that the circular SD remains high for both comparisons involving the Poincaré method, for either choice of time segments (Fig. 2d, red and pink boxplots).
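The joint bottom-quartile selection can be expressed as a boolean mask over time points. The interval widths and phase differences below are synthetic stand-ins used only to illustrate the mechanics:

```python
import numpy as np

def high_confidence_mask(ci_width, cred_width, q=0.25):
    """Times when the FIR-Hilbert confidence interval width and the state
    space credible interval width are both in their bottom quartile."""
    ci = np.asarray(ci_width)
    cred = np.asarray(cred_width)
    return (ci <= np.quantile(ci, q)) & (cred <= np.quantile(cred, q))

def circular_sd_deg(angles):
    R = np.clip(np.abs(np.mean(np.exp(1j * np.asarray(angles)))), 0.0, 1.0)
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

# Synthetic example: phase differences are tight where both widths are small
rng = np.random.default_rng(1)
n = 10000
ci_w = rng.uniform(0.0, 1.0, n)            # stand-in interval widths
cred_w = ci_w + rng.uniform(0.0, 0.2, n)   # correlated, as in practice
mask = high_confidence_mask(ci_w, cred_w)
dphi = np.where(mask, rng.normal(0.0, 0.1, n),
                rng.uniform(-np.pi, np.pi, n))
```

Restricting the circular SD to the masked samples reproduces the qualitative effect reported above: cross-method spread shrinks on high-confidence segments.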

We note that different filtering approaches may impact the FIR-Hilbert results. To investigate this, we also applied to these simulated data a variety of common filtering approaches (as in Zrenner et al., 2020): windowed sinc and least squares finite impulse response filters of second to fifth orders; Butterworth filters of orders 2, 4, and 6; Chebyshev Type I filters of orders 2, 4, and 6; and elliptical filters with 10- and 20-dB attenuations. We maintained a consistent passband across approaches to allow comparability and applied the Hilbert transform to estimate the phase. For each filtering approach, we then computed the phase difference with the primary FIR-Hilbert approach defined in Materials and Methods. Different filtering approaches with the same central frequency and bandwidth nonetheless produce different phase estimates, as expected (Fig. 3a). However, the circular SDs between estimates from different filtering approaches are much smaller (median is <20° in 15 out of 16 cases) than the deviation in phase estimates from the FIR-Hilbert method and either of the state space or Poincaré methods (Fig. 3bi). Because the filtering approaches focus on capturing a narrowband peak in the simulated data, we find that different filtering methods produce similar phase estimates, consistent with Sameni and Seraj (2017) and Zrenner et al. (2020). Finally, we examine the impact of the center filter frequency on the FIR-Hilbert method. To do so, we adjust the center filter frequency between 3 and 9 Hz, while keeping the bandwidth (4 Hz) and filter approach (finite impulse response) fixed. We find that, as the central frequency of the filter deviates from the correct central frequency, phase differences compared with the primary FIR-Hilbert approach rapidly increase (e.g., the phase difference between the 3 Hz filter and 6 Hz filter exceeds that between the FIR-Hilbert and Poincaré methods; Fig. 3ci). 
This difference is reduced (by ∼20°) when examining the subset of data with confident phase estimates (i.e., when the FIR-Hilbert confidence intervals are small; Fig. 3cii). We conclude that during periods of high confidence in the phase estimate (e.g., when the rhythm has higher amplitude, reducing the confidence interval width), the phase is consistently estimated (similar phase estimates across filters) even when the center filter frequency deviates from the true central frequency.

Figure 3.

Different filter parameters minimally change FIR-Hilbert derived phase estimates. a, Distributions of differences between Hilbert transform phase estimates with different filtering approaches (see legend) across 1000 simulations. The variability because of filtering approaches is less than the variability between methods. b, Distributions of differences between phase estimation methods (i) for all data and (ii) when there is high confidence in the phase estimate. c, Distributions of differences between Hilbert transform phase estimates with different central frequencies (±2 Hz bandwidth, see legend) for (i) all data, and (ii) only data with high confidence in the phase estimate. Variability of phase estimates at different central frequencies rapidly increases away from the true central frequency (6 Hz). Box plots as in Figure 1.

We conclude that, for a broadband rhythm in pink noise, the state space and FIR-Hilbert methods produce comparable results. High frequency fluctuations in the time series produce intervals of rapid zero-crossings in the Poincaré method. Intervals exist with increased agreement between all three methods; these intervals occur when the state space and FIR-Hilbert methods produce confident estimates (i.e., small confidence bounds/credible intervals).

Noise-driven damped harmonic oscillator

Auto-regressive (AR) processes of order 2 are linear time-series models capable of exhibiting periodicity (Shumway and Stoffer, 2017). AR(2) processes sustaining a rhythm demonstrate spontaneous variability in the amplitude of the rhythm, or amplitude modulation (Spyropoulos et al., 2022), and a 1/f^α proportional power spectrum. These features are consistent with features that characterize γ band rhythms, as inferred from in vivo observations (Burns et al., 2010; Xing et al., 2012; Spyropoulos et al., 2022). Unlike the previous simulation, where the additive noise is characterized by a 1/f^α power spectrum, here the Gaussian noise is passed through an AR(2) filter to yield a spectrum that has both a broadband peak centered at 6 Hz and a 1/f^α background; see analytic power spectrum in Figure 4a and an example trace in Figure 4b.
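An AR(2) process with a broadband peak near 6 Hz can be simulated by placing a complex pole pair at angle 2πf0/fs and radius r < 1; the pole radius and unit noise variance below are illustrative choices, not the paper's exact parameters.

```python
import numpy as np

def simulate_ar2(f0=6.0, fs=1000.0, r=0.995, dur=10.0, seed=0):
    """x[k] = a1*x[k-1] + a2*x[k-2] + noise, with a complex pole pair at
    radius r and angle 2*pi*f0/fs, producing a broadband peak at f0."""
    rng = np.random.default_rng(seed)
    theta = 2.0 * np.pi * f0 / fs
    a1, a2 = 2.0 * r * np.cos(theta), -r ** 2
    n = int(fs * dur)
    x = np.zeros(n)
    eps = rng.normal(0.0, 1.0, n)
    for k in range(2, n):
        x[k] = a1 * x[k - 1] + a2 * x[k - 2] + eps[k]
    return x

x = simulate_ar2()
```

Pulling r toward 1 sharpens the spectral peak and lengthens the amplitude envelope's correlation time, which is what produces the spontaneous amplitude modulation noted above.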

Figure 4.

All three methods produce similar phase estimates for data generated by an AR(2) model. a, Analytic power spectrum of the AR(2) model. A broadband spectral peak occurs at 6 Hz. b, Top, Trace of example AR(2) signal. Bottom, Phase estimates from the Poincaré method (black), FIR-Hilbert method (mean estimate blue, confidence intervals shaded blue) and state space method (mean estimate red, credible intervals shaded red). Intervals occur in which the methods agree (t < 0.5 s, green rectangle) and disagree (t near 1.5 s, orange rectangle). c, Distributions of differences in the estimated phase between each pair of methods using (upper) all example data or (lower) a subset of data with confident phase estimates. Estimates from all methods better align during segments of data with confident phase estimates. d, Distributions of differences between methods across 1000 simulations. Using samples of data with high confidence in the phase improves consistency between methods. Box plots as in Figure 1.

Visual inspection of the estimated AR(2) phase reveals intervals of agreement and disagreement between the three methods (Fig. 4b) as a result of amplitude modulation. For example, when the rhythm has high amplitude (t < 0.5 s), the uncertainty is reduced and the three phase estimates are qualitatively similar in the features (e.g., peaks, troughs, and zero-crossings) tracked by the phase. At other times the methods disagree about what constitutes a full phase cycle. For example, at t near 1.5 s, the observed time series exhibits two low amplitude troughs. Between these troughs, the state space model suggests no phase cycle occurs (staying near ±π), the Poincaré method indicates a long phase cycle spanning beyond the two troughs, and the FIR-Hilbert method indicates a complete phase cycle between the two troughs. We find similar phase estimates across methods (Fig. 4c, top row), with a circular SD of 37.9 ± 0.1° (mean and SEM across comparisons of all three methods over 1000 realizations of the simulation; see Fig. 4d). Restricting analysis to high confidence time segments, we find increased consistency in phase estimates across methods (Fig. 4c, bottom row) and a nearly twofold reduction in the circular SDs between methods (22.9 ± 0.1°, 1000 realizations of the simulation; see Fig. 4d).

We conclude that, for an AR(2) simulated time series, all three methods produce comparable results. Compared with the simulation of a broadband rhythm in pink noise (Fig. 2), in the AR(2) simulated time series fewer high frequency fluctuations distort the phase estimates under the Poincaré method. Consistent with the previous simulation, when the state space method and FIR-Hilbert methods both produce confident estimates (i.e., small confidence bounds/credible intervals), the agreement across methods improves. Despite this general agreement between methods, an important difference exists; when the rhythm amplitude is small, the state space method produces nonevolving phase estimates (e.g., near ±π in Fig. 4b), while phase estimates continue to evolve for the other methods.

Fitzhugh–Nagumo oscillator

As a third example of a generative model, we consider the Fitzhugh–Nagumo oscillator, an approximation of Hodgkin–Huxley neuronal membrane potential dynamics (Izhikevich and FitzHugh, 2006). The model is defined by the following equations, where (in normalized units) V denotes the voltage and I denotes the input stimulation to the neural population (Hong, 2011):

dV/dt = −10V^3 + 11V^2 − V − W + I + ε
dW/dt = V
I = 0.1ω cos(ωt)
ω = 0.8168; ε ∼ N(0, 3). (9)

We select parameter values that permit chaotic oscillations (as elicited at a “strange attractor”; Izhikevich, 2007), in which the voltage dynamics follow a trajectory consistently around a point in the coordinate space to yield a nonuniform waveform shape (relative to a sinusoid; Cole and Voytek, 2017; Fig. 5a,b). This rhythm is phenomenologically similar to a slow oscillation measured in the ferret visual cortex LFP (Fröhlich and McCormick, 2010, their Fig. 2A) and in hippocampal LFP in rats (see Ouedraogo et al., 2016, their visual abstract). After simulating data using the ode45 function in MATLAB, we rescale time such that the rhythm has an ∼1 Hz periodicity to match a slow oscillation timescale.
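Equation 9 can be integrated numerically. The sketch below uses a simple Euler–Maruyama scheme; the authors integrated the system with MATLAB's ode45 and then rescaled time, so the step size and the treatment of the noise term ε here are assumptions for illustration.

```python
import numpy as np

def simulate_fn(dur=50.0, dt=0.001, omega=0.8168, noise_var=3.0, seed=0):
    """Euler-Maruyama integration of the noise-driven Fitzhugh-Nagumo model
    of Eq. 9. Treating eps ~ N(0, noise_var) as white-noise forcing of the
    voltage equation is an assumption made for this sketch."""
    rng = np.random.default_rng(seed)
    n = int(round(dur / dt))
    t = np.arange(n) * dt
    V = np.zeros(n)
    W = np.zeros(n)
    for k in range(n - 1):
        I = 0.1 * omega * np.cos(omega * t[k])
        drift = -10.0 * V[k] ** 3 + 11.0 * V[k] ** 2 - V[k] - W[k] + I
        V[k + 1] = V[k] + dt * drift + np.sqrt(noise_var * dt) * rng.normal()
        W[k + 1] = W[k] + dt * V[k]
    return t, V, W

t, V, W = simulate_fn()
```

The cubic term confines the voltage near the fixed points of the V-nullcline, so trajectories remain bounded despite the strong noise drive.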

Figure 5.

Phase estimates for a rhythm generated by the Fitzhugh–Nagumo oscillator. a, Example trace of the coordinate space for the two variables comprising the Fitzhugh–Nagumo model. b, Top, Trace of example Fitzhugh–Nagumo oscillator signal. Bottom, Phase estimates from the Poincaré method (black), FIR-Hilbert method (mean estimate blue, confidence intervals shaded blue) and state space method (mean estimate red, credible intervals shaded red). Intervals occur in which the methods agree (t ∈ [6,7]s, green rectangle) and disagree (t ∈ [3, 4] s, orange rectangle). c, Differences in the estimated phase between each pair of methods. Upper row shows differences between methods across all data, and lower row shows differences only for time indices of high confidence. d, Distributions of differences in the estimated phase between each pair of methods across 1000 simulations using (left) all example data or (right) a subset of data with confident phase estimates. Box plots as in Figure 1.

In this generative model, we have access to all the variables underlying the data, i.e., the full system is known, yet we cannot define a singular phase; the definition of phase depends on the voltage events of interest. Visual inspection of the dynamics reveals waves (e.g., between 6 and 7 s in Fig. 5b, green rectangle) for which determining whether a full cycle occurs remains unclear. Two potential ways to define a waveform for this system are: (1) a repetition of the high amplitude wave (V > 1.0; see t near 3.5 s in Fig. 5b, orange rectangle), or (2) the dynamical system completes a full cycle in the phase space at any amplitude (i.e., including the small amplitude rotations in Fig. 5a, and t near 6.5 s in Fig. 5b); we note that waves included in (1) are a subset of waves included in (2).

During times of smaller amplitude waves, all methods infer a complete phase evolution, and the state space and FIR-Hilbert methods suggest increased uncertainty (wider credible/confidence intervals). During higher amplitude waves, the phase cycle estimated from the Poincaré method differs from that of the other two methods (Fig. 5b, shaded orange rectangle, black trace). The Poincaré method produces a longer phase cycle for high-amplitude waves but the same phase cycle for lower amplitude waves. Additionally, we note that the state space method has wider credible intervals at the start of each phase cycle (phase at −π) for high-amplitude events. Comparing methods, we find more agreement between the FIR-Hilbert and state space methods (mean and SEM of the circular SD across 1000 realizations of the simulation: 18.9 ± 0.1°) compared with the Poincaré method (FIR-Hilbert: 39.6 ± 0.1°; state space: 30.9 ± 0.05°; Fig. 5c,d). Differences between methods decrease when examining times with high-confidence phase estimates (FIR-Hilbert and state space: 3.04 ± 0.04°; FIR-Hilbert and Poincaré: 23.9 ± 0.1°; state space and Poincaré: 24.8 ± 0.1°; Fig. 5c,d).

We conclude that for nonsinusoidal signals with multiple waveforms, as exemplified here by the Fitzhugh–Nagumo model, phase estimates from the different methods tend to agree. As in the previous two simulation studies, considering only time indices where we have high confidence in the phase estimates improves similarity across all methods, including with the Poincaré method. However, an important distinction between methods becomes more apparent in this simulation; the state space and FIR-Hilbert methods yield nonuniform phase evolution, i.e., a nonlinear ramp in the phase (see Fig. 5b, orange rectangle). By contrast, the Poincaré method necessarily produces linear increases in the phase. Different methods are therefore appropriate depending on whether phase is interpreted as beginning to evolve only when the voltage begins to rise (state space and FIR-Hilbert methods) or as evolving purely as a function of time (Poincaré method).

Confidence in phase estimates within method does not necessarily mirror phase differences across methods

In the preceding analyses, we considered both within method measures of confidence and between method differences. We now consider the importance of applying both approaches. To do so, we analyze the simulated data at different confidence/credible interval thresholds. For each threshold, we select the subset of data with confidence/credible intervals below the threshold; when the threshold is low, we are selecting only the data with the most confident phase estimates. For each threshold, we examine the cross-method phase differences for the subsets of data by computing the circular SDs. In some cases, we find a near linear monotonic relationship. For example, for the broadband rhythm in pink noise (Fig. 6a), we find smaller deviations between methods at lower thresholds, i.e., estimate agreement across methods increases with confidence in the phase estimate. However, in other cases, we find a more complex relationship between within method confidence and between method differences. For the AR(2) and Fitzhugh–Nagumo generative models, increasing within method confidence does not necessarily decrease the between method phase difference (Fig. 6b,c); in some cases, the disagreement in phase estimates between methods increases as the within method confidence increases. These results indicate that different methods can yield phase estimates that become more similar, or more dissimilar, with increasing within method confidence. This nonmonotonic relationship and difference across generative models highlights the importance of considering both within method confidence/credible intervals and between method comparisons. Doing so allows a better understanding of the reliability and consistency of phase estimates across methods, as shown here for varying signal-to-noise regimes (Fig. 6a,b) and nonlinear waveforms (Fig. 6c).
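The threshold analysis can be sketched as a sweep over percentile cutoffs on the uncertainty-interval widths. The synthetic inputs below mimic the monotone case (as in the broadband rhythm of Fig. 6a), in which agreement improves as the cutoff tightens:

```python
import numpy as np

def circ_sd_deg(angles):
    R = np.clip(np.abs(np.mean(np.exp(1j * angles))), 0.0, 1.0)
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

def sd_vs_confidence_threshold(dphi, width, percentiles=(10, 25, 50, 100)):
    """Circular SD of cross-method phase differences, restricted to samples
    whose uncertainty-interval width falls below each percentile cutoff."""
    return {p: circ_sd_deg(dphi[width <= np.percentile(width, p)])
            for p in percentiles}

# Synthetic monotone case: disagreement grows with interval width
rng = np.random.default_rng(2)
width = rng.uniform(0.0, 1.0, 20000)
dphi = rng.normal(0.0, 0.1 + width)
curve = sd_vs_confidence_threshold(dphi, width)
```

For the AR(2) and Fitzhugh–Nagumo models described above, the corresponding curve need not decrease monotonically, which is the point of this section.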

Figure 6.

Cross-method comparisons do not directly map onto within-method confidence. a, Cross-method differences for the broadband rhythm in pink noise. For each pair of methods (see legend in b) we plot the cross-method differences for different subsets of data selected based on (left column) Hilbert confidence limits or (right column) state space credible intervals. b, Cross-method differences for the AR(2). c, Cross-method differences for the Fitzhugh–Nagumo model.

Example in vivo applications

For example applications to in vivo data, we focus on two common analysis approaches that require phase: coherence (Pesaran et al., 2018) and phase-amplitude coupling (Tort et al., 2010; Hyafil et al., 2015; see Materials and Methods). We analyze 400 s of data from two frontal electrodes with visually identifiable rhythms from an anesthetized macaque monkey (Fig. 7a,b; for details, see http://wiki.neurotycho.org/Anesthesia_Task_Details). Extensive analyses of these data have been conducted by Ma et al. (2019) where those authors demonstrated long range (between frontal and parietal regions) coherence in the slow δ (0.25–1.5 Hz) and local phase-amplitude coupling between rhythms in the slow δ (0.25–1.5 Hz) and β bands (defined by Ma and colleagues as 15–25 Hz).

Figure 7.

Coherence estimates increase when incorporating a measure of uncertainty in the phase. a, Power spectra from multitaper estimation for the two recordings analyzed. Inset, Macaque brain showing location of recordings. b, Example traces of data from the two recordings analyzed. Note the large amplitude, slow (∼0.5 Hz) oscillation. c, Estimates of coherence (numerical value) and the phase differences between recordings (histograms) under different methods and when considering all data (upper row) and only the high confidence segments of data (lower row). A bimodal distribution of phase differences appears for the FIR-Hilbert and state space methods for the high confidence segments. d, Coherence estimates from subsets of data based on thresholds computed from the credible/confidence intervals.

Analysis of coherence

We apply all three phase estimation methods to compute the coherence in the slow δ band. We compare coherence results from two sets of data: all the data, and a subset of the data with “high confidence” in the phase estimates. We define “high confidence” intervals as times falling in the lower quantile of confidence bound widths for the FIR-Hilbert method or of credible interval widths for the state space method. Using either the FIR-Hilbert or the state space method leads to a sharp increase in the estimated coherence for the high-confidence phase data (coherence 0.7) compared with all data (coherence 0.3). We note that we cannot compare the coherence estimate using the Poincaré method to estimates from the other methods; the Poincaré method does not estimate an amplitude time series (see Materials and Methods). However, we can compare the coherence estimated using the Poincaré method (see Materials and Methods, Analysis of in vivo data) across the two subsets of data. In this case, we define high confidence intervals as times at which the FIR-Hilbert and state space phase estimators are simultaneously associated with low statistical uncertainty. We again find that the coherence increases, from 0.15 for all data to 0.58 for the high confidence intervals of data (Fig. 7c, right column). Examining the coherence estimates for subsets of data across a range of percentile thresholds for the credible/confidence intervals, we find that the coherence increases as the percentile threshold decreases; as confidence in the phase increases, so does the coherence. Because the coherence results can depend on the percentile threshold, we recommend reporting results across the range of percentile thresholds. We conclude that, for all three methods, the estimated coherence increases during time indices of high confidence phase estimates.
This conclusion is not obvious; during time indices with high-confidence phase estimates, the phases and amplitudes for the slow δ rhythm could remain randomly distributed between recordings. In addition, we note that isolating the high confidence phase data produced bimodal distributions for the phase differences between electrodes with the FIR-Hilbert and state space methods, a result not clear when analyzing all data (Fig. 7c). A bimodal phase-difference distribution (as seen in Fig. 7c) with one mode (peak) near 0° could imply a common source acting on both areas (Nunez and Srinivasan, 2006), and another mode near 30° could reflect a delayed signal propagating between electrodes. We conclude that coherence estimates can be better interpreted by using segments of data with confident phase estimates.
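One standard amplitude-weighted estimator of narrowband coherence can be computed directly from the amplitude and phase time series; the estimator used in the study is defined in its Materials and Methods, so the form below is illustrative:

```python
import numpy as np

def coherence_from_analytic(amp1, phi1, amp2, phi2, mask=None):
    """Coherence between two narrowband signals from their amplitude and
    phase series: |mean cross-spectrum| normalized by the auto-spectra.
    An optional boolean mask restricts the estimate to high-confidence
    time points."""
    z1 = amp1 * np.exp(1j * phi1)
    z2 = amp2 * np.exp(1j * phi2)
    if mask is not None:
        z1, z2 = z1[mask], z2[mask]
    num = np.abs(np.mean(z1 * np.conj(z2)))
    return num / np.sqrt(np.mean(np.abs(z1) ** 2) * np.mean(np.abs(z2) ** 2))
```

A fixed phase offset between the two series yields coherence near 1 regardless of the offset's value, while independent phases yield coherence near 0.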

Phase-amplitude coupling

In a second example analysis of the in vivo data, we use only the FIR-Hilbert and state space methods to estimate the phase-amplitude coupling between the slow δ phase and β amplitude (see Materials and Methods, Analysis of in vivo data). We note that the Poincaré method is unusable for phase-amplitude coupling estimation because of the lack of an estimated amplitude time series. We expect the low-frequency δ phase to influence the high-frequency β amplitude (Ma et al., 2019). We observe this effect when analyzing all of the data (Fig. 8), and find that considering only subsets of data with high confidence in the phase estimate increases the effect size (Fig. 8c); there is greater modulation (ratio between the maximum and minimum mean amplitudes across all phase bins) of the amplitude of the β band rhythm as a function of the slow δ phase when considering only intervals with high confidence phase estimates. However, for small percentile thresholds (i.e., when confidence in the phase estimate is high, and smaller subsets of data are used for analysis), we can visually identify fluctuations in the modulation as fewer data are included to create the phase-amplitude plot. Like the coherence results, we recommend reporting phase-amplitude coupling metrics across the range of percentile thresholds. These phase-amplitude coupling results are consistent across the FIR-Hilbert and state space methods, demonstrating the robustness of this effect to model misspecification.
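The modulation measure described here (the ratio of maximum to minimum mean amplitude across phase bins) can be sketched as follows; the bin count is an illustrative choice:

```python
import numpy as np

def pac_modulation(phase, amp, n_bins=18, mask=None):
    """Bin the low-frequency phase, average the high-frequency amplitude in
    each bin, and report the max/min ratio of the binned mean amplitudes."""
    phase = np.asarray(phase)
    amp = np.asarray(amp)
    if mask is not None:
        phase, amp = phase[mask], amp[mask]
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[idx == b].mean() for b in range(n_bins)])
    return mean_amp.max() / mean_amp.min(), mean_amp

# Synthetic coupled example: beta-like amplitude locked to delta-like phase
rng = np.random.default_rng(4)
phase = rng.uniform(-np.pi, np.pi, 50000)
amp = 1.0 + 0.5 * np.cos(phase)   # amplitude peaks at phase 0
mod, mean_amp = pac_modulation(phase, amp)
```

In the absence of coupling (amplitude independent of phase) the ratio sits near 1; it grows as the amplitude becomes more strongly phase-locked.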

Figure 8.

Cross-frequency coupling estimates are improved when incorporating a measure of uncertainty in the phase. a, Example traces of the slow δ and β signals showing timing of β rhythms with specific phases of slow δ at each electrode location (inset image with arrow). The δ trace is colored according to β amplitude (see color bar). b, Estimates of phase-amplitude coupling when using the state space method (red, shaded confidence bands) and FIR-Hilbert method (blue, shaded confidence bands) for each recording. Both methods demonstrate significant phase-amplitude coupling between slow δ phase and β amplitude. When only using segments of data with high confidence phase (dashed curves), the modulation increases. c, Cross-frequency coupling strength or “modulation” (the ratio between the maximum and minimum normalized mean amplitudes) estimated from subsets of data based on thresholds computed from the credible/confidence intervals.

Discussion

In this manuscript, we demonstrated the ambiguity in specifying phase for neuroscientific applications. We used simulations from three empirically valid generative models of rhythms and applied three methods to specify phase. Together, these models and methods encompass a wide range of empirical data and methods applied (Bruns, 2004; Ouedraogo et al., 2016; Cole and Voytek, 2017; Donoghue et al., 2020; Wodeyar et al., 2021; Spyropoulos et al., 2022). We showed that for each model of rhythms, each method produced phase estimates that optimally tracked different characteristics of the rhythms. These phase estimates converge when using only segments of data in which the methods indicate greater confidence in the phase estimate. Using this insight, we applied two common analyses, coherence and phase-amplitude coupling, to example in vivo recordings. Considering only the subset of data with high confidence phase estimates, we found increases in both the coherence and phase-amplitude coupling, and a bimodal phase-difference distribution for the coherence that was not apparent in the original analysis. We conclude that phase is ambiguous (i.e., application specific) for neural data but using uncertainty metrics reduces the ambiguity.

Every method to specify phase imposes a model of the data time series. In practice, phase estimation is performed without careful consideration of the model (implicitly) selected. For the estimation methods considered here, we propose a decision tree to guide practitioners in choosing a phase estimator. In essence, the method chosen depends on the goal of the analysis. In the ideal case of sufficiently sampled sinusoidal data (Lainscsek et al., 2017) and no noise process, the FIR-Hilbert, state space, and Poincaré methods are all viable to identify and track inferable rhythms. If the data time series periodicity is nearly sinusoidal and noise contamination is low, then both the FIR-Hilbert and state space methods generate similar phase estimates and are not susceptible to high-frequency fluctuations like those seen using the Poincaré method. Alternatively, if there is amplitude modulation, or if the data are nonsinusoidal with high noise contamination, and the goal is to identify phase for periods with high-SNR waves, then the state space method is the best option. Indeed, we note that most neural data contain noise and amplitude modulation. Finally, if the data are nonsinusoidal and the noise contamination is low, then the Poincaré method performs better at maintaining a linear ramp for the phase compared with the FIR-Hilbert and state space methods.

When phase estimation confidence is high, we typically assume that the method used to estimate phase describes the data accurately. However, high confidence does not necessarily indicate that the method accurately models the data. For instance, when a large amplitude, sudden change (i.e., a spike) occurs in the voltage time series, the Hilbert transform confidence will be high, but the FIR-Hilbert method does not provide an accurate model of the data; using this method, the “phase” of the spike will depend on the filter passband. In this case, cross-method comparisons may indicate high variability across methods despite the high confidence. Low confidence in the phase estimate may occur because of an inappropriate model choice or data that do not capture identifiable rhythms (e.g., because of noise or geometry of sources). In such cases, careful inspection of the data (e.g., additional descriptive analyses to confirm the presence of a rhythm) and consideration of other phase estimation methods may help confirm whether the phase can be defined as desired by the practitioner.

The features tracked by the phase are least dependent on the experimenter’s choice of estimation method when the similarity between methods is high. Across different models of rhythms, we found that uncertainty metrics help identify segments of time when phase is similar across methods. This similarity across methods does not map linearly to the metrics of uncertainty but depends on the model studied (see Results, Confidence in phase estimates within method does not necessarily mirror phase differences across methods). However, the metrics of uncertainty are specific to the model assumed by each method. The uncertainty metric under the FIR-Hilbert method is a transformation of the amplitude time series of the data passed through a narrowband filter. One can identify instances when this measure of uncertainty does not represent features of the rhythm of interest. For example, when the time series is contaminated with 1/f^α noise (see Results, Broadband rhythm in pink noise), high amplitude activity may appear because of noise rather than the rhythm of interest. Conversely, segments of time with low SNR rhythms can have an amplitude that falls below a given amplitude-dependent uncertainty threshold (see Materials and Methods, Eq. 2). Alternatively, the state space model assumes that the analytic signal representing the rhythm process is complex-Gaussian distributed and applies the Kalman filter before quantifying the uncertainty in the phase. When the signal is nonsinusoidal or the noise driving the rhythm is non-Gaussian, the credible intervals from the state space model may fail to capture confidence in the phase accurately, e.g., visible at the start of each phase cycle for high amplitude events in the Fitzhugh–Nagumo simulation (Fig. 5, orange rectangle).
Modifications of the state space model (such as the application of an extended Kalman filter to independently track amplitude and phase or the use of particle filters; Schiff, 2012) can permit more accurate modeling of nonsinusoidal rhythms and non-Gaussian noise.
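To make the state space approach concrete, the following minimal sketch tracks a rhythm as a damped two-dimensional rotation observed in noise and reads phase from the filtered latent state, in the spirit of Matsuda and Komaki (2017). The parameters a (damping), q (process noise), and r (observation noise) are fixed illustrative values here; the method described in the text fits such parameters to data, and this sketch omits the associated credible intervals:

```python
import numpy as np

def kalman_phase(y, fs, freq, a=0.99, q=1e-2, r=1e-1):
    """Minimal Kalman-filter phase tracker; parameters are illustrative."""
    w = 2 * np.pi * freq / fs
    F = a * np.array([[np.cos(w), -np.sin(w)],
                      [np.sin(w),  np.cos(w)]])  # rotate by w per sample
    Q = q * np.eye(2)
    H = np.array([[1.0, 0.0]])  # observe the real coordinate only
    x, P = np.zeros(2), np.eye(2)
    phase = np.empty(len(y))
    for t, obs in enumerate(y):
        # Predict one step ahead
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new observation
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T)[:, 0] / S
        x = x + K * (obs - x[0])
        P = P - np.outer(K, (H @ P)[0])
        # Phase is the angle of the latent analytic-signal estimate
        phase[t] = np.arctan2(x[1], x[0])
    return phase
```

Because the filter is causal, the same loop translates directly to the real-time setting discussed later in the text.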

While we focused on three methods to specify phase, other methods exist; we highlight a few alternatives. Sophisticated approaches to phase estimation exist in the mathematical physics literature, e.g., analysis of the mean first return time (Schwabedal and Pikovsky, 2013; Cao et al., 2020) or the slowest decaying modes of the Kolmogorov backward operator (Thomas and Lindner, 2014). As these approaches are debated (Pikovsky, 2015; Thomas and Lindner, 2015) and complementary approaches are developed (Engel and Kuehn, 2021), we expect their adoption and application in neuroscience will provide additional strategies to estimate phase from neural data. A recently introduced approach, the generalized phase method (Davis et al., 2022), estimates the smoothed Hilbert-transform-derived phase of the largest fluctuation in the data at a given time instant. This method prioritizes the rhythm with the highest amplitude present in the time series and assumes only a single rhythm is present at any instant in time. Given the similarity of this method to the basic Hilbert transform and our intent to examine distinct types of phase estimation methods, we chose not to apply this method here. Another method, the masked empirical mode decomposition or masked EMD (Quinn et al., 2021), seeks to identify distinct waveforms in the time series. Such a method could closely track an inferable rhythm while reducing the contamination of noise, and create a phase evolution that linearly tracks a nonsinusoidal rhythm. However, the masked EMD may struggle to identify a singular waveform if the underlying limit-cycle attractor of the measured dynamical system is chaotic (as seen in the Fitzhugh–Nagumo simulation), leading to wave morphology that is inconsistent across cycles.

A major difficulty in identifying the best phase estimation method is that the quantity defined as phase depends on the goals of the practitioner. Thus, the idea of a “best” phase estimator is not meaningful. Instead, we recommend alternative approaches for comparing estimators, for example, comparing phase from different methods to identify consistency, or testing which method of phase estimation shows the highest effect size for the feature of interest. For example, Davis and colleagues (Davis et al., 2022) compared the phase from the FIR-Hilbert method and that from the generalized phase method to the multiunit neuronal spiking rate, demonstrating increased spike-field coherence under the generalized phase method. However, choosing the phase across methods that maximizes the relationship to neuronal activity or behavior may suffer from “voodoo correlations” (Vul et al., 2009): increases in effect size because of random chance in the method selected. Similarly, the removal of data with low confidence should be done cautiously and with a thorough understanding of the possible implications for the overall conclusions, including voodoo correlations. Further, when presenting results, it is important to clearly communicate the limitations of the analysis and the potential impact of the removed data on the conclusions drawn. We recommend that studies present the results of phase analysis across a continuum of thresholds on the uncertainty metrics to demonstrate the implications of analyzing different subsets of data (as shown in Figs. 7d, 8c). Doing so may reveal facets of the data, for example, the presence of bimodality in the phase differences, that may otherwise be obscured by inaccurate phase estimates (Fig. 7c). The true efficacy of a method to define a phase that tracks the desired feature of population neural activity (such as the peaks or troughs) could be verified by causal approaches, such as stimulation.
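The recommended sensitivity analysis, sweeping a continuum of uncertainty thresholds rather than choosing one, can be sketched as follows. The function and argument names are hypothetical, and the agreement statistic (mean resultant length of the phase difference) is one reasonable choice among several:

```python
import numpy as np

def agreement_by_threshold(phase_a, phase_b, amplitude, quantiles):
    """Circular agreement between two phase estimates (mean resultant
    length of the phase difference; 1 = perfect agreement), restricted
    to samples whose amplitude exceeds each of a continuum of
    quantile-based thresholds. Illustrative sketch only."""
    d = np.exp(1j * (np.asarray(phase_a) - np.asarray(phase_b)))
    agreement = []
    for qt in quantiles:
        keep = amplitude >= np.quantile(amplitude, qt)
        agreement.append(np.abs(np.mean(d[keep])))
    return np.array(agreement)
```

Plotting the returned curve against the threshold quantiles shows how conclusions depend on the subset of data retained, rather than on a single inclusion criterion.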

Closed-loop approaches to noninvasive (Ngo et al., 2013; Klinzing et al., 2021) and invasive (Fröhlich and McCormick, 2010; Lustenberger et al., 2016; Zrenner et al., 2018) stimulation based on a signal’s phase require methods for real-time phase estimation. For example, auditory stimulation triggered by the phase of a slow-wave oscillation improves memory (Ngo et al., 2013) and thalamic stimulation triggered by a patient’s tremor rhythm suppresses tremor in patients with Parkinson’s disease (Cagnan et al., 2017). The Poincaré and FIR-Hilbert methods do not directly support causal phase estimation, precluding their real-time application. However, several methods have been developed to perform real-time phase estimation using the Hilbert transform by first forecasting data under autoregressive models (Blackwood et al., 2018; Schatza et al., 2022), and a similar approach may support real-time implementation of the Poincaré method. The state space method is easily translated to a real-time framework (Wodeyar et al., 2021). Finally, similar to what we showed in the in vivo analyses, measures of uncertainty can help guide closed-loop stimulation to times of greater certainty in the phase estimate (McNamara et al., 2022; Schatza et al., 2022).
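The forecast-then-transform idea behind real-time Hilbert-based estimators can be sketched briefly. This is an illustrative reimplementation under simplifying assumptions (least-squares AR fit, fixed order and horizon), not the algorithm of the cited toolkits:

```python
import numpy as np
from scipy.signal import hilbert

def forecast_phase(x, order=10, horizon=None):
    """Causal phase estimate at the final observed sample: fit an AR
    model by least squares, extend the signal `horizon` samples into
    the future, then take the Hilbert phase of the extended series so
    that edge distortion falls on forecast samples rather than on the
    sample of interest (in the spirit of Blackwood et al., 2018)."""
    if horizon is None:
        horizon = len(x) // 2
    # Design matrix of lagged samples: x[t] ~ sum_k a[k] * x[t-1-k]
    X = np.column_stack(
        [x[order - 1 - k : len(x) - 1 - k] for k in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    ext = list(x)
    for _ in range(horizon):
        ext.append(float(np.dot(a, ext[-1 : -order - 1 : -1])))
    phase = np.angle(hilbert(np.asarray(ext)))
    return phase[len(x) - 1]  # phase at the last *observed* sample
```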

We note that in this manuscript we focused on the case of a single rhythm. However, the state space method supports simultaneous estimation of oscillators for multiple rhythms (Wodeyar et al., 2021; He et al., 2022). By estimating multiple oscillators, the state space method could provide a principled approach to phase estimation across multiple underlying oscillation patterns; the likelihood derived from the state space model can serve as an explicit indicator of which oscillation is dominant at any given time. This approach can help mitigate the risks associated with censoring data based on confidence levels and offer, for example, a more accurate representation of the coupling between spikes and neural signals in the presence of multiple rhythms. In this way, state space models may contribute to a more comprehensive analysis of the data, considering the dynamic presence and contribution of different rhythms, and ensuring that the conclusions drawn are not biased by an overemphasis on high-confidence estimates.

Phase is an important characteristic of neural activity that may be used to index cortical excitability (Freeman, 1961), availability of attentional resources (Busch et al., 2009; Fiebelkorn et al., 2019), and the neural population capacity for communication (Fries, 2015). As such, knowledge of how different phase estimators define phase is critical to address mechanistic questions in neuroscience. In this manuscript, we proposed guidelines to identify the appropriate phase estimator, namely applying estimation methods whose reconstructed phase tracks the relevant characteristics of the rhythm of interest and using metrics of statistical uncertainty. Improved approaches to phase estimation will help us better understand the network implications of rhythms and ascertain the potential for noninvasive interventions that alter cognitive ability (Ngo et al., 2013) or reduce clinical symptoms (Cagnan et al., 2017).

Footnotes

  • The authors declare no competing financial interests.

  • A.W., M.A.K., F.A.M., and U.T.E. were partially supported by National Institutes of Health Grants R01NS110669 and R01EB026938.

This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.

References

  1. Bedrosian E (1963) A product theorem for Hilbert transforms. Proc IEEE 51:868–869.
  2. Bernardi G, Betta M, Ricciardi E, Pietrini P, Tononi G, Siclari F (2019) Regional delta waves in human rapid eye movement sleep. J Neurosci 39:2686–2697. https://doi.org/10.1523/JNEUROSCI.2298-18.2019 pmid:30737310
  3. Blackwood E, Lo M, Widge AS (2018) Continuous phase estimation for phase-locked neural stimulation using an autoregressive model for signal prediction. Annu Int Conf IEEE Eng Med Biol Soc 2018:4736–4739. https://doi.org/10.1109/EMBC.2018.8513232 pmid:30441407
  4. Boashash B (1991) Estimating and interpreting the instantaneous frequency of a signal–part 1: fundamentals. Proc IEEE 80:520–538.
  5. Bokil H, Andrews P, Kulkarni JE, Mehta S, Mitra PP (2010) Chronux: a platform for analyzing neural signals. J Neurosci Methods 192:146–151. https://doi.org/10.1016/j.jneumeth.2010.06.020 pmid:20637804
  6. Bruns A (2004) Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches? J Neurosci Methods 137:321–332. https://doi.org/10.1016/j.jneumeth.2004.03.002 pmid:15262077
  7. Burns SP, Xing D, Shelley MJ, Shapley RM (2010) Searching for autocoherence in the cortical network with a time-frequency analysis of the local field potential. J Neurosci 30:4033–4047. https://doi.org/10.1523/JNEUROSCI.5319-09.2010 pmid:20237274
  8. Busch NA, Dubois J, VanRullen R (2009) The phase of ongoing EEG oscillations predicts visual perception. J Neurosci 29:7869–7876. https://doi.org/10.1523/JNEUROSCI.0113-09.2009 pmid:19535598
  9. Buzsaki G (2011) Rhythms of the brain. New York, NY: Oxford University Press.
  10. Cagnan H, Pedrosa D, Little S, Pogosyan A, Cheeran B, Aziz T, Green A, Fitzgerald J, Foltynie T, Limousin P, Zrinzo L, Hariz M, Friston KJ, Denison T, Brown P (2017) Stimulating at the right time: phase-specific deep brain stimulation. Brain 140:132–145. https://doi.org/10.1093/brain/aww286 pmid:28007997
  11. Canolty RT, Knight RT (2010) The functional role of cross-frequency coupling. Trends Cogn Sci 14:506–515. https://doi.org/10.1016/j.tics.2010.09.001 pmid:20932795
  12. Cao A, Lindner B, Thomas PJ (2020) A partial differential equation for the mean-return-time phase of planar stochastic oscillators. SIAM J Appl Math 80:422–447.
  13. Chavez M, Besserve M, Adam C, Martinerie J (2006) Towards a proper estimation of phase synchronization from time series. J Neurosci Methods 154:149–160. https://doi.org/10.1016/j.jneumeth.2005.12.009 pmid:16445988
  14. Cohen MX (2008) Assessing transient cross-frequency coupling in EEG data. J Neurosci Methods 168:494–499. https://doi.org/10.1016/j.jneumeth.2007.10.012 pmid:18061683
  15. Cole SR, Voytek B (2017) Brain oscillations and the importance of waveform shape. Trends Cogn Sci 21:137–149. https://doi.org/10.1016/j.tics.2016.12.008 pmid:28063662
  16. Davis ZW, Muller L, Reynolds JH (2022) Spontaneous spiking is governed by broadband fluctuations. J Neurosci 42:5159–5172. https://doi.org/10.1523/JNEUROSCI.1899-21.2022 pmid:35606140
  17. Donoghue T, Haller M, Peterson EJ, Varma P, Sebastian P, Gao R, Noto T, Lara AH, Wallis JD, Knight RT, Shestyuk A, Voytek B (2020) Parameterizing neural power spectra into periodic and aperiodic components. Nat Neurosci 23:1655–1665. https://doi.org/10.1038/s41593-020-00744-x pmid:33230329
  18. Engel AK, Fries P (2010) Beta-band oscillations–signalling the status quo? Curr Opin Neurobiol 20:156–165. pmid:20359884
  19. Engel M, Kuehn C (2021) A random dynamical systems perspective on isochronicity for stochastic oscillations. Commun Math Phys 386:1603–1641.
  20. Fiebelkorn IC, Pinsk MA, Kastner S (2018) A dynamic interplay within the frontoparietal network underlies rhythmic spatial attention. Neuron 99:842–853.e8. pmid:30138590
  21. Fiebelkorn IC, Pinsk MA, Kastner S (2019) The mediodorsal pulvinar coordinates the macaque fronto-parietal network during rhythmic spatial attention. Nat Commun 10:215. https://doi.org/10.1038/s41467-018-08151-4 pmid:30644391
  22. Franaszczuk PJ, Blinowska KJ (1985) Linear model of brain electrical activity–EEG as a superposition of damped oscillatory modes. Biol Cybern 53:19–25.
  23. Freeman WJ (1961) Harmonic oscillation as model for cortical excitability changes with attention in cats. Science 133:2058–2059. https://doi.org/10.1126/science.133.3470.2058 pmid:13701652
  24. Fries P (2015) Rhythms for cognition: communication through coherence. Neuron 88:220–235. https://doi.org/10.1016/j.neuron.2015.09.034 pmid:26447583
  25. Fröhlich F, McCormick DA (2010) Endogenous electric fields may guide neocortical network activity. Neuron 67:129–143. https://doi.org/10.1016/j.neuron.2010.06.005 pmid:20624597
  26. He M, Das P, Hotan G, Purdon PL (2022) Automatic segmentation of sleep spindles: a variational switching state-space approach. 56th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, pp 1301–1305. https://doi.org/10.1109/IEEECONF56349.2022.10052015
  27. Helfrich RF, Mander BA, Jagust WJ, Knight RT, Walker MP (2018) Old brains come uncoupled in sleep: slow wave-spindle synchrony, brain atrophy, and forgetting. Neuron 97:221–230.e4. pmid:29249289
  28. Herbst SK, Stefanics G, Obleser J (2022) Endogenous modulation of delta phase by expectation–a replication of Stefanics et al., 2010. Cortex 149:226–245. https://doi.org/10.1016/j.cortex.2022.02.001 pmid:35290859
  29. Hong KS (2011) Synchronization of coupled chaotic FitzHugh–Nagumo neurons via Lyapunov functions. Math Comput Simul 82:590–603.
  30. Hyafil A, Giraud AL, Fontolan L, Gutkin B (2015) Neural cross-frequency coupling: connecting architectures, mechanisms, and functions. Trends Neurosci 38:725–740. https://doi.org/10.1016/j.tins.2015.09.001 pmid:26549886
  31. Izhikevich E (2007) Dynamical systems in neuroscience. Cambridge, MA: MIT Press.
  32. Izhikevich EM, FitzHugh R (2006) FitzHugh-Nagumo model. Scholarpedia. Available at: http://www.scholarpedia.org/article/FitzHugh-Nagumo_model (last accessed January 13, 2022).
  33. Jarvis MR, Mitra PP (2001) Sampling properties of the spectrum and coherency of sequences of action potentials. Neural Comput 13:717–749. https://doi.org/10.1162/089976601300014312 pmid:11255566
  34. Jones SR (2016) When brain rhythms aren’t ‘rhythmic’: implication for their mechanisms and meaning. Curr Opin Neurobiol 40:72–80. https://doi.org/10.1016/j.conb.2016.06.010 pmid:27400290
  35. Klinzing JG, Tashiro L, Ruf S, Wolff M, Born J, Ngo H-VV (2021) Auditory stimulation during sleep suppresses spike activity in benign epilepsy with centrotemporal spikes. Cell Rep Med 2:100432. https://doi.org/10.1016/j.xcrm.2021.100432 pmid:34841286
  36. Kralemann B, Cimponeriu L, Rosenblum M, Pikovsky A, Mrowka R (2007) Uncovering interaction of coupled oscillators from data. Phys Rev E Stat Nonlin Soft Matter Phys 76:e055201. https://doi.org/10.1103/PhysRevE.76.055201 pmid:18233706
  37. Kramer MA (2022) Golden rhythms as a theoretical framework for cross-frequency organization. Neurons Behav Data Anal Theory 1:1–23. https://doi.org/10.51628/001c.38960
  38. Lachaux JP, Rodriguez E, Martinerie J, Varela FJ (1999) Measuring phase synchrony in brain signals. Hum Brain Mapp 8:194–208. https://doi.org/10.1002/(SICI)1097-0193(1999)8:4<194::AID-HBM4>3.0.CO;2-C
  39. Lainscsek C, Muller LE, Sampson AL, Sejnowski TJ (2017) Analytical derivation of nonlinear spectral effects and 1/f scaling artifact in signal processing of real-world data. Neural Comput 29:2004–2020.
  40. Lepage KQ, Kramer MA, Eden UT (2011) The dependence of spike field coherence on expected intensity. Neural Comput 23:2209–2241. https://doi.org/10.1162/NECO_a_00169 pmid:21671792
  41. Lepage KQ, Kramer MA, Eden UT (2013) Some sampling properties of common phase estimators. Neural Comput 25:901–921. https://doi.org/10.1162/NECO_a_00422 pmid:23339610
  42. Lewis LD, Weiner VS, Mukamel EA, Donoghue JA, Eskandar EN, Madsen JR, Anderson WS, Hochberg LR, Cash SS, Brown EN, Purdon PL (2012) Rapid fragmentation of neuronal networks at the onset of propofol-induced unconsciousness. Proc Natl Acad Sci U S A 109:E3377–E3386. https://doi.org/10.1073/pnas.1210907109 pmid:23129622
  43. Lustenberger C, Boyle MR, Alagapan S, Mellin JM, Vaughn BV, Fröhlich F (2016) Feedback-controlled transcranial alternating current stimulation reveals a functional role of sleep spindles in motor memory consolidation. Curr Biol 26:2127–2136. https://doi.org/10.1016/j.cub.2016.06.044 pmid:27476602
  44. Ma L, Liu W, Hudson AE (2019) Propofol anesthesia increases long-range frontoparietal corticocortical interaction in the oculomotor circuit in macaque monkeys. Anesthesiology 130:560–571. https://doi.org/10.1097/ALN.0000000000002637 pmid:30807382
  45. Matsuda T, Komaki F (2017) Time series decomposition into oscillation components and phase estimation. Neural Comput 29:332–367. https://doi.org/10.1162/NECO_a_00916 pmid:27870609
  46. McNamara CG, Rothwell M, Sharott A (2022) Stable, interactive modulation of neuronal oscillations produced through brain-machine equilibrium. Cell Rep 41:111616. https://doi.org/10.1016/j.celrep.2022.111616 pmid:36351413
  47. Miller KJ, Zanos S, Fetz EE, Den Nijs M, Ojemann JG (2009a) Decoupling the cortical power spectrum reveals real-time representation of individual finger movements in humans. J Neurosci 29:3132–3137. https://doi.org/10.1523/JNEUROSCI.5506-08.2009 pmid:19279250
  48. Miller KJ, Sorensen LB, Ojemann JG, Den Nijs M (2009b) Power-law scaling in the brain surface electric potential. PLoS Comput Biol 5:e1000609. https://doi.org/10.1371/journal.pcbi.1000609 pmid:20019800
  49. Morey RD, Hoekstra R, Rouder JN, Lee MD, Wagenmakers EJ (2016) The fallacy of placing confidence in confidence intervals. Psychon Bull Rev 23:103–123. https://doi.org/10.3758/s13423-015-0947-8 pmid:26450628
  50. Nadalin JK, Martinet LE, Blackwood EB, Lo MC, Widge A, Cash SS, Eden UT, Kramer MA (2019) A statistical framework to assess cross-frequency coupling while accounting for confounding analysis effects. Elife 8:e44287. https://doi.org/10.7554/eLife.44287
  51. Navas-Olive A, Valero M, Jurado-Parras T, de Salas-Quiroga A, Averkin RG, Gambino G, Cid E, de la Prida LM (2020) Multimodal determinants of phase-locked dynamics across deep-superficial hippocampal sublayers during theta oscillations. Nat Commun 11:2217. https://doi.org/10.1038/s41467-020-15840-6 pmid:32371879
  52. Ngo H-VV, Martinetz T, Born J, Mölle M (2013) Auditory closed-loop stimulation of the sleep slow oscillation enhances memory. Neuron 78:545–553. https://doi.org/10.1016/j.neuron.2013.03.006 pmid:23583623
  53. Nozari E, Bertolero MA, Stiso J, Caciagli L, Cornblath EJ, He X, Mahadevan AS, Pappas GJ, Bassett DS (2021) Is the brain macroscopically linear? A system identification of resting state dynamics. bioRxiv 423856. https://doi.org/10.1101/2020.12.21.423856
  54. Nunez PL, Srinivasan R (2006) Electric fields of the brain: the neurophysics of EEG. Oxford: Oxford University Press.
  55. Ouedraogo DW, Lenck-Santini P-P, Marti G, Robbe D, Crépel V, Epsztein J (2016) Abnormal up/down membrane potential dynamics coupled with the neocortical slow oscillation in dentate granule cells during the latent phase of temporal lobe epilepsy. eNeuro 3:ENEURO.0017-16.2016. https://doi.org/10.1523/ENEURO.0017-16.2016
  56. Pesaran B, Vinck M, Einevoll GT, Sirota A, Fries P, Siegel M, Truccolo W, Schroeder CE, Srinivasan R (2018) Investigating large-scale brain dynamics using field potential recordings: analysis and interpretation. Nat Neurosci 21:903–919. https://doi.org/10.1038/s41593-018-0171-8 pmid:29942039
  57. Petersen PC, Buzsáki G (2020) Cooling of medial septum reveals theta phase lag coordination of hippocampal cell assemblies. Neuron 107:731–744.e3. https://doi.org/10.1016/j.neuron.2020.05.023 pmid:32526196
  58. Pikovsky A (2015) Comment on “Asymptotic phase for stochastic oscillators.” Phys Rev Lett 115:069401.
  59. Quinn AJ, Lopes-dos-Santos V, Huang N, Liang W-K, Juan C-H, Yeh J-R, Nobre AC, Dupret D, Woolrich MW (2021) Within-cycle instantaneous frequency profiles report oscillatory waveform dynamics. J Neurophysiol 126:1190–1208. https://doi.org/10.1152/jn.00201.2021 pmid:34406888
  60. Ray S, Crone NE, Niebur E, Franaszczuk PJ, Hsiao SS (2008) Neural correlates of high-gamma oscillations (60–200 Hz) in macaque local field potentials and their potential implications in electrocorticography. J Neurosci 28:11526–11536. https://doi.org/10.1523/JNEUROSCI.2848-08.2008 pmid:18987189
  61. Roberts MJ, Lowet E, Brunet NM, Ter Wal M, Tiesinga P, Fries P, De Weerd P (2013) Robust gamma coherence between macaque V1 and V2 by dynamic frequency matching. Neuron 78:523–536. https://doi.org/10.1016/j.neuron.2013.03.003 pmid:23664617
  62. Rosenblum MG, Pikovsky AS, Kurths J (1997) Phase synchronization in driven and coupled chaotic oscillators. IEEE Trans Circuits Syst I 44:874–881. https://doi.org/10.1109/81.633876
  63. Sameni R, Seraj E (2017) A robust statistical framework for instantaneous electroencephalogram phase and frequency estimation and analysis. Physiol Meas 38:2141–2163.
  64. Schatza MJ, Blackwood EB, Nagrale SS, Widge AS (2022) Toolkit for oscillatory real-time tracking and estimation (TORTE). J Neurosci Methods 366:109409. https://doi.org/10.1016/j.jneumeth.2021.109409 pmid:34788695
  65. Schwabedal JTC, Pikovsky A (2013) Phase description of stochastic oscillations. Phys Rev Lett 110:204102.
  66. Schiff SJ (2012) Neural control engineering: the emerging intersection between control theory and neuroscience. Cambridge, MA: MIT Press.
  67. Shumway RH, Stoffer DS (1982) An approach to time series smoothing and forecasting using the EM algorithm. J Time Ser Anal 3:253–264. https://doi.org/10.1111/j.1467-9892.1982.tb00349.x
  68. Shumway RH, Stoffer DS (2017) ARIMA models. In: Time series analysis and its applications: with R examples (Shumway RH, Stoffer DS, eds), pp 75–163. New York: Springer International Publishing.
  69. Soulat H, Stephen EP, Beck AM, Purdon PL (2022) State space methods for phase amplitude coupling analysis. Sci Rep 12:15940. https://doi.org/10.1038/s41598-022-18475-3 pmid:36153353
  70. Spyropoulos G, Saponati M, Dowdall JR, Schölvinck ML, Bosman CA, Lima B, Peter A, Onorato I, Klon-Lipok J, Roese R, Neuenschwander S, Fries P, Vinck M (2022) Spontaneous variability in gamma dynamics described by a damped harmonic oscillator driven by noise. Nat Commun 13:2019. https://doi.org/10.1038/s41467-022-29674-x pmid:35440540
  71. Takens F (1981) Detecting strange attractors in turbulence. In: Dynamical systems and turbulence, Warwick 1980 (Rand D, Young LS, eds), pp 366–381. New York: Springer.
  72. Thomas PJ, Lindner B (2015) Thomas and Lindner reply. Phys Rev Lett 115:069402.
  73. Thomas PJ, Lindner B (2014) Asymptotic phase for stochastic oscillators. Phys Rev Lett 113:254101.
  74. Tort ABL, Komorowski R, Eichenbaum H, Kopell N (2010) Measuring phase-amplitude coupling between neuronal oscillations of different frequencies. J Neurophysiol 104:16.
  75. van Ede F, Quinn AJ, Woolrich MW, Nobre AC (2018) Neural oscillations: sustained rhythms or transient burst-events? Trends Neurosci 41:415–417.
  76. Vul E, Harris C, Winkielman P, Pashler H (2009) Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspect Psychol Sci 4:274–290. https://doi.org/10.1111/j.1745-6924.2009.01125.x pmid:26158964
  77. VanRullen R (2016) Perceptual cycles. Trends Cogn Sci 20:723–735. https://doi.org/10.1016/j.tics.2016.07.006 pmid:27567317
  78. Widge AS, Miller EK (2019) Targeting cognition and networks through neural oscillations: next-generation clinical brain stimulation. JAMA Psychiatry 76:671–672. https://doi.org/10.1001/jamapsychiatry.2019.0740 pmid:31116372
  79. Widmann A, Schröger E, Maess B (2015) Digital filter design for electrophysiological data–a practical approach. J Neurosci Methods 250:34–46. https://doi.org/10.1016/j.jneumeth.2014.08.002 pmid:25128257
  80. Wodeyar A, Srinivasan R (2022) Structural connectome constrained graphical lasso for MEG partial coherence. Netw Neurosci 6:1219–1242.
  81. Wodeyar A, Schatza M, Widge AS, Eden UT, Kramer MA (2021) A state space modeling approach to real-time phase estimation. Elife 10:e68803. https://doi.org/10.7554/eLife.68803
  82. Xing D, Shen Y, Burns S, Yeh C-I, Shapley R, Li W (2012) Stochastic generation of gamma-band activity in primary visual cortex of awake and anesthetized monkeys. J Neurosci 32:13873–13880a. https://doi.org/10.1523/JNEUROSCI.5644-11.2012 pmid:23035096
  83. Yael D, Vecht JJ, Bar-Gad I (2018) Filter-based phase shifts distort neuronal timing information. eNeuro 5:ENEURO.0261-17.2018. https://doi.org/10.1523/ENEURO.0261-17.2018
  84. Zanos S, Zanos TP, Marmarelis VZ, Ojemann GA, Fetz EE (2012) Relationships between spike-free local field potentials and spike timing in human temporal cortex. J Neurophysiol 107:1808–1821. https://doi.org/10.1152/jn.00663.2011 pmid:22157112
  85. Zhong W, Ciatipis M, Wolfenstetter T, Jessberger J, Müller C, Ponsel S, Yanovsky Y, Brankačk J, Tort ABL, Draguhn A (2017) Selective entrainment of gamma subbands by different slow network oscillations. Proc Natl Acad Sci U S A 114:4519–4524. https://doi.org/10.1073/pnas.1617249114 pmid:28396398
  86. Zrenner C, Desideri D, Belardinelli P, Ziemann U (2018) Real-time EEG-defined excitability states determine efficacy of TMS-induced plasticity in human motor cortex. Brain Stimul 11:374–389. https://doi.org/10.1016/j.brs.2017.11.016 pmid:29191438
  87. Zrenner C, Galevska D, Nieminen JO, Baur D, Stefanou M, Ziemann U (2020) The shaky ground truth of real-time phase estimation. Neuroimage 214:116761.

Synthesis

Reviewing Editor: Nicholas J. Priebe, University of Texas at Austin

Decisions are customarily a result of the Reviewing Editor and the peer reviewers coming together and discussing their recommendations until a consensus is reached. When revisions are invited, a fact-based synthesis statement explaining their decision and outlining what is needed to prepare a revision will be listed below. The following reviewer(s) agreed to reveal their identity: Zachary Davis.

SYNTHESIS

Two reviewers read the revision of your manuscript “Different methods to estimate the phase of neural rhythms agree, but only during times of low uncertainty”. Both reviewers recognized that the manuscript has improved as many of the issues raised in the previous review were addressed, but critical questions remain about how to estimate the phase of neural activity. The major issue that the reviewers raise is that the present manuscript does not assess the validity of the phase measurements. This was the first question raised in the previous review (“What is the best strategy to estimate phase from neural data?”). This remains a central concern of the reviewers. Both considered a comparison of the quality of phase estimation from simulated data to be essential to properly evaluate the different methods. The reviewers recognized that the authors may not wish to make comparison of the phase extraction from different models and a ground truth, but nonetheless felt that this is a critical component to assess these different methods. Two other major issues were raised in the review process concerning filtering bandwidths and data inclusion/exclusion. These issues are detailed below, which you should find useful in revising your manuscript, if you decide to resubmit it to eNeuro or to another journal.

Major Issues:

1) What is the best strategy to estimate phase from neural data?

The authors have edited the manuscript to make it clear that the analysis is a comparison between phase extraction techniques without a comparison to the ground truth. But the reviewers feel that this is an essential component of the manuscript. The reasoning in the manuscript is that the simulations have no known phase, as phase is not an explicit parameter of the generative model. Nonetheless the reviewers noted that even though phase is not a parameter of the model, it can be extracted from the Hilbert-transformation of the signal before adding noise. If it is not possible to dissociate the signal and pink noise, one might consider an alternative approach to generating the signal. The difference between the original, noiseless phase, and the signal+noise phase is the difference in each method’s ability to estimate the signal of interest’s phase in the presence of noise. Which method does the best at recovering the signal’s phase in the presence of noise?

2) The authors did a thorough job exploring the space of different filtering techniques. The concern, however, regarded the filter bandwidth, which was not explored. How does the filter width, center frequency, and alignment with the instantaneous frequency of the signal relate to phase certainty using the different methods, and is there an optimal filter bandwidth for their different signals? If the filter is poorly aligned to the instantaneous frequency of the signal, is phase confidence low? Could one recover phase at those times by changing the filter bandwidth?

3) What statistics should be used to exclude or include data?

Both high and low uncertainty data are equally valid. It appears that there are two hypotheses for why uncertainty changes: a) When phase uncertainty is high, basically there is no delta or beta in the signal. By eliminating that data, the methods measure the true delta-beta coherence unfettered by all the time delta and beta that are not present. b) Instead, there may be some latent delta or beta signal but removing that data wouldn’t impact the methods because while the delta-beta coherence is present, it is embedded in the noise. These alternatives could be tested by building ground-truth simulations of the two alternatives and asking how the phase-uncertainty analysis changes the results, and does the real data fall under one hypothesis or the other. Finally, there should be a statistical analysis of the results after removing data based on phase uncertainty, or how changing the criterion for inclusion or exclusion impacts those results.

Minor Issues:

1) Equation 7, as written, will always equal 1. F1 and F2 are not the same in the numerator and denominator and should have different, possibly more descriptive subscripts.

2) Beta frequencies in the cited work (Ma et al.) are described as 15-25 Hz in the text, but in the Methods the authors say they used 13-30 Hz. Why the difference?

3) Figure 7 still shows binary decisions that are not clearly relevant to real data. The authors responded by saying there are degrees of noise and amplitude modulation, but the figure is unchanged and, as written, is unclear how such binary decisions are made.

Further, the authors show that when phase confidence is high there is little difference between the methods, but they do not show which method is better when phase confidence is low, so it is unclear how the authors determine which method is best under what circumstances. An estimation of quality based on a ground truth analysis would be helpful.

Author Response

Synthesis of Reviews:

Computational Neuroscience Model Code Accessibility Comments for Author (Required):

N/A


Major Issues:

Q1) What is the best strategy to estimate phase from neural data?

The authors have edited the manuscript to make it clear that the analysis is a comparison between phase extraction techniques, without a comparison to a ground truth. But the reviewers feel that such a comparison is an essential component of the manuscript. The reasoning in the manuscript is that the simulations have no known phase, as phase is not an explicit parameter of the generative model. Nonetheless, the reviewers noted that even though phase is not a parameter of the model, it can be extracted from the Hilbert transform of the signal before adding noise. If it is not possible to dissociate the signal and pink noise, one might consider an alternative approach to generating the signal. The difference between the original noiseless phase and the phase estimated from signal+noise quantifies each method’s ability to estimate the phase of the signal of interest in the presence of noise. Which method does the best at recovering the signal’s phase in the presence of noise?

A1. We appreciate the reviewers’ recommendation that we include a case with a well-defined “true-phase”. First, we demonstrate for the reviewers the difficulty of estimating the true phase using the FIR-Hilbert method for the simulation “Broadband Rhythm in Pink Noise”. In this example, destructive interference between sinusoids in the broadband rhythm leads to intervals without a meaningful phase, when the amplitude of the broadband signal is low. These intervals are easy to detect in a broadband rhythm without noise, but more difficult to observe when we include additive pink noise. Please see the figures and captions included below for the reviewers.

Response Figure 1: An example simulated broadband rhythm and FIR-Hilbert derived phase estimate. The broadband rhythm without noise (blue trace) and phase estimated by the FIR-Hilbert method (orange trace). Observe that, during the low amplitude interval of voltage activity (approximately 5 s to 5.3 s), the phase derived from the FIR-Hilbert method continues to evolve from −π to π.

Response Figure 2: An example broadband rhythm in pink noise and FIR-Hilbert phase estimate. Same as Response Figure 1, except the broadband rhythm (blue trace) now includes additive pink noise (gray trace). The inclusion of pink noise masks the low amplitude interval (5 s to 5.3 s) in the voltage trace, and complicates determining the accuracy of the FIR-Hilbert method (again estimated for the broadband rhythm without noise, as in Response Figure 1). Because of this noise, the FIR-Hilbert method derived phase estimate may look as though it captures the correct phase of the broadband rhythm with noise.
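The destructive interference described in Response Figures 1 and 2 is easy to reproduce. Below is a minimal sketch in Python/NumPy, not the manuscript's exact simulation: the 1 kHz sampling rate and the nine component frequencies are assumptions for illustration. Summing nearby sinusoids produces a 6 Hz rhythm whose envelope repeatedly collapses toward zero, while the FFT-based Hilbert phase continues to be returned throughout.

```python
import numpy as np

# A "broadband rhythm": nine equally spaced sinusoids centered on 6 Hz.
# (The sampling rate and component frequencies are illustrative assumptions.)
fs = 1000
t = np.arange(20 * fs) / fs
comps = np.linspace(5.0, 7.0, 9)
x = np.sum([np.sin(2 * np.pi * f * t) for f in comps], axis=0)

# Analytic signal via the FFT implementation of the Hilbert transform.
X = np.fft.fft(x)
h = np.zeros(t.size)
h[0] = h[t.size // 2] = 1.0   # DC and Nyquist bins kept once
h[1:t.size // 2] = 2.0        # positive frequencies doubled, negatives dropped
z = np.fft.ifft(X * h)
amp, phase = np.abs(z), np.angle(z)

# Beating between the components drives the envelope to (near) zero --
# destructive interference -- yet a phase value is still returned there;
# it is simply not meaningful during those intervals.
```

Plotting `amp` and `phase` around an envelope zero reproduces the behavior in Response Figure 1: the phase keeps completing cycles even where the rhythm has effectively vanished.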

To further explore these observations, we have updated the manuscript to include a new simulation study consisting of amplitude modulated 6 Hz sinusoids with additive pink noise. We have updated the Results to include this new analysis:

“Amplitude Modulated Sinusoid in Pink Noise

To consider the scenario of a well-defined, ground truth phase, we simulate a 6 Hz sinusoid with amplitude modulation at a rate of 0.56 Hz, such that every 1.8 s the 6 Hz sinusoid emerges for 0.9 s (see the spectrum in Figure 1a for the signal-to-noise level, and an example trace in Figure 1b). In this signal, a meaningful phase parameter exists only during times when the sinusoid is present. To this signal, we add pink noise (power proportional to 1/f^1.5; Figure 1a) with a random starting phase for the sinusoid at each frequency bin. We analyze 1000 instantiations of these simulated data, each of duration 10 s. This signal represents the burst-like dynamics of rhythms in brain electrophysiological time series.

We estimate the phase of the simulated 6 Hz rhythm using the Poincaré method, the FIR-Hilbert method, and the state space method (examples in Figure 1b). Visual inspection suggests that the latter two methods produce similar phase estimates when the sinusoid is present. However, during periods when the sinusoid is not present (red rectangular box in Figure 1b), the FIR-Hilbert method computes phase estimates whose trajectory completes full cycles (continuous traversal from −π to π), while phase estimates under the state space method tend to concentrate near −π or π. Visual inspection shows that the phase from the Poincaré method is affected by the sudden zero crossings, which arise from the appreciable spectral power at frequencies above 6 Hz. This leads to rapid evolution of the phase, suggesting cycles that may be undesirable when the goal is to track the phase of the 6 Hz sinusoid. These qualitative observations are confirmed by direct comparison of the phase estimates to the true phase when the 6 Hz sinusoid is present (Figure 1c); the state space and FIR-Hilbert methods more accurately (median circular standard deviation of 20 degrees and 15 degrees, respectively) track the true phase compared to the Poincaré method (median circular standard deviation of 72 degrees).

The state space and FIR-Hilbert methods both support the calculation of metrics of uncertainty in the phase estimate - credible intervals for the state space method and confidence intervals for the FIR-Hilbert method (see Methods). To compare these metrics of uncertainty, we calculate the distributions of the widths of the credible and confidence intervals for time points when the sinusoid is, and is not, present (Figure 1d). We find that both measures of uncertainty unambiguously distinguish when a sinusoid is present (the uncertainty is low, FIR-Hilbert median confidence interval width of 10 degrees; and state space method median credible interval width of 13 degrees) versus when a sinusoid is absent (the uncertainty is high, FIR-Hilbert median confidence interval width of 54 degrees and state space method median credible interval width of 51 degrees).

Finally, as we lack knowledge of the true phase in real data and in more realistic simulations of neural activity, we compare the phase estimates between methods as a proxy to judge when multiple characteristics of the signal used to define phase are consistent. Comparisons between all three methods and their metrics of uncertainty reveal when meaningful estimates of phase can be calculated, i.e., when the sinusoid is present. For each pair of methods, we compute the difference between phases and construct the circular standard deviation histogram of the phase estimate differences across time points when the sinusoid is present and when it is absent. We find that when the sinusoid is absent there is a greater cross-method difference in phase estimates, consistent with greater discrepancy in how the phase is defined. We conclude that for an amplitude modulated sinusoid the state space and FIR-Hilbert methods more accurately recover the true phase compared to the Poincaré method.”

Figure 1: FIR-Hilbert and state space methods more accurately recover true phase for an amplitude modulated sinusoid in pink noise. (a) Power spectrum of the background pink noise (red) and spectral peak at 6 Hz (blue), as well as harmonics from square-wave amplitude modulation. (b) (Top) Example trace of the 6 Hz amplitude modulated sinusoid (blue) and the summated data with pink noise (orange). Rectangles indicate periods with (green) and without (red) the sinusoid. (Bottom) Phase estimates from the Poincaré method (black), FIR-Hilbert method (mean estimate blue, confidence intervals shaded blue), and state space method (mean estimate red, credible intervals shaded red). (c) Distributions of differences in the estimated phase across 1000 simulations between each method and the true phase during times when the sinusoid is present. (d) Confidence and credible interval distributions across 1000 simulations when the sinusoid is present and absent.
(e) Distributions of differences across 1000 simulations between methods when using time segments with or without the sinusoid. Estimates from all methods better align during segments of data when the sinusoid is present.

Q2) The authors did a thorough job exploring the space of different filtering techniques. The concern, however, regarded the filter bandwidth, which was not explored. How does the filter width, center frequency, and alignment with the instantaneous frequency of the signal relate to phase certainty using the different methods, and is there an optimal filter bandwidth for their different signals? If the filter is poorly aligned to the instantaneous frequency of the signal, is phase confidence low? Could one recover phase at those times by changing the filter bandwidth?

A2. Thank you for raising these important questions. Indeed, filters offer many parameters to vary, and the consequences of varying these parameters for the phase estimates are not always clear. We have updated the manuscript in two ways to address these questions.

First, for the FIR-Hilbert transform method, we derive intuition from the equation for the distribution of the estimated phase under the Hilbert transform (Eq 2 of the manuscript), in which A_t is the time-element of the amplitude time series, B is the bandwidth of the filter divided by the sampling frequency, and σ² is the additive noise variance, estimated using the residual after removing the filtered data from the unfiltered data.

We have updated the Methods to address questions related to the filter bandwidth as follows:

”... From the inferred confidence bounds, asymptotic expressions for the maximum likelihood estimate of phase can be derived, assuming that the amplitude of the signal is significantly higher than the standard deviation of the noise (for further discussion, see Lepage et al., 2013).

We note that the confidence interval for the phase estimate depends on the filter bandwidth (B): the wider the bandwidth, the greater the variance (or lower the confidence) in the phase estimate (Eq 2). Similarly, a filter with center frequency misaligned with the center frequency of the true signal will increase variance in the phase estimate by increasing the additive noise variance (σ², the inferred variance of the data after removing the filter-applied signal) and reducing the estimated amplitude time series (A_t, the amplitude time series of the filter-applied signal). In this case, increasing the filter bandwidth (B) may reduce variance in the phase estimate by reducing the additive noise variance (σ²) and increasing the estimated amplitude time series (A_t). Thus, characterizing the tradeoff between the filter bandwidth (B), the additive noise variance (σ²), and the estimated amplitude time series (A_t) depends on the specific signal of interest.”
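The bandwidth tradeoff described in the quoted Methods text can be checked empirically. The sketch below measures the circular standard deviation of the phase error for a 6 Hz rhythm in white noise as the passband widens; it is only illustrative, since it uses an ideal FFT-mask band-pass in place of the manuscript's FIR filter, and the sampling rate and noise level are assumptions.

```python
import numpy as np

def analytic_bandpass(x, fs, f_lo, f_hi):
    """Ideal (FFT-mask) band-pass combined with the Hilbert transform:
    keep positive-frequency bins in [f_lo, f_hi], double them, invert.
    A simplified stand-in for an FIR filter followed by the Hilbert transform."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(x.size, d=1.0 / fs)
    return np.fft.ifft(2.0 * X * ((f >= f_lo) & (f <= f_hi)))

def circ_sd_deg(angles):
    """Circular standard deviation sqrt(-2 ln R), in degrees."""
    R = np.abs(np.mean(np.exp(1j * angles)))
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

fs = 1000                                      # assumed sampling rate (Hz)
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
true_phase = 2 * np.pi * 6 * t - np.pi / 2     # analytic phase of sin(2*pi*6*t)

# A wider band admits more noise power while the rhythm's amplitude is fixed,
# so the phase-estimate error grows with the bandwidth B.
err = {bw: circ_sd_deg(np.angle(analytic_bandpass(x, fs, 6 - bw / 2, 6 + bw / 2))
                       - true_phase)
       for bw in (2, 6, 10)}                   # full bandwidths, in Hz
```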

Second, to investigate how a mismatch between the center frequency of the filter and the true signal impacts the FIR-Hilbert method, we have updated the Results to include a new simulation, as follows:

Figure 3: Different filter parameters minimally change FIR-Hilbert derived phase estimates. (a) Distributions of differences between Hilbert transform phase estimates with different filtering approaches (see legend) across 1000 simulations. The variability due to filtering approaches is less than the variability between methods. (b) Distributions of differences between phase estimation methods (i) for all data and (ii) when there is high confidence in the phase estimate. (c) Distributions of differences between Hilbert transform phase estimates with different central frequencies (±2 Hz bandwidth, see legend) for (i) all data, and (ii) only data with high confidence in the phase estimate. Variability of phase estimates at different central frequencies rapidly increases away from the true central frequency (6 Hz).

Results:

”...Different filtering approaches with the same central frequency and bandwidth nonetheless produce different phase estimates, as expected (Fig 3a). However, the circular standard deviations between estimates from different filtering approaches are much smaller (median is less than 20 degrees in 15 out of 16 cases) than the deviation in phase estimates between the FIR-Hilbert method and either of the state space or Poincaré methods (Fig 3b.i). Because the filtering approaches focus on capturing a narrowband peak in the simulated data, we find that different filtering methods produce similar phase estimates, consistent with Sameni and Seraj (2017) and Zrenner et al. (2020). Finally, we examine the impact of the center filter frequency on the FIR-Hilbert method. To do so, we adjust the center filter frequency between 3 Hz and 9 Hz, while keeping the bandwidth (4 Hz) and filter approach (finite impulse response) fixed. We find that, as the central frequency of the filter deviates from the correct central frequency, phase differences compared to the primary FIR-Hilbert approach rapidly increase (e.g., the phase difference between the 3 Hz filter and 6 Hz filter exceeds that between the FIR-Hilbert and Poincaré methods; Figure 3b.i, 3c.i). This difference is reduced (by approximately 20 degrees) when examining the subset of data with confident phase estimates (i.e., when the FIR-Hilbert confidence intervals are small; Figure 3c.ii). We conclude that during periods of high confidence in the phase estimate (e.g., when the rhythm has higher amplitude, reducing the confidence interval width), the phase is consistently estimated (similar phase estimates across filters) even when the center filter frequency deviates from the true central frequency. ...”
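The center-frequency sweep can be mimicked in the same spirit, again with an ideal FFT-mask filter as a stand-in for the manuscript's FIR filters and an assumed 1 kHz sampling rate: phase estimates from detuned filters (±2 Hz band) are compared against the filter centered on the true 6 Hz frequency.

```python
import numpy as np

def analytic_bandpass(x, fs, f_lo, f_hi):
    """Ideal FFT-mask band-pass plus Hilbert transform (a simplified stand-in
    for the FIR filters compared in the manuscript)."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(x.size, d=1.0 / fs)
    return np.fft.ifft(2.0 * X * ((f >= f_lo) & (f <= f_hi)))

def circ_sd_deg(angles):
    """Circular standard deviation sqrt(-2 ln R), in degrees."""
    R = np.abs(np.mean(np.exp(1j * angles)))
    return np.degrees(np.sqrt(-2.0 * np.log(R)))

fs = 1000                                      # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

# Phase from filters centered 3..9 Hz, compared against the 6 Hz-centered filter.
ref = np.angle(analytic_bandpass(x, fs, 4, 8))
dev = {fc: circ_sd_deg(np.angle(analytic_bandpass(x, fs, fc - 2, fc + 2)) - ref)
       for fc in range(3, 10)}
```

Once the passband no longer contains the 6 Hz rhythm at all (centers of 3 Hz or 9 Hz), the estimated phase is noise-driven and the cross-filter deviation becomes large, consistent with the rapid increase reported in the quoted Results.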

Q3) What statistics should be used to exclude or include data?

Both high and low uncertainty data are equally valid. It appears that there are two hypotheses for why uncertainty changes:

a) When phase uncertainty is high, basically there is no delta or beta in the signal. By eliminating that data, the methods measure the true delta-beta coherence unfettered by all the time delta and beta that are not present.

b) Instead, there may be some latent delta or beta signal but removing that data wouldn’t impact the methods because while the delta-beta coherence is present, it is embedded in the noise.

These alternatives could be tested by building ground-truth simulations of the two alternatives and asking how the phase-uncertainty analysis changes the results, and does the real data fall under one hypothesis or the other. Finally, there should be a statistical analysis of the results after removing data based on phase uncertainty, or how changing the criterion for inclusion or exclusion impacts those results.

A3. We agree that it would be very useful to perform a statistical test - perhaps using the uncertainty in the phase - to distinguish the case of a latent oscillation from the case of an oscillation occurring in discrete bursts. We have included a new simulation (please see A1) to partially address this question. We show that, for a sinusoid in pink noise that appears intermittently as bursts (representing the idea that a delta/beta rhythm appears as bursts), we can identify times when the sinusoid is present using the credible/confidence intervals on the phase. Further development of this initial result (e.g., examining multiple bursting oscillators and testing the impact of phase uncertainty analysis on coherence or cross-frequency coupling) will be interesting to pursue, but is beyond the scope of the current manuscript.

To perform a statistical analysis of the results after removing data based on phase uncertainty, we ran analyses of the in vivo data at different thresholds of confidence/credible intervals, thereby including/excluding different subsets of data. We have updated two figures in the revised manuscript as follows:

Figure 6: Coherence estimates increase when incorporating a measure of uncertainty in the phase. (a) Power spectra from multitaper estimation for the two recordings analyzed. (Inset) Macaque brain showing location of recordings. (b) Example traces of data from the two recordings analyzed. Note the large amplitude, slow (∼0.5 Hz) oscillation. (c) Estimates of coherence (numerical value) and the phase differences between recordings (histograms) under different methods and when considering all data (upper row) and only the high confidence segments of data (lower row). A bimodal distribution of phase differences appears for the FIR-Hilbert and state space methods for the high confidence segments. (d) Coherence estimates from subsets of data based on thresholds computed from the credible/confidence intervals.

Figure 7: Cross-frequency coupling estimates are improved when incorporating a measure of uncertainty in the phase. (a) Example traces of the slow delta and beta signals showing timing of beta rhythms with specific phases of slow delta at each electrode location (inset image with arrow). Delta trace is colored according to beta amplitude (see colorbar). (b) Estimates of phase-amplitude coupling when using the state space method (red, shaded confidence bands) and FIR-Hilbert (blue, shaded confidence bands) method for each recording. Both methods demonstrate significant phase-amplitude coupling between slow delta phase and beta amplitude. When only using segments of data with high confidence phase (dashed curves), the modulation increases. (c) Cross-frequency coupling strength or ‘Modulation’ (the ratio between the maximum and minimum normalized mean amplitudes) estimated from subsets of data based on thresholds computed from the credible/confidence intervals.

We have edited the Results to refer to these figures as follows:

Results, Analysis of Coherence:

“We again find that the coherence increases from 0.15 for all data to 0.58 for the high confidence intervals of data (Figure 6c, right column). Examining the coherence estimates for subsets of data across a range of percentile thresholds for the credible/confidence intervals, we find that the coherence increases as the percentile threshold decreases; as confidence in the phase increases, so does the coherence. Because the coherence results can depend on the percentile threshold, we recommend reporting results across the range of percentile thresholds.”
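The threshold sweep can be illustrated with a toy version of this analysis. The sketch below uses the phase-locking statistic |⟨exp(iΔφ)⟩| as a stand-in for the paper's coherence estimate, and uses the product of the two analytic amplitudes as an inverse proxy for phase uncertainty (in the spirit of Eq 2, where larger amplitude implies a tighter confidence interval); the signals, sampling rate, and thresholds are all assumptions.

```python
import numpy as np

def analytic_bandpass(x, fs, f_lo, f_hi):
    """Ideal FFT-mask band-pass plus Hilbert transform."""
    X = np.fft.fft(x)
    f = np.fft.fftfreq(x.size, d=1.0 / fs)
    return np.fft.ifft(2.0 * X * ((f >= f_lo) & (f <= f_hi)))

fs = 1000                                       # assumed sampling rate (Hz)
t = np.arange(20 * fs) / fs
rng = np.random.default_rng(4)
burst = (t % 2.0) < 1.0                         # shared rhythm present half the time
shared = np.sin(2 * np.pi * 6 * t) * burst
x1 = shared + 0.7 * rng.standard_normal(t.size)
x2 = shared + 0.7 * rng.standard_normal(t.size)

z1 = analytic_bandpass(x1, fs, 4, 8)
z2 = analytic_bandpass(x2, fs, 4, 8)
dphi = np.angle(z1) - np.angle(z2)
certainty = np.abs(z1) * np.abs(z2)             # amplitude as an inverse proxy
                                                # for phase uncertainty

# Phase coherence |<exp(i*dphi)>| over progressively more certain subsets.
coh = {}
for pct in (100, 50, 25):
    keep = certainty >= np.percentile(certainty, 100 - pct)
    coh[pct] = np.abs(np.mean(np.exp(1j * dphi[keep])))
```

Shrinking the kept fraction from 100% to 25% concentrates the estimate on the burst periods and raises the coherence-like statistic, mirroring the trend the authors report.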

Results, Analysis of Phase-Amplitude Coupling:

”... We observe this effect when analyzing all of the data (Figure 7), and find that considering only subsets of data with high confidence in the phase estimate increases the effect size (Figure 7c) - there is greater modulation (the ratio between the maximum and minimum normalized mean amplitudes across all phase bins) of the amplitude of the beta band rhythm as a function of the slow delta phase when considering only intervals with high confidence phase estimates. However, for small percentile thresholds (i.e., when confidence in the phase estimate is high, and smaller subsets of data are used for analysis), we can visually identify fluctuations in the modulation as less data is included to create the phase-amplitude plot. Like the coherence results, we recommend reporting phase-amplitude coupling metrics across the range of percentile thresholds. ...”
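The modulation metric itself is simple to compute. Below is a sketch with synthetic delta phases and beta amplitudes; the coupling strength, bin count, and noise level are arbitrary choices for illustration.

```python
import numpy as np

def modulation(phase, amp, n_bins=18):
    """Ratio of the maximum to the minimum normalized mean amplitude across
    phase bins; 1 means no coupling, larger values mean stronger coupling."""
    width = 2 * np.pi / n_bins
    bins = np.minimum((phase % (2 * np.pi)) // width, n_bins - 1).astype(int)
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    norm = mean_amp / mean_amp.sum()
    return norm.max() / norm.min()

rng = np.random.default_rng(5)
delta_phase = rng.uniform(0, 2 * np.pi, 50_000)
# Coupled case: beta amplitude peaks at delta phase 0.
beta_coupled = 1.0 + 0.5 * np.cos(delta_phase) + 0.1 * rng.standard_normal(50_000)
# Uncoupled case: beta amplitude is independent of delta phase.
beta_flat = 1.0 + 0.1 * rng.standard_normal(50_000)
```

With these choices, `modulation(delta_phase, beta_coupled)` is well above 1, while `modulation(delta_phase, beta_flat)` stays near 1.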

Additionally, we have edited the Discussion to caution against the use of a specific percentile threshold when performing these analyses:

“Similarly, the removal of data with low confidence should be done cautiously and with a thorough understanding of the possible implications on the overall conclusions, including voodoo correlations. Further, when presenting the results, it is important to clearly communicate the limitations of the analysis and the potential impact of the removed data on the conclusions drawn. We recommend that studies present the results of phase analysis across a continuum of thresholds on the uncertainty metrics to demonstrate the implications of analyzing different subsets of data (as shown in Figure 6d and 7c). Doing so may reveal facets of the data, for example, the presence of bi-modality in the phase differences, that may otherwise be obscured by inaccurate phase estimates (see Figure 6c). The true efficacy of a method to define phase that tracks the desired feature of population neural activity (such as the peaks or troughs) could be verified by causal approaches, such as stimulation.”

Minor Issues:

1) Equation 7, as written, will always equal 1. F1 and F2 are not the same in the numerator and denominator and should have different, possibly more descriptive subscripts.

The equation is written in set notation and is intended to say that all frequencies within the set [f1, f2] are included in the numerator and all frequencies not in the set [f1, f2] are in the denominator. We have edited the equation to make it clearer: SNR = Σ_{f ∈ F} S(f) / Σ_{f ∉ F} S(f), where F = [f1, f2] is the frequency band of interest and f ∉ F are frequencies outside the frequency band of interest. S(f) is the spectral power at frequency f, derived from a multitaper spectral estimator with 9 tapers using the mtspectrumc function from Chronux (Bokil et al., 2010). In this manuscript, the summation in the numerator includes frequencies from 1 Hz below to 1 Hz above the central frequency, and the summation in the denominator includes the remaining frequencies from 2 Hz below to 2 Hz above the central frequency.
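Read this way, the ratio is straightforward to compute from any spectral estimate. Below is a sketch using a plain periodogram (the manuscript uses a 9-taper multitaper estimate via Chronux's mtspectrumc instead; the sampling rate and test signals here are assumptions).

```python
import numpy as np

def band_snr(x, fs, f0, half_in=1.0, half_out=2.0):
    """Power within f0 +/- half_in divided by power at the remaining
    frequencies within f0 +/- half_out, from a plain periodogram."""
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    P = np.abs(np.fft.rfft(x)) ** 2
    d = np.abs(f - f0)
    return P[d <= half_in].sum() / P[(d > half_in) & (d <= half_out)].sum()

fs = 1000                                       # assumed sampling rate (Hz)
t = np.arange(10 * fs) / fs
rng = np.random.default_rng(6)
rhythm = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
noise_only = rng.standard_normal(t.size)
```

`band_snr(rhythm, fs, 6.0)` is large, while `band_snr(noise_only, fs, 6.0)` hovers near 1, since the numerator and denominator then cover comparable numbers of noise-only frequency bins.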

2) Beta frequencies in the cited work (Ma et al.) are described as 15-25 Hz in the text, but in the Methods the authors say they used 13-30 Hz. Why the difference?

We used the classical definition of the beta band to align with the traditional definition of beta in Engel and Fries (2010). We’ve updated the manuscript as follows...

”... and phase-amplitude coupling between the slow delta [0.25, 1.5] Hz and beta [13, 30] Hz (defined from the classical definition provided in Engel and Fries, 2010) bands within each recording (Ma et al., 2019). ...”

3) Figure 7 still shows binary decisions that are not clearly relevant to real data. The authors responded by saying there are degrees of noise and amplitude modulation, but the figure is unchanged and, as written, is unclear how such binary decisions are made.

Further, the authors show that when phase confidence is high there is little difference between the methods, but they do not show which method is better when phase confidence is low, so it is unclear how the authors determine which method is best under what circumstances. An estimation of quality based on a ground truth analysis would be helpful.

We have now removed the figure and only retained the text in the Discussion to present this as a nominal decision-making process. We hope doing so will offer practitioners a guide, and suggest potential simulations to test their expectations, without implying that these decisions can be made based on clear quantitative metrics. As the reviewers suggest, these are qualitative judgments: we do not have ground truth in the data, and the decisions will depend on the model that practitioners have for the rhythms under analysis.

Additionally, when phase confidence is low, all methods provide distinct estimates of the phase, and we cannot trust any one method to provide an accurate representation of the phase except by chance - that is, when the generative model of the data happens to be well captured by the model assumed by a given method. We consider this issue in the Discussion section.

As recommended, we have now performed an analysis of an amplitude modulated narrowband oscillation in pink noise to demonstrate that applying phase uncertainty metrics and comparison across methods are reliable methods to identify moments when phase can be accurately estimated; please see A1.

Keywords

  • EEG
  • Hilbert transform
  • LFP
  • phase
  • rhythm
  • state space model
