Non-linguistic auditory processing in stuttering: Evidence from behavior and event-related brain potentials

https://doi.org/10.1016/j.jfludis.2008.08.001

Abstract

Auditory processing deficits are hypothesized to play a role in the disorder of stuttering (e.g., Hall, J. W., & Jerger, J. (1978). Central auditory function in stutterers. Journal of Speech and Hearing Research, 21, 324–337). The current study focused on non-linguistic auditory processing without verbal responses to explore the relationship between behavior and neural activity in the absence of cognitive demands related to language processing and articulatory planning for speaking. A pure-tone, oddball paradigm was utilized to compare behavioral accuracy and reaction times for adults who stutter (AWS) and normally fluent speakers (NFS). Additionally, event-related potentials elicited by brief standard and target tones were compared for the two groups. Results revealed that, as a group, AWS tended to perform less accurately compared to the NFS and were slower to respond to target stimuli. However, inspection of individual data indicated that most of the AWS performed within the range of normally fluent speakers, while a small subset of AWS was well outside the normal range. This subgroup of AWS also demonstrated early perceptual processes (as indexed by N100 and P200 amplitudes) indicative of reduced cortical representation of auditory input. The P300 mean amplitudes elicited in AWS tended to be reduced overall compared to those of the NFS, suggesting the possibility of weaker updates in working memory for representations of the target tone stimuli in AWS. Taken together, these findings point to the possibility that a subset of AWS exhibit non-linguistic auditory processing deficits related to altered cortical processing.

Educational objectives: After reading this article, the reader will be able to: (1) summarize research findings of non-linguistic auditory processing in stuttering; (2) discuss the relationship between behavioral performance for auditory processing and the underlying event-related brain potentials; (3) discuss the importance of analyses of individual versus group data in stuttering; and (4) summarize how the findings of this study relate to a multifactorial model of stuttering.

Introduction

Stuttering is an involuntary disruption in fluency characterized by abnormal frequency or duration of interruptions in the flow of speech, namely repetitions, prolongations, and blocks (Guitar, 1998). The cause of stuttering is unknown, though a variety of etiologies have been proposed (for a complete review of proposed etiological theories, see Bloodstein, 1995; Curlee & Siegel, 1997; Manning, 2001). Many current models of stuttering incorporate atypical neurophysiology, genetic factors, a person’s environment, personality, learning ability, auditory processing, language processing, and speech production (Bloodstein, 1995; Guitar, 1998; Lawrence & Barclay, 1998). Smith and Kelly (1997) propose a non-linear multifactorial model of stuttering, which incorporates the complex relationship of many factors that can influence stuttering and their compounded and interactive effects on the speech motor system. It is hypothesized that the contribution level of each factor determines distinctive behavior patterns that emerge among individuals (Smith, 1990). The present experiment focuses on one factor hypothesized to play a role in stuttering: auditory processing.

Auditory processing is a single factor among many potential contributing variables associated with stuttering (Hall & Jerger, 1978; Rosenfield & Jerger, 1984). Since the 1950s, clinicians have used altered auditory input, such as delayed auditory feedback (DAF), to reduce stuttering, with results suggesting that auditory processing and feedback may play important roles in stuttering (Rosenfield & Jerger, 1984). However, it was not clear from initial studies whether the fluency-enhancing effects of DAF were due to altered input or the resulting slower rate of speech. Later studies revealed that even under rapid speaking conditions, delayed auditory feedback and frequency-altered feedback continued to enhance fluency in adults who stutter (AWS), indicating that auditory processing and feedback, not simply slower speech rate, play a role in enhanced fluency (Kalinowski, Armson, Roland-Mieszkowski, Stuart, & Gracco, 1993).

The effects of altered feedback on fluency in persons who stutter have led to investigations of the auditory system for language processing and its function in stuttering. Studies of auditory language perception have indicated differences in linguistic processing in stuttering. A magnetoencephalography (MEG) study of auditory cortical potentials examined neural responses to auditory speech stimuli in AWS (Biermann-Ruben, Salmelin, & Schnitzler, 2005). For the single word and sentence task, cortical potentials were acquired during the auditory presentation, then participants repeated the words or repeated or rephrased the sentences. Compared to the activation patterns observed in the normally fluent speakers (NFS), the AWS exhibited additional activation in the left inferior frontal and right rolandic regions of the brain in response to auditory single word and sentence stimuli. These findings indicate that neural activations differ in adults who stutter during speech perception; however, it is not clear from this study whether these activation patterns were influenced by the speaking task that followed the auditory stimuli. Weber-Fox and Hampton (2008) presented natural speech sentences with syntactic and semantic processing constraints without any speaking requirement. Findings revealed that NFS exhibited typical N400 and P600 event-related brain potentials (ERPs) differentially to the semantic and verb-agreement violations respectively. In contrast, the AWS exhibited N400–P600 biphasic patterns elicited by both the semantic and verb-agreement conditions. These findings indicated atypical processing of auditory linguistic information in AWS despite no overt speech requirements and the fact that they scored within the normal range on formal tests of language performance. 
Therefore, MEG and electrophysiological evidence suggests that language perception in the auditory modality may be atypical in AWS; however, it is unknown whether these differences in neural activation may be attributed entirely to language-specific processing differences or whether differences in earlier perceptual processing in the auditory system may also play a role.

Existing literature does not provide clear evidence about the role of the non-linguistic auditory processing in stuttering. As summarized below, the findings from previously published studies of auditory function in stuttering utilizing behavioral and electrophysiological measures of brainstem and cortical responses elicited by non-linguistic, acoustic stimuli are mixed.

Hall and Jerger (1978) investigated central auditory function in stuttering, comparing performance on audiometric linguistic and non-linguistic procedures (e.g., acoustic reflexes, the Synthetic Sentence Identification (SSI) test, and the Staggered Spondaic Word test). The AWS demonstrated reduced performance on the SSI-Ipsilateral Competing Message test and the Staggered Spondaic Word test compared to the NFS, suggesting that AWS exhibit subtle central auditory function deficits at the level of the brainstem (Hall & Jerger, 1978). However, in a subsequent study, Hannley and Dorman (1982) used similar measures and found no differences between AWS and NFS. Kramer, Green, and Guitar (1987) reported that response accuracy on pure-tone, masking level difference tasks was reduced in AWS compared to normally fluent speakers. In contrast, Blood (1996) found no accuracy differences between the two groups on these tasks. Kramer et al. (1987) have suggested that inconsistent findings among these studies may be due to a continuum of auditory processing deficits in adults who stutter.

Building on behavioral investigations of central auditory function, researchers tested auditory brainstem responses (ABRs) in persons who stutter. A few studies reported differences in ABRs, such as prolonged wave latencies, in AWS compared to NFS (Blood & Blood, 1984; Smith, Blood, & Blood, 1990). However, other studies have demonstrated that ABRs in AWS are comparable to those in NFS (Decker, Healey, & Howe, 1982; Newman, Bunderson, & Brey, 1985; Stager, 1990). These conflicting findings have rendered it difficult to understand the involvement of auditory function for non-linguistic processing in stuttering.

Salmelin et al. (1998) utilized MEG to investigate cortical potentials elicited by pure tones presented during speech in AWS and NFS. The AWS exhibited atypical organization of auditory processing regions of the cortex compared to the NFS. Interhemispheric balance in the adults who stutter was altered during speech production, with changes occurring primarily in the left auditory cortex (Salmelin et al., 1998). However, these findings occurred during an overt speech task, making it difficult to know if results were solely due to processing of the pure tones or if the speech-motor production task might have also played a role in the atypical cortical activations. In a follow-up study, Biermann-Ruben et al. (2005) examined early cortical processing (N100m) in response to 1000 Hz tones and found no differences between AWS and NFS. However, in this study, only one tone frequency was presented, behavioral measures related to the auditory processing were not obtained, and measures of only the first 100 ms of the neural responses were reported. In another study of cortical activity elicited by non-linguistic auditory stimuli, Morgan, Cranford, and Burke (1997) measured ERPs in response to a pure-tone “oddball” paradigm. Their results indicated that fluent speakers exhibited larger P300 amplitudes in the right compared to the left hemisphere. However, in 5 out of the 8 AWS, P300 amplitudes were larger in the left hemisphere, suggesting auditory processing differences in AWS for non-linguistic pure-tone stimuli (Morgan et al., 1997). The findings of this study were limited to only two electrode sites (C3/4), and further, Morgan and colleagues did not report findings for earlier cortical potentials (N100, P200) or for behavioral performance measures.

The current study also focuses on non-linguistic auditory processing to determine whether basic auditory function is atypical in stuttering in the absence of language processing demands. The current experiment extends previous research by examining early perceptual processing ERPs, the N100 and P200, as well as a later, cognitive potential, the P300. Additionally, this design varies interstimulus intervals (ISIs), allowing for examination of rapid successive firing of neurons to determine whether the effects of reduced neuronal recovery time differ for AWS. Further, measures of behavioral responses of accuracy and reaction time are included to help constrain the interpretation of the ERP findings. Finally, previous findings have reported diversity in auditory processing among persons who stutter (e.g., Kramer et al., 1987; Stager, 1990). Therefore, the current study examines the correlation between behavioral performance and ERPs for individuals in addition to investigating the group comparisons. The information obtained from the dependent measures is briefly summarized below.
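The trial structure of a pure-tone oddball paradigm with two ISI conditions can be sketched as follows. This is an illustrative sketch only: the tone frequencies, target probability, trial count, and ISI values below are assumptions for demonstration, not the parameters of this study.

```python
import random

def make_oddball_sequence(n_trials=200, p_target=0.2, seed=1):
    """Generate a pseudorandom oddball trial list.

    Standards and targets differ only in pitch; the rare ("oddball")
    target tone is what elicits the P300. All values are illustrative.
    """
    rng = random.Random(seed)
    standard = {"type": "standard", "freq_hz": 1000}
    target = {"type": "target", "freq_hz": 2000}
    trials = []
    for _ in range(n_trials):
        tone = target if rng.random() < p_target else standard
        # Vary the interstimulus interval: a short and a long ISI
        # condition probe the recovery cycle of the responding neurons.
        isi_ms = rng.choice([700, 1400])
        trials.append({**tone, "isi_ms": isi_ms})
    return trials

trials = make_oddball_sequence()
n_targets = sum(t["type"] == "target" for t in trials)
print(f"{n_targets} targets out of {len(trials)} trials")
```

Behavioral accuracy and reaction time would then be scored against the `"target"` trials, while ERPs are averaged separately for standards and targets within each ISI condition.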

The N100 and P200 are event-related brain potentials evoked by a range of auditory stimuli, including clicks, speech, and abrupt changes in a continuous sound (Hyde, 1997; Rohrbaugh, Parasuraman, & Johnson, 1990). The N100 typically occurs about 100 ms post-stimulus onset with the P200 emerging between 175 ms and 225 ms post-onset (Hyde, 1997). The N100 and P200 are exogenous potentials, dependent on external factors, and are sensitive to the physical parameters of auditory stimuli (Luck, 2005). Investigation of these potentials can increase understanding of pre-cognitive processes elicited by non-linguistic auditory tones.

The P300 is a complex ERP component occurring between 300 ms and 600 ms post-stimulus onset in response to a low probability stimulus, often elicited by a simple oddball paradigm (Polich & Kok, 1995). P300 amplitudes are related to factors such as stimulus probability, quality, and duration, and are associated with attention and task relevance; ignored stimuli typically do not elicit a P300 (Rohrbaugh et al., 1990).

Length of interstimulus interval plays a role in determining P300 amplitudes, with longer ISIs eliciting larger amplitude waves (Polich, 1990). It is hypothesized that the P300 may be related to the psychological relative refractory period, that is, the time between two split-second decisions made in a row (Neville, Coffey, Holcomb, & Tallal, 1993; Polich, 1990; Rohrbaugh et al., 1990). The amplitude of the P300 is affected by attention and memory load and is subject to habituation and reduction in repetitive tasks, indicating that the P300 may reflect neuronal activity related to focusing on new and/or novel information and updating working memory (Knight & Nakada, 1998; Polich & Kok, 1995). The P300 is an endogenous potential, dependent on internal factors, and can provide insight into underlying cognitive processes (Luck, 2005). The variety of results in the literature suggests that the P300 may be a summation of numerous events in the specified time window contributing collectively to the formation of the waveform (Polich & Kok, 1995).

Rapid interstimulus intervals influence ERP amplitude due to the recovery cycle of neurons. ERPs are generated by large groups of neurons firing in synchrony (Nunez, 1995). If a subpopulation of the neurons has not recovered from the previous firing, the ERP component will be reduced in amplitude. For this reason, short interstimulus intervals tend to suppress ERP amplitude (Polich, 1990). ISI can have an effect on many ERP components, including the N100–P200 complex and the P300 (Neville et al., 1993; Polich, 1990).
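The suppressive effect of short ISIs can be illustrated with a toy recovery model (an illustrative sketch, not a model used in this study): if each neuron in the responding population recovers exponentially with some time constant, the fraction of the population ready to fire again, and hence the relative amplitude of the synchronized ERP, grows with the ISI.

```python
import math

def relative_amplitude(isi_ms, tau_ms=1000.0):
    """Toy model: fraction of a neural population recovered after isi_ms.

    Assumes simple exponential recovery with time constant tau_ms;
    both the form and the time constant are illustrative assumptions.
    """
    return 1.0 - math.exp(-isi_ms / tau_ms)

# A short ISI leaves part of the population refractory, so the
# synchronized response (and the recorded ERP amplitude) is smaller.
for isi in (700, 1400):
    print(f"ISI {isi} ms -> relative amplitude {relative_amplitude(isi):.2f}")
```

Under this sketch, doubling the ISI always yields a larger predicted amplitude, consistent with the direction of the ISI effects described above.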

Measures of behavioral accuracy indicate processing precision, and reaction times index speed of processing. By measuring these in conjunction with ERPs, we are better able to differentiate whether reduced accuracy in target tone detection or slowed reaction times are due to differences in early perceptual processing or in later stages of cognitive processing.

The purpose of the current experiment was to extend the findings of earlier work to better understand the potential role of non-linguistic auditory processing in stuttering. Using a pure-tone oddball paradigm with a short and a long interstimulus interval (ISI), we examined behavioral performance as well as the latencies and amplitudes of both early perceptual (N100, P200) and later cognitive cortical potentials (P300) in AWS and NFS. Measures of the relationship between behavior and neural activity provide a better understanding of the function of the auditory system in these two groups.


Participants

Participants were 11 adults who stutter (AWS) and 11 normally fluent speakers (NFS) who served as control participants. Participants ranged from 18 to 62 years old with 8 males and 3 females in each group. NFS were age- (±2 years), gender-, and education-matched (±2 years) with each AWS (Table 1). Ten participants were right-handed and one was left-handed in each group as determined by the Edinburgh Inventory for Assessment of Handedness (Oldfield, 1971). All participants had normal or

Accuracy

The NFS tended to perform with higher accuracy on the target tone detection task than the AWS, with mean (SE) percent correct of 94.85 (2.01) for the NFS and 85.69 (4.24) for the AWS, F(1, 20) = 3.82, p = .065, ηp² = .160, power = .622. Fig. 1 (top panel) further illustrates the accuracy results, comparing the percent correct obtained by the NFS and the AWS for the two ISI conditions.
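The reported effect size follows from the F statistic and its degrees of freedom via the standard identity ηp² = (F · df₁) / (F · df₁ + df₂). A quick check (Python sketch) reproduces the .160 reported above:

```python
def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

# Group effect on accuracy reported above: F(1, 20) = 3.82
eta = partial_eta_squared(3.82, 1, 20)
print(f"partial eta squared = {eta:.3f}")  # prints 0.160
```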

Reaction time

NFS displayed reaction times that were on average 57 ms faster compared to AWS for detecting target tones, F(1, 20) = 

Discussion

We examined non-linguistic, pure-tone auditory processing in AWS and NFS to determine whether behavioral and electrophysiological measures indicated auditory processing deficits in AWS. This study was designed to further our understanding about the potential role of non-linguistic auditory processing in stuttering. Our findings, which suggest that non-linguistic auditory processing is deficient in a small subset of AWS, have implications for theories of stuttering as discussed below.

Acknowledgements

We wish to thank John Spruill, Natalya Kaganovich, Kristel Kubart, and Bharath Chandrasekaran for their insights and helpful comments on this manuscript. This work was funded by a grant from the National Institute on Deafness and Other Communication Disorders (DC00559).

Amanda Hampton, MS, is a speech-language pathologist and a doctoral student in Speech, Language, and Hearing Science at Purdue University. Her research interests include fluency and language disorders, and specifically, neural systems and activations underlying these disorders.

References (39)

  • Stager, S. V. (1990). Heterogeneity in stuttering: Results from auditory brainstem response testing. Journal of Fluency Disorders.
  • American Electroencephalographic Society. (1994). Guideline thirteen: Guidelines for standard electrode placement nomenclature. Journal of Clinical Neurophysiology.
  • Blood, I. M. (1996). Disruptions in auditory and temporal processing in adults who stutter. Perceptual & Motor Skills.
  • Blood, I. M., et al. (1984). Relationship between stuttering severity and brainstem-evoked response testing. Perceptual & Motor Skills.
  • Bloodstein, O. (1995). A handbook on stuttering.
  • Cuadrado, E., et al. (2003). Atypical syntactic processing in individuals who stutter: Evidence from event-related brain potentials and behavioral measures. Journal of Speech, Language, and Hearing Research.
  • Guitar, B. (1998). Stuttering: An integrated approach to its nature and treatment.
  • Hall, J. W., et al. (1978). Central auditory function in stutterers. Journal of Speech and Hearing Research.

Christine Weber-Fox received her Ph.D. from Purdue University in 1989. In 1994 she completed a post-doctoral fellowship in cognitive neuroscience at the Salk Institute. She is currently an Associate Professor in the Department of Speech, Language, and Hearing Sciences at Purdue University. She studies language processing in normal and disordered development.
