Research Article | New Research, Sensory and Motor Systems

Temporal Integration of Auditory Information Is Invariant to Temporal Grouping Cues

Andrew S.K. Liu, Joji Tsunada, Joshua I. Gold and Yale E. Cohen
eNeuro 20 April 2015, 2 (2) ENEURO.0077-14.2015; DOI: https://doi.org/10.1523/ENEURO.0077-14.2015
Andrew S.K. Liu
1Bioengineering Graduate Group

Joji Tsunada
2Department of Otorhinolaryngology, Perelman School of Medicine

Joshua I. Gold
3Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104

Yale E. Cohen
2Department of Otorhinolaryngology, Perelman School of Medicine
3Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania 19104

Figures

Figure 1
Example auditory sequences characterized in terms of coherence and different interburst intervals (IBIs). A, Frequency direction (i.e., increasing) is easiest to discriminate when the stimulus is 100% coherent. As coherence decreases (60% in B, 0% in C), frequency direction becomes more ambiguous and can thus lead to more errors. A−C show increasing auditory sequences; decreasing sequences are analogous but have negative coherence values. D, A 100% coherence auditory sequence at three different IBIs: 10 ms (red), 60 ms (green), and 150 ms (blue).
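As an illustration of how a coherence-defined sequence might be constructed (a sketch only; the step rule, frequency units, and random-number handling here are our assumptions, not the paper's exact stimulus generation):

```python
import numpy as np

def make_sequence(n_bursts, coherence, rng=None):
    """Sketch of a coherence-defined tone-burst sequence.

    At each step, the frequency moves in the signal direction with
    probability |coherence| (the sign of `coherence` sets increasing
    vs decreasing); otherwise it steps at random. Frequencies are in
    arbitrary step units, purely for illustration.
    """
    rng = np.random.default_rng(rng)
    direction = np.sign(coherence) if coherence != 0 else 0
    freqs = [0.0]
    for _ in range(n_bursts - 1):
        if direction != 0 and rng.random() < abs(coherence):
            step = direction                 # signal step
        else:
            step = rng.choice([-1, 1])       # random step
        freqs.append(freqs[-1] + step)
    return np.array(freqs)

# 100% coherence: every step follows the signal direction
seq = make_sequence(100, 1.0, rng=0)
```

At 0% coherence every step is random, so the sequence carries no net direction signal, matching panel C.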

Figure 2

Task design. A, For the RT task, subjects indicated their choice (i.e., sequence direction) at any time after onset of the sequence. B, For the variable-duration task, subjects indicated their choice after sequence offset. C, For the hybrid task, subjects indicated, in separate response periods after sequence offset, the sequence direction and whether they heard the sequence as one sound or as discrete sounds. For all three tasks, subjects received feedback about frequency direction at the end of each trial (not shown).

Figure 3

Influence of IBI on reports of perceived grouping. Subjects reported whether they perceived the stimulus as one sound or as discrete sounds. The graph shows the proportion of trials in which each subject chose "one sound" as a function of IBI. The points indicate each subject's performance. Each curve is a logistic function fit to that subject's reports across four sessions. The gray line indicates the point at which 50% of trials were reported as "one sound", which defined the IBI threshold. Colors represent different subjects.
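The logistic fit and threshold described above can be sketched as follows (the data and parameterization here are illustrative, not the paper's; the authors' exact fitting procedure may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(ibi, alpha, beta):
    """P("one sound") as a function of IBI (ms). `alpha` is the IBI at
    which half of trials are reported as one sound (the IBI threshold),
    and `beta` sets the slope; longer IBIs favor "discrete sounds"."""
    return 1.0 / (1.0 + np.exp((ibi - alpha) / beta))

# Illustrative report proportions (made up, not the paper's data)
ibis = np.array([10.0, 30.0, 60.0, 100.0, 150.0])
p_one = np.array([0.95, 0.85, 0.50, 0.15, 0.05])

# Least-squares fit; the 50% point of the fitted curve is alpha_hat
(alpha_hat, beta_hat), _ = curve_fit(logistic, ibis, p_one, p0=[60.0, 15.0])
```

By construction, the fitted curve crosses 0.5 exactly at IBI = alpha_hat, which is how the gray line in the figure defines the threshold.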

Figure 4

Performance on the RT task. A, The fraction of trials in which subjects reported that a sequence of tone bursts was increasing in frequency, plotted as a function of signed coherence and IBI. Positive coherence values indicate that the sequence was increasing in frequency; negative values indicate that it was decreasing. B, The mean RT (i.e., the time between sequence onset and button press) as a function of signed coherence and IBI, on correct trials only (and all 0% coherence trials). C, As in B, but using signal RT (i.e., RT excluding cumulative IBI). The solid curves are simultaneous fits of the psychometric and chronometric data to a DDM (see Results). The psychometric data (A) show only the fit to the signal-RT data (C). In each panel, the points indicate performance data pooled across subjects. Colors represent different IBIs, as indicated in A.
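The signal-RT correction (RT minus cumulative IBI) can be sketched as below, assuming bursts of fixed duration separated by silent gaps; the burst length here is an illustrative assumption, not a value taken from the paper:

```python
def signal_rt(rt_ms, ibi_ms, burst_ms=50.0):
    """Remove cumulative silent time from a raw reaction time.

    Assumes a sequence of tone bursts of duration `burst_ms`, each
    followed by a silent gap of `ibi_ms` (both illustrative values).
    Counts the burst+gap cycles completed before the response and
    subtracts the silence they contributed, leaving time spent on
    signal (plus nondecision time).
    """
    cycle = burst_ms + ibi_ms
    n_gaps = int(rt_ms // cycle)       # silent gaps completed before response
    return rt_ms - n_gaps * ibi_ms
```

With IBI = 0 the correction is a no-op, which is why raw RT and signal RT converge as IBI shrinks.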

Figure 5

    Parameter values from fits of the basic DDM to the RT-task data. Each panel shows best-fitting values of bound height (A) or drift rate (B) plotted as a function of IBI for fits to data from individual subjects (black) or combined across all subjects (red). Dark lines/symbols indicate that the model fits were improved significantly by fitting the given parameter separately for each IBI condition (likelihood-ratio test, p < 0.01, Bonferroni-corrected for three parameters).
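The likelihood-ratio model comparison used in this and the following figures can be sketched as below; the log-likelihood values are placeholders, and how the degrees of freedom are counted is our assumption:

```python
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, df):
    """Likelihood-ratio test for nested model fits.

    Compares a reduced model (one shared parameter value across IBI
    conditions) against a fuller model (the parameter fit separately
    per IBI). `df` is the number of extra free parameters in the
    fuller model. Returns the p-value of the chi-squared statistic;
    compare it against a Bonferroni-corrected alpha, e.g.
    0.01 / n_parameters_tested, as in the figure legends.
    """
    stat = 2.0 * (loglik_full - loglik_reduced)
    return chi2.sf(stat, df)
```

Usage: `lr_test(-500.0, -490.0, df=2)` yields a very small p-value, so fitting that parameter per IBI would count as a significant improvement at the corrected threshold.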

Figure 6

    LATER model fits to signal RT data. A, Distributions of signal RT from 0%-coherence trials for one subject are plotted on a reciprobit plot: reciprocal RT versus percentage of cumulative frequency on a probit scale (Reddi et al., 2003), separately per IBI. Best-fitting values of the bound height (B) and mean rate-of-rise (C) of the LATER model (see Results and Materials and Methods for details) are plotted as a function of IBI for each subject and coherence (black/gray lines and data points). The data in black indicate that the model fits were improved significantly by fitting the given parameter separately for each IBI condition (likelihood-ratio test, p < 0.01, Bonferroni-corrected for two parameters). Shaded lines/symbols indicate that the model fits were not improved significantly. Red data points/lines represent the median values across all conditions.
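The reciprobit transform (reciprocal RT plotted against probit-scaled cumulative frequency) can be sketched as below; the plotting-position rule is a common convention, not necessarily the one used in the paper:

```python
import numpy as np
from scipy.stats import norm

def reciprobit_coords(rts_ms):
    """Coordinates for a reciprobit plot (cf. Reddi et al., 2003).

    x is reciprocal RT ("promptness"); y is the cumulative frequency
    on a probit (inverse normal CDF) scale. Uses the plotting
    positions (i - 0.5)/n to avoid probit(0) and probit(1). Under the
    LATER model, these points fall on a straight line.
    """
    rts = np.sort(np.asarray(rts_ms, dtype=float))
    n = len(rts)
    cum = (np.arange(1, n + 1) - 0.5) / n
    x = 1.0 / rts           # reciprocal RT
    y = norm.ppf(cum)       # probit-scaled cumulative frequency
    return x, y
```

The slope and intercept of the fitted line map onto the LATER model's rate-of-rise distribution and bound, which is why panels B and C can read those parameters off the fits.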

Figure 7

Psychophysical kernels. Kernels represent the average of the stimulus sequence (mean subtracted on each trial) presented on all 0%-coherence trials across subjects for increasing (red) and decreasing (blue) choices. A−C, Kernels computed per IBI condition, as indicated, and smoothed using a 21-sample moving mean. D, E, Kernels for the 60 ms IBI condition when subjects were told to emphasize accuracy or speed, respectively. Thick and broken lines are mean and standard error, respectively. Data are aligned relative to onset of the sequence. Asterisks indicate that the increasing and decreasing kernels were significantly different from one another for a given time bin (Mann−Whitney test, p < 0.05; Table 1, row p; computed on the raw, unsmoothed kernels). Black lines indicate best-fitting simulated kernels (see text for details). Solid vertical lines indicate median RT. Dashed vertical lines indicate the end of the integration time from the best-fitting simulated kernels.
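The kernel computation (per-trial mean-subtracted stimulus, averaged by choice, then smoothed with a 21-sample moving mean) can be sketched as below; the input layout and variable names are our assumptions:

```python
import numpy as np

def psychophysical_kernel(stimuli, choices, window=21):
    """Choice-conditioned stimulus average ("psychophysical kernel").

    `stimuli` is a trials x samples array of stimulus traces;
    `choices` holds one label per trial ('inc' or 'dec'). Each trace
    is mean-subtracted, traces are averaged within each choice, and
    the result is smoothed with a centered moving mean of length
    `window` (np.convolve zero-pads, so values near the edges are
    attenuated).
    """
    stim = np.asarray(stimuli, dtype=float)
    stim = stim - stim.mean(axis=1, keepdims=True)   # mean-subtract per trial
    box = np.ones(window) / window
    kernels = {}
    for label in ('inc', 'dec'):
        mask = np.asarray(choices) == label
        raw = stim[mask].mean(axis=0)                # average within choice
        kernels[label] = np.convolve(raw, box, mode='same')
    return kernels
```

On 0%-coherence trials the stimulus has no net drift, so any separation between the 'inc' and 'dec' kernels reflects which stimulus fluctuations drove the choice.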

Figure 8

Performance on the variable-duration task. A−C, Psychometric data are plotted as a function of listening duration for different coherences and IBIs, as indicated. Each data point reflects mean performance for all five subjects as a function of coherence and signal time (plotted in 0.2 s bins, up to 1.0 s, but fit using unbinned data). The solid curves are fits from the best-fitting model with two parameters: drift rate and accumulation leak. D, E, Best-fitting values of drift rate (D) and accumulation leak (E) plotted as a function of IBI for fits to data from individual subjects (black) or combined across all subjects (red). Dark lines and symbols indicate that the model fits were improved significantly by fitting the given parameter separately for each IBI condition (likelihood-ratio test, p < 0.01; Table 1, row q; Bonferroni-corrected for two parameters). Shaded lines and symbols indicate that the model fits were not improved significantly.
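The two-parameter model (drift rate plus accumulation leak) can be sketched as a leaky accumulator; the Euler discretization, time step, and parameter values below are illustrative, not the paper's fitted values:

```python
import numpy as np

def leaky_accumulation(evidence, drift, leak, dt=0.001):
    """Leaky accumulation of momentary evidence.

    Integrates dA = (drift * evidence - leak * A) * dt with a simple
    Euler step. With leak = 0 this is perfect integration; with
    leak > 0 the accumulator forgets older evidence and saturates
    near drift/leak for constant input.
    """
    a = 0.0
    trace = []
    for e in evidence:
        a += (drift * e - leak * a) * dt
        trace.append(a)
    return np.array(trace)
```

The leak parameter is what limits how much additional listening time helps, which is why the psychometric curves in A−C flatten with duration rather than improving indefinitely.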

Figure 9

Performance on the hybrid task. A, B, Psychometric data are plotted as a function of listening duration for trials in which the subject reported that the sequence was one sound (A) or a series of discrete sounds (B). The solid curves are fits from the best-fitting model with two parameters: drift rate and accumulation leak. C, D, Best-fitting values of drift rate (C) and accumulation leak (D) plotted as a function of perceptual grouping for fits to data from individual subjects (black) or combined across all subjects (red). The model fits were not improved by fitting the given parameter separately for each IBI condition (likelihood-ratio test, p > 0.1 in all cases; Table 1, row r; Bonferroni-corrected for two parameters).

Tables

Table 1. Statistical table

Row | Data structure        | Statistical test                                                  | Power
a   | Normal distribution   | Pearson correlation                                               | p < 0.01
b   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for three parameters  | p < 0.001
c   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for three parameters  | p < 0.01
d   | Normal distribution   | Likelihood-ratio test                                             | p > 0.24
e   | Normal distribution   | Likelihood-ratio test                                             | p > 0.1
f   | Normality not assumed | Mann−Whitney test                                                 | p < 0.01
g   | Normality not assumed | Kruskal−Wallis test                                               | p < 0.001
h   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for two parameters    | p < 0.01
i   | Normality not assumed | Kruskal−Wallis test                                               | p > 0.05
j   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for two parameters    | p < 0.01
k   | Normal distribution   | Likelihood-ratio test                                             | p = 0.2838
l   | Normality not assumed | Kruskal−Wallis test                                               | p = 0.011
m   | Normality not assumed | Kruskal−Wallis test                                               | p = 0.125
n   | Normal distribution   | Likelihood-ratio test                                             | p > 0.05
o   | Normality not assumed | Kruskal−Wallis test                                               | p > 0.05
p   | Normality not assumed | Mann−Whitney test                                                 | p < 0.05
q   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for two parameters    | p < 0.01
r   | Normal distribution   | Likelihood-ratio test, Bonferroni-corrected for two parameters    | p < 0.01

Keywords

  • auditory cortex
  • decision-making
  • scene analysis
  • speed-accuracy tradeoff
  • streaming

Copyright © 2022 by the Society for Neuroscience.
eNeuro eISSN: 2373-2822
