Research Article: Methods/New Tools, Novel Tools and Methods

A General Framework for Inferring Bayesian Ideal Observer Models from Psychophysical Data

Tyler S. Manning, Benjamin N. Naecker, Iona R. McLean, Bas Rokers, Jonathan W. Pillow and Emily A. Cooper
eNeuro 31 October 2022, 10 (1) ENEURO.0144-22.2022; https://doi.org/10.1523/ENEURO.0144-22.2022
Tyler S. Manning
1Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720
Benjamin N. Naecker
2Psychology, University of Texas at Austin, Austin, TX 78712
Iona R. McLean
1Herbert Wertheim School of Optometry and Vision Science, University of California, Berkeley, Berkeley, CA 94720
Bas Rokers
3Psychology, New York University–Abu Dhabi, Abu Dhabi, United Arab Emirates
Jonathan W. Pillow
4Princeton Neuroscience Institute, Department of Psychology, Princeton University, Princeton, NJ 08540
Emily A. Cooper
5Herbert Wertheim School of Optometry and Vision Science, Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720

Figures & Data

Figures

Figure 1.

Canonical Bayesian computation. This figure illustrates Bayes' rule, by which a posterior is the product of a prior (the observer's knowledge of the probability of encountering the stimulus) and a likelihood (the set of stimulus values associated with a given measurement). The posterior is scaled by the inverse of the marginal likelihood. Toolkit script: Fig1_BayesianDemo.m.
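
This product-and-normalize computation is straightforward to reproduce on a discretized stimulus axis. Below is a minimal MATLAB sketch with illustrative Gaussian parameters; it is a sketch of the idea, not the toolkit implementation in Fig1_BayesianDemo.m.

    % Canonical Bayesian computation on a stimulus grid (illustrative values).
    gauss = @(z, mu, s) exp(-(z - mu).^2 ./ (2*s.^2)) ./ (s .* sqrt(2*pi));
    x     = linspace(-10, 10, 1000);   % discretized stimulus axis
    prior = gauss(x, 0, 1.5);          % prior p(x)
    m     = 3;                         % a single sensory measurement
    lik   = gauss(m, x, 0.75);         % likelihood p(m | x), a function of x
    post  = prior .* lik;              % unnormalized posterior
    post  = post ./ trapz(x, post);    % scale by 1 / marginal likelihood p(m)
    plot(x, prior, x, lik, x, post); legend('prior', 'likelihood', 'posterior');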

Figure 2.

The canonical Bayesian computation as in Figure 1, but expanded to a set of likelihood functions. The prior (A) is multiplied by the likelihood defined by a given measurement (B, shown for m1 and m2) to obtain the posterior (C). Note that the shapes of the posteriors change for different likelihoods because the prior is non-Gaussian, but the posteriors are all drawn toward the highest-probability region of the prior. In each panel, the heat map values represent probability, with higher intensity mapping to higher probability. Identity lines are indicated with dashed black lines. Toolkit script: Fig2_2DBayesianDemo.m.

Figure 3.

The distribution of sensory estimates arises from the variability of the measurement values about their expected value (EV; i.e., the true stimulus value) across trials. A, On a given trial, a likelihood is defined around the observed measurement. Here, we plot the likelihood centered on the expected measurement value for a given true stimulus, as well as other possible likelihoods that occur over a set of trials. The prior is shown for reference. The upward arrow indicates the true stimulus that was used to generate the likelihoods. B, The resulting posteriors for each trial are shown, along with downward arrows indicating the estimates (x̂) derived from these posteriors. C, Over many trials, these estimates (now indicated as upward arrows) form an estimate distribution, which can be predicted for a Bayesian ideal observer with a given prior and amount of sensory noise. When the prior is Gaussian, there is a closed-form expression for this distribution. Toolkit script: Fig3_EstimateDistribution.m.
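
The closed form mentioned here is worth spelling out for the Gaussian case: with a prior N(ν, γ²) and measurement noise SD σ, the BLS estimate is x̂ = w·m + (1 − w)·ν, where w = γ²/(γ² + σ²), so x̂ is itself Gaussian with mean w·x + (1 − w)·ν and SD w·σ. The MATLAB sketch below (illustrative parameters, independent of the toolkit script) checks a simulation against this prediction.

    % Estimate distribution for a Gaussian prior N(nu, gamma^2): simulate
    % many trials and overlay the closed-form Gaussian prediction.
    nu = 0; gam = 1.5; sigma = 1; xTrue = 3;    % illustrative values
    w  = gam^2 / (gam^2 + sigma^2);             % shrinkage weight on m
    m  = xTrue + sigma .* randn(1e5, 1);        % measurements across trials
    xHat = w .* m + (1 - w) .* nu;              % BLS estimates per trial
    histogram(xHat, 'Normalization', 'pdf'); hold on;
    mu = w*xTrue + (1 - w)*nu; sd = w*sigma;    % closed-form mean and SD
    xs = linspace(mu - 4*sd, mu + 4*sd, 200);
    plot(xs, exp(-(xs - mu).^2 ./ (2*sd^2)) ./ (sd*sqrt(2*pi)));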

Figure 4.

Graphical illustration of computing the observer's psychometric curve for a 2AFC task. A, Computing a single point on the psychometric curve when x1 = x2 = 3, for measurement noise variances σ²m1 = 0.75 and σ²m2 = 0.5 and a prior with ν = 0 and γ = 1.5. The dashed line (top) shows the measurement distribution p(m1|x1) and the solid line (right) shows the measurement distribution p(m2|x2). The 2D grayscale image shows the joint distribution of observer measurements given the stimuli x1 and x2, formed by the product of the two measurement distributions along the top and right. The white diagonal line is the observer's decision boundary, corresponding to measurement values for which the inferred speeds are equal. The probability that the observer reports "yes" (i.e., that x2 exceeded x1) is the probability mass above the decision boundary (point "A" in panel D). B, Same as panel A, but with equal noise variances σ²m1 = σ²m2 = 0.64. C, Same as panel A, but with noise variances σ²m1 = 0.5 and σ²m2 = 0.75. D, Full psychometric curves for the noise variances used in panels A–C, showing the probability that the observer reports "yes" as a function of the stimulus x2. The points labeled A, B, and C represent the summed probability above the decision boundary in panels A–C.
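
The point-by-point construction described here can also be approximated by Monte Carlo: draw measurement pairs from the two distributions, apply the Bayesian shrinkage to each, and count the fraction of trials falling above the decision boundary. A minimal MATLAB sketch under the Gaussian-prior assumptions of this figure (panel A parameters; a numerical stand-in for the analytical expressions, not the paper's implementation):

    % Monte Carlo approximation of the 2AFC psychometric curve in panel D,
    % assuming a Gaussian prior and BLS (posterior-mean) estimates.
    nu = 0; gam = 1.5;                           % prior from panel A
    s1 = sqrt(0.75); s2 = sqrt(0.5);             % noise SDs from panel A
    w1 = gam^2/(gam^2 + s1^2); w2 = gam^2/(gam^2 + s2^2);
    x1 = 3; x2 = linspace(0, 6, 25); n = 1e5;
    pYes = zeros(size(x2));
    for k = 1:numel(x2)
        m1 = x1 + s1 .* randn(n, 1); m2 = x2(k) + s2 .* randn(n, 1);
        xh1 = w1 .* m1 + (1 - w1) .* nu;         % inferred value of x1
        xh2 = w2 .* m2 + (1 - w2) .* nu;         % inferred value of x2
        pYes(k) = mean(xh2 > xh1);               % report "yes": x2 judged larger
    end
    plot(x2, pYes, '-o'); xlabel('x_2'); ylabel('p(report ''yes'')');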

Figure 5.

Prior and posterior defined by a mixture of Gaussian components. A, The prior of a Bayesian observer (dark red line) can be modeled as a mixture of Gaussian components (light red lines). B, When combined with a Gaussian likelihood, the resulting posterior is also a mixture of Gaussians. As with the posterior resulting from a single Gaussian prior, the mixture-of-Gaussians posterior is biased relative to the likelihood. Likelihoods are shaded here for visual clarity. Toolkit script: Fig5_MoGprior.m.
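
The fact that the posterior stays in the mixture-of-Gaussians family has a convenient closed form: each prior component (wi, νi, γi) undergoes the standard conjugate-Gaussian update, and its weight is rescaled by how well that component predicts the measurement. A minimal MATLAB sketch with illustrative parameters (not the toolkit's values):

    % A MoG prior times a Gaussian likelihood yields a MoG posterior:
    % per-component conjugate updates plus predictive reweighting.
    gauss = @(z, mu, s) exp(-(z - mu).^2 ./ (2*s.^2)) ./ (s .* sqrt(2*pi));
    w = [0.5 0.5]; nuC = [0 0]; gamC = [2 0.6];  % prior components
    m = 2; sigma = 0.8;                          % measurement, likelihood SD
    postVar = (gamC.^2 .* sigma^2) ./ (gamC.^2 + sigma^2);   % component variances
    postMu  = (gamC.^2 .* m + sigma^2 .* nuC) ./ (gamC.^2 + sigma^2);
    postW   = w .* gauss(m, nuC, sqrt(gamC.^2 + sigma^2));   % predictive weights
    postW   = postW ./ sum(postW);
    x = linspace(-8, 8, 1000);
    posterior = sum(postW' .* gauss(x, postMu', sqrt(postVar)'), 1);
    plot(x, posterior);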

Figure 6.

An extension of Figure 4 to a long-tailed prior defined by a mixture of Gaussians (γ1 = 2, γ2 = 0.6, ν1 = ν2 = 0, w1 = w2 = 0.5), similar in appearance to the prior in Figure 7A. Here, the decision boundary representing T(m1, m2) is nonlinear because the different components of the prior have different levels of influence on the percept as m varies. A, As in Figure 4, the 2D grayscale image shows the joint distribution of the observer's measurements given the stimuli x1 and x2, formed by the product of the two measurement distributions along the top and right. The white line is the observer's decision boundary. Here, x1 = x2 = 3, for measurement noise variances σ²m1 = 0.75 and σ²m2 = 0.5. B, Same as panel A, but with equal noise variances σ²m1 = σ²m2 = 0.64. C, Same as panel A, but with noise variances σ²m1 = 0.5 and σ²m2 = 0.75. D, Full psychometric curves for the noise variances used in panels A–C, showing the probability that the observer reports "yes" as a function of the stimulus x2. The points labeled A, B, and C represent the summed probability above the decision boundary in panels A–C. Toolkit script: Fig6_MoGGauss_graphicalDemo.m.
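
The nonlinearity of T(m1, m2) can be visualized directly: compute the BLS estimate under the MoG prior for each possible measurement, then trace where the two inferred values are equal. A numerical MATLAB sketch using the parameters in this caption (an illustration, not the toolkit's implementation):

    % Trace the nonlinear decision boundary T(m1, m2) under a MoG prior:
    % the boundary is where the BLS estimates from m1 and m2 agree.
    gauss = @(z, mu, s) exp(-(z - mu).^2 ./ (2*s.^2)) ./ (s .* sqrt(2*pi));
    w = [0.5 0.5]; nuC = [0 0]; gamC = [2 0.6];  % prior from the caption
    bls = @(m, sig) sum(w .* gauss(m, nuC, sqrt(gamC.^2 + sig^2)) .* ...
          (gamC.^2 .* m + sig^2 .* nuC) ./ (gamC.^2 + sig^2)) ./ ...
          sum(w .* gauss(m, nuC, sqrt(gamC.^2 + sig^2)));
    s1 = sqrt(0.75); s2 = sqrt(0.5);             % noise SDs from panel A
    ms  = linspace(-6, 6, 201);
    xh1 = arrayfun(@(m) bls(m, s1), ms);         % inferred value from m1
    xh2 = arrayfun(@(m) bls(m, s2), ms);         % inferred value from m2
    contour(ms, ms, xh2' - xh1, [0 0]);          % zero contour = boundary
    xlabel('m_1'); ylabel('m_2');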

Figure 7.

Two example methods for reducing the number of parameters to optimize when inferring an observer's prior. A, A leptokurtic prior centered on zero, formed by a mixture of zero-mean Gaussian components. B, A skewed prior formed by a mixture of Gaussian components with fixed positions and widths. Toolkit script: Fig7_MoGConstrainedFitting.m.
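
Both constructions amount to fixing most mixture parameters in advance so that only the weights remain free during optimization. A minimal MATLAB sketch of the two schemes (the specific component placements and weights below are illustrative, not the toolkit's values):

    gauss = @(z, mu, s) exp(-(z - mu).^2 ./ (2*s.^2)) ./ (s .* sqrt(2*pi));
    x = linspace(-10, 10, 1000);
    % (A) Leptokurtic, zero-centered: zero-mean components with fixed,
    % log-spaced SDs; only the weights wA would be fit.
    sds = logspace(log10(0.3), log10(4), 5)';
    wA  = [0.35 0.25 0.20 0.12 0.08]';
    priorA = sum(wA .* gauss(x, 0, sds), 1);
    % (B) Skewed: fixed, evenly spaced means and a common fixed SD; again
    % only the weights wB are free.
    mus = (-6:2:6)';
    wB  = exp(-0.4 * (1:numel(mus))'); wB = wB ./ sum(wB);
    priorB = sum(wB .* gauss(x, mus, 1.5), 1);
    plot(x, priorA, x, priorB); legend('leptokurtic (A)', 'skewed (B)');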

Figure 8.

Mixture-of-Gaussians model fitting to non-Gaussian priors. A, Inferring the shape of a Cauchy prior from a set of 1000 point estimates. Top left, True prior in red and inferred prior in dashed black. Bottom left, The same, but on a semilog axis. Top right, Posteriors for a set of stimuli and measurements, as well as the BLS estimate x̂BLS for each posterior (green line). Bottom right, Set of posteriors and x̂BLS values inferred from the data using the mixture-of-Gaussians model. B, Inferring the shape of a bimodal prior from a set of 1000 point estimates. Conventions are the same as in panel A. A slight gamma correction has been applied to the set of posteriors shown in the 2D plots for visibility. Toolkit scripts: Fig8_MoGtoNonGauss.m and Fig8_MoGtoNonGauss2.m for panels A and B, respectively.

Figure 9.

Performance of approximations for fitting heavy-tailed priors. A, Diagram illustrating the pipeline for comparing the mixture of Gaussians (MoG) approximation and a single Gaussian (SG) to a full numerical evaluation of two-alternative forced choice data generated with a MoG prior. B, C, Scatter plots illustrate the relationship between the numerical evaluation of the MoG prior model and the SG and approximate MoG approaches. Black circles indicate the points corresponding to the estimated psychometric function values shown in panel A for the SG and MoG approximations. D, Square root of the mean squared error (RMS error) for the MoG analytical approximation and the single Gaussian approximation, summarized over 20 bins of the numerical data. E, Mean signed error distributions for both approximations. Note that axis ranges are set to match Figure 10 for comparison. Toolkit script: Fig9_MoGErrorAnalysis.m.

Figure 10.

Performance of the analytical approximation in fitting bimodal priors. A, Diagram illustrating the pipeline for comparing the mixture of Gaussians (MoG) approximation and a single Gaussian (SG) to a full numerical evaluation of two-alternative forced choice data generated with a MoG prior. B, C, Scatter plots illustrate the relationship between the numerical evaluation of the MoG prior model and the SG and approximate MoG approaches. Black circles indicate the points corresponding to the estimated psychometric function values shown in panel A for the SG and MoG approximations. D, Square root of the mean squared error (RMS error) for the MoG analytical approximation and the single Gaussian approximation, summarized over 20 bins of the numerical data. E, Mean signed error distributions for both approximations. Toolkit script: Fig10_MoGErrorAnalysis2.m.

Tables

Table 1

General notation

    Value                          Notation
    Stimulus value                 x
    Sensory measurement            m
    Stimulus estimate              x̂
    Response                       r
    Likelihood SD                  σ
    Prior mean, SD                 ν, γ
    Posterior mean, SD             μpost, σpost

Table 2

Mixture of Gaussians notation

    Value                          Notation
    Weight (prior component i)     wi
    Mean (prior component i)       νi
    SD (prior component i)         γi

Extended Data

Extended Data 1

    The archive bayesIdealObserverMoG.zip contains MATLAB code for generating the figures in this document, for performing the model comparison presented in Figures 9 and 10, and for fitting psychophysical data from stimulus estimation and 2AFC tasks. Download Extended Data 1, ZIP file.
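
As a quick-start note, assuming the archive has been downloaded into the current MATLAB working directory (the script names are those listed in the figure captions above; the extracted folder layout is an assumption):

    % Extract the toolkit and run one of the figure demos.
    unzip('bayesIdealObserverMoG.zip');
    addpath(genpath(pwd));     % make the extracted scripts visible on the path
    Fig1_BayesianDemo;         % reproduces Figure 1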



Keywords

  • ideal observer models
  • perception
  • Bayesian inference

