A tutorial on adaptive design optimization
Introduction
Imagine an experiment in which each and every stimulus was custom-tailored to be maximally informative about the question of interest, so that there were no wasted trials, participants, or redundant data points. Further, what if the choice of design variables in the experiment (e.g., stimulus properties and combinations, testing schedule, etc.) could evolve in real time as data were collected, taking advantage of information about each response the moment it is acquired (and possibly altering the course of the experiment), rather than waiting until the experiment is over and then deciding whether to conduct a follow-up?
The ability to fine-tune an experiment on the fly makes it possible to identify and capitalize on individual differences as the experiment progresses, presenting each participant with stimuli that match a particular response pattern or ability level. More concretely, in a decision making experiment, each participant can be given choice options tailored to her or his response preferences, rather than giving every participant the same, pre-selected list of choice options. As another example, in an fMRI experiment investigating the neural basis of decision making, one could instantly analyze and evaluate each image that was collected and adjust the next stimulus accordingly, potentially reducing the number of scans while maximizing the usefulness of each scan.
The implementation of such idealized experiments sits in stark contrast to the current practice in much of psychology of using a single design, chosen at the outset, throughout the course of the experiment. Typically, stimulus creation and selection are guided by heuristic norms. Strategies intended to improve the informativeness of an experiment, such as crossing all levels of the independent variables (e.g., three levels of task difficulty combined with five levels of stimulus duration), actually work against efficiency because it is rare for all combinations to be equally informative. Making matters worse, equal numbers of participants are usually allotted to each combination of treatments for statistical convenience, even those treatments that may not be informative. Noisy data are often combated in a brute-force way by simply collecting more of them (this is the essence of a power analysis). These practices are not the most efficient use of the time, money, and participants needed to collect quality data.
A better, more efficient way to run an experiment would be to dynamically alter the design in response to observed data. The optimization of experimental designs has a long history in statistics dating back to the 1950s (e.g., Atkinson & Donev, 1992; Atkinson & Fedorov, 1975; Berry, 2006; Box & Hill, 1967; Lindley, 1956). Psychometricians have been doing this for decades in computerized adaptive testing (e.g., Weiss & Kingsbury, 1984), and psychophysicists have developed their own adaptive tools (e.g., Cobo-Lewis, 1997; Kontsevich & Tyler, 1999). The major hurdle in applying adaptive methodologies more broadly has been computational: quantitative tools for identifying the optimal experimental design when testing formal models of cognition have not been available. However, recent advances in statistical computing (Doucet et al., 2001; Robert & Casella, 2004) have laid the groundwork for a paradigmatic shift in the science of data collection. The resulting new methodology, dubbed adaptive design optimization (ADO; Cavagnaro, Myung, Pitt, & Kujala, 2010), has the potential to benefit experimentation broadly in cognitive science and beyond.
In this tutorial, we introduce the reader to adaptive design optimization. The tutorial is intended to serve as a practical guide to applying the technique to simple cognitive models. As such, it assumes a rudimentary familiarity with cognitive modeling: how to implement quantitative models in a programming or graphical language, how to use maximum likelihood estimation to determine parameter values, and how to apply model selection methods to discriminate models. Readers with little familiarity with these techniques might find Section 3 difficult to follow, but should otherwise be able to understand most of the other sections. We begin by reviewing approaches to improving inference in cognitive modeling. Next we describe adaptive design optimization itself, covering its computational and implementation details. Finally, we present an example application of the methodology for designing experiments to discriminate simple models of memory retention. Readers interested in more technical treatments of the material should consult Cavagnaro et al. (2010) and Myung and Pitt (2009).
Not all experimental designs are created equal
To illustrate the importance of optimizing experimental designs, suppose that a researcher is interested in empirically discriminating between formal models of memory retention. The topic of retention has been studied for over a century. Years of research have shown that a person’s ability to remember information just learned drops quickly for a short time after learning and then levels off as more and more time elapses. The simplicity of this data pattern has led to the introduction of many competing models, each positing a different functional form for the rate of forgetting; two of the most frequently compared forms, the power and exponential models, are given below.
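In one common parameterization (as in Cavagnaro et al., 2010; exact forms vary somewhat across the literature), the two models predict the probability of correct recall after a retention interval $t$ as

$$p_{\text{POW}}(t) = a\,(t+1)^{-b}, \qquad p_{\text{EXP}}(t) = a\,e^{-bt}, \qquad 0 < a \le 1,\; b > 0,$$

where $a$ governs initial memory strength and $b$ the rate of forgetting. Because the two curves can mimic each other closely at some lags and diverge at others, which retention intervals the experimenter probes (the design) strongly affects how discriminable the models are.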
The nuts and bolts of adaptive design optimization
In this section we describe the theoretical and computational aspects of ADO in greater detail. The section is intended for readers who are interested in applying the technique to their own cognitive modeling. Our goal is to provide readers with the essentials of implementing ADO in their own experiments. Fig. 3 shows a schematic diagram of the ADO framework, which involves a series of steps. In what follows we discuss each step in turn; a concrete sketch of the design optimization step is given below.
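As a minimal sketch of the design optimization step, the self-contained C++ program below scores each candidate retention interval by the mutual information between the model indicator (POW vs. EXP) and a single binary recall outcome, then selects the interval that maximizes it. This is a toy grid approximation under assumed priors and parameter grids, not the authors' implementation, which relies on simulation-based algorithms such as the sequential Monte Carlo methods cited above.

```cpp
// Score each candidate lag t by the mutual information I(M; Y | t) between
// the model indicator M (0 = POW, 1 = EXP) and the Bernoulli outcome Y,
// then pick the argmax. Grids and priors are illustrative assumptions.
#include <cmath>
#include <cstdio>
#include <vector>

struct Hyp { int model; double a, b, w; };  // w = prior/posterior weight

// Predicted recall probability at lag t under one parameterized model.
double predict(const Hyp& h, double t) {
    return h.model == 0 ? h.a * std::pow(t + 1.0, -h.b)  // POW: a(t+1)^(-b)
                        : h.a * std::exp(-h.b * t);      // EXP: a e^(-bt)
}

// Mutual information I(M; Y | t), in nats, for a single binary observation.
double utility(const std::vector<Hyp>& hs, double t) {
    double pm[2] = {0.0, 0.0}, p1[2] = {0.0, 0.0};
    for (const auto& h : hs) { pm[h.model] += h.w; p1[h.model] += h.w * predict(h, t); }
    p1[0] /= pm[0]; p1[1] /= pm[1];               // p(Y=1 | M, t)
    double py1 = pm[0] * p1[0] + pm[1] * p1[1];   // p(Y=1 | t)
    double info = 0.0;
    for (int m = 0; m < 2; ++m)
        for (int y = 0; y < 2; ++y) {
            double pym = y ? p1[m] : 1.0 - p1[m];
            double py  = y ? py1   : 1.0 - py1;
            if (pym > 1e-12) info += pm[m] * pym * std::log(pym / py);
        }
    return info;
}

int main() {
    // Uniform prior over the two models and a coarse parameter grid.
    std::vector<Hyp> hs;
    for (int m = 0; m < 2; ++m)
        for (double a = 0.5; a < 1.001; a += 0.1)
            for (double b = 0.05; b < 1.001; b += 0.05)
                hs.push_back({m, a, b, 1.0});
    for (auto& h : hs) h.w = 1.0 / hs.size();

    double best_t = 1.0, best_u = -1.0;
    for (double t = 1.0; t <= 100.0; t += 1.0) {
        double u = utility(hs, t);
        if (u > best_u) { best_u = u; best_t = t; }
    }
    std::printf("Most informative lag under the prior: t = %.0f (U = %.4f nats)\n",
                best_t, best_u);
}
```

In a real application the same utility is recomputed after every trial, because the posterior weights change as data arrive; that loop is sketched in the illustrative example below.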
Illustrative example
To further illustrate how ADO works, we will demonstrate its implementation in a simulation experiment intended to discriminate between power and exponential models of retention. The effectiveness of ADO for discriminating between these models has been demonstrated in simulation (Cavagnaro et al., 2010) and in experiments with human participants (Cavagnaro et al., 2009; Cavagnaro et al., 2011). The intention of this demonstration is to provide an easy-to-understand companion to the technical material in the preceding section; a toy version of the full simulation loop is sketched below.
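The sketch below (again a grid-based toy, not the C++ implementation mentioned in the acknowledgments) runs ten simulated ADO trials: on each trial it selects the most informative lag, draws a response from a hypothetical "true" exponential participant (a = 0.8, b = 0.08, an arbitrary illustrative choice), and updates the posterior over models and parameters by Bayes' rule. The predict and utility functions are identical to those in the previous sketch.

```cpp
// Toy ADO simulation: alternate design optimization, data collection from a
// simulated participant, and Bayesian updating. predict() and utility() are
// the same as in the previous sketch; the generating process is an assumption.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Hyp { int model; double a, b, w; };

double predict(const Hyp& h, double t) {
    return h.model == 0 ? h.a * std::pow(t + 1.0, -h.b) : h.a * std::exp(-h.b * t);
}

double utility(const std::vector<Hyp>& hs, double t) {
    double pm[2] = {0.0, 0.0}, p1[2] = {0.0, 0.0};
    for (const auto& h : hs) { pm[h.model] += h.w; p1[h.model] += h.w * predict(h, t); }
    p1[0] /= pm[0]; p1[1] /= pm[1];
    double py1 = pm[0] * p1[0] + pm[1] * p1[1];
    double info = 0.0;
    for (int m = 0; m < 2; ++m)
        for (int y = 0; y < 2; ++y) {
            double pym = y ? p1[m] : 1.0 - p1[m], py = y ? py1 : 1.0 - py1;
            if (pym > 1e-12) info += pm[m] * pym * std::log(pym / py);
        }
    return info;
}

int main() {
    std::vector<Hyp> hs;
    for (int m = 0; m < 2; ++m)
        for (double a = 0.5; a < 1.001; a += 0.1)
            for (double b = 0.05; b < 1.001; b += 0.05)
                hs.push_back({m, a, b, 1.0});
    for (auto& h : hs) h.w = 1.0 / hs.size();

    std::mt19937 rng(1);
    for (int trial = 1; trial <= 10; ++trial) {
        // Step 1: choose the lag with maximal expected information gain.
        double best_t = 1.0, best_u = -1.0;
        for (double t = 1.0; t <= 100.0; t += 1.0) {
            double u = utility(hs, t);
            if (u > best_u) { best_u = u; best_t = t; }
        }
        // Step 2: observe a response from the simulated (exponential) participant.
        int y = std::bernoulli_distribution(0.8 * std::exp(-0.08 * best_t))(rng);
        // Step 3: Bayes update of every (model, parameter) hypothesis.
        double z = 0.0;
        for (auto& h : hs) { double p = predict(h, best_t); h.w *= y ? p : 1.0 - p; z += h.w; }
        for (auto& h : hs) h.w /= z;
        // Report the posterior probability of the (true) exponential model.
        double p_exp = 0.0;
        for (const auto& h : hs) if (h.model == 1) p_exp += h.w;
        std::printf("trial %2d: t = %3.0f, y = %d, P(EXP | data) = %.3f\n",
                    trial, best_t, y, p_exp);
    }
}
```

Running the loop shows the characteristic ADO behavior: the chosen lags shift as the posterior sharpens, and the posterior probability of the data-generating model tends toward 1 as trials accumulate.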
Limitations
While ADO has the potential to significantly improve the efficiency of data collection in the psychological sciences, it is important that the reader be aware of the assumptions and limitations of the methodology. First, not all design variables in an experiment can be optimized with ADO. They must be quantifiable in such a way that the likelihood function depends explicitly on the values of the design variables being optimized (Myung & Pitt, 2009, p. 511). Consequently, ADO is not applicable to design variables that have no explicit representation in the likelihood function, as the brief sketch below illustrates.
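For instance, a retention interval is optimizable because it appears explicitly in the likelihood; a variable such as the wording of task instructions has no such representation and so falls outside ADO's scope (the instruction-wording example is our illustration, not the paper's). A minimal sketch, assuming the exponential model above:

```cpp
// The design variable t (retention interval) enters the Bernoulli likelihood
// through the model's predicted recall probability, so p(y | a, b, t) is an
// explicit function of t and can be optimized over.
#include <cmath>
#include <cstdio>

double likelihood(int y, double a, double b, double t) {
    double p = a * std::exp(-b * t);   // predicted recall probability at lag t
    return y == 1 ? p : 1.0 - p;
}

int main() {
    std::printf("p(y=1 | a=0.8, b=0.08, t=10) = %.3f\n",
                likelihood(1, 0.8, 0.08, 10.0));
}
```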
Conclusions
In this article, we provided a tutorial exposition of adaptive design optimization (ADO). ADO allows users to intelligently choose experimental stimuli on each trial of an experiment in order to maximize the expected information gain provided by each outcome. We began the tutorial by contrasting ADO with the traditional, non-adaptive heuristic approach to experimental design, then presented the nuts and bolts of a practical implementation of ADO, and finally illustrated an application of the method to discriminating power and exponential models of memory retention.
Acknowledgments
This research is supported by National Institutes of Health Grant R01-MH093838 to J.I.M. and M.A.P. The C++ code for the illustrative example is available upon request from the authors.
References (64)

- et al. (1996). Optimizing simulated annealing schedule with genetic programming. European Journal of Operational Research.
- Kontsevich & Tyler (1999). Bayesian adaptive estimation of psychometric slope and threshold. Vision Research.
- et al. (2006). Bayesian adaptive estimation: the next dimension. Journal of Mathematical Psychology.
- et al. (2006). Bayesian adaptive estimation of threshold versus contrast external noise functions: the quick method. Vision Research.
- et al. (2012). A tutorial on approximate Bayesian computation. Journal of Mathematical Psychology.
- et al. (2006). Accumulative prediction error and the selection of time series models. Journal of Mathematical Psychology.
- Information theory and an extension of the maximum likelihood principle.
- et al. (2006). Bayesian-optimal design via interacting particle systems. Journal of the American Statistical Association.
- et al. (2003). An introduction to MCMC for machine learning. Machine Learning.
- Atkinson & Donev (1992). Optimum Experimental Designs.
- Atkinson & Fedorov (1975). Optimal design: experiments for discriminating between several models. Biometrika.
- Adaptive approximate Bayesian computation. Biometrika.
- Dynamic Programming (reprint edition).
- Berry (2006). Bayesian clinical trials. Nature Reviews.
- Box & Hill (1967). Discrimination among mechanistic models. Technometrics.
- Model Selection and Multi-Model Inference: A Practical Information-Theoretic Approach.
- Bayes and Empirical Bayes Methods for Data Analysis.
- Optimal decision stimuli for risky choice experiments: an adaptive approach. Management Science.
- Cavagnaro, Myung, Pitt, & Kujala (2010). Adaptive design optimization: a mutual information based approach to model discrimination in cognitive science. Neural Computation.
- Discriminating among probability weighting functions using adaptive design optimization. Journal of Risk and Uncertainty.
- Model discrimination through adaptive experimentation. Psychonomic Bulletin & Review.
- Adaptive design optimization in experiments with people. Advances in Neural Information Processing Systems.
- Bayesian experimental design: a review. Statistical Science.
- Cobo-Lewis (1997). An adaptive psychophysical method for subject classification. Perception & Psychophysics.
- Improving generalization with active learning. Machine Learning.
- Active learning with statistical models. Journal of Artificial Intelligence Research.
- Elements of Information Theory.
- Doucet et al. (2001). Sequential Monte Carlo Methods in Practice.
- Bayesian Data Analysis.
- Bayesian Methods: A Social and Behavioral Sciences Approach.
- A tutorial introduction to the minimum description length principle.
- Fundamentals of Item Response Theory.