Standard errors and confidence intervals in within-subjects designs: generalizing Loftus and Masson (1994) and avoiding the biases of alternative accounts

Psychon Bull Rev. 2012 Jun;19(3):395-404. doi: 10.3758/s13423-012-0230-1.

Abstract

Repeated measures designs are common in experimental psychology. Because of the correlational structure in these designs, the calculation and interpretation of confidence intervals are nontrivial. One solution was provided by Loftus and Masson (Psychonomic Bulletin & Review 1:476-490, 1994). This solution, although widely adopted, has the limitation of implying same-size confidence intervals for all factor levels, and therefore does not allow for the assessment of variance homogeneity assumptions (i.e., the circularity assumption, which is crucial for the repeated measures ANOVA). This limitation and the method's perceived complexity have sometimes led scientists to use a simplified variant, based on a per-subject normalization of the data (Bakeman & McArthur, Behavior Research Methods, Instruments, & Computers 28:584-589, 1996; Cousineau, Tutorials in Quantitative Methods for Psychology 1:42-45, 2005; Morey, Tutorials in Quantitative Methods for Psychology 4:61-64, 2008; Morrison & Weaver, Behavior Research Methods, Instruments, & Computers 27:52-56, 1995). We show that this normalization method leads to biased results and is uninformative with regard to circularity. Instead, we provide a simple, intuitive generalization of the Loftus and Masson method that allows for assessment of the circularity assumption.
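To make the two approaches discussed in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) of the classic Loftus and Masson (1994) within-subjects confidence interval and of the per-subject normalization variant (Cousineau, 2005, with Morey's 2008 correction) that the authors argue is biased. The data, sample sizes, and alpha level are hypothetical, and NumPy/SciPy are assumed to be available.

```python
# Illustrative sketch only: compares the Loftus & Masson (1994) CI half-width,
# which is identical for all conditions, with per-condition standard errors
# from per-subject normalized data (Cousineau, 2005; Morey, 2008).
# All data below are hypothetical and used purely for demonstration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_cond = 12, 3
alpha = 0.05

# Hypothetical repeated-measures data: rows = subjects, columns = conditions,
# with a shared per-subject offset to create the typical correlational structure.
data = (rng.normal(loc=[5.0, 5.5, 6.0], scale=1.0, size=(n_subj, n_cond))
        + rng.normal(scale=2.0, size=(n_subj, 1)))

grand = data.mean()
subj_means = data.mean(axis=1, keepdims=True)
cond_means = data.mean(axis=0, keepdims=True)

# --- Loftus & Masson (1994): CI half-width based on the subject-by-condition
# interaction mean square from the within-subjects ANOVA. Note that this yields
# the same half-width for every condition, the limitation noted in the abstract.
resid = data - subj_means - cond_means + grand          # interaction residuals
df_int = (n_subj - 1) * (n_cond - 1)
ms_int = (resid ** 2).sum() / df_int
t_crit = stats.t.ppf(1 - alpha / 2, df_int)
lm_halfwidth = t_crit * np.sqrt(ms_int / n_subj)
print("Loftus-Masson CI half-width (all conditions):", lm_halfwidth)

# --- Per-subject normalization variant: subtract each subject's mean, add back
# the grand mean, then compute per-condition standard errors. Morey (2008)
# rescales the variances by J/(J-1) to correct bars that would otherwise be
# too narrow.
normalized = data - subj_means + grand
var_norm = normalized.var(axis=0, ddof=1) * n_cond / (n_cond - 1)
se_norm = np.sqrt(var_norm / n_subj)
t_crit_n = stats.t.ppf(1 - alpha / 2, n_subj - 1)
print("Normalization-based CI half-widths per condition:", t_crit_n * se_norm)
```

The sketch only reproduces the two methods the abstract contrasts; the paper's own generalization of the Loftus and Masson approach (which permits assessing circularity) is not shown here.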

Publication types

  • Research Support, N.I.H., Extramural
  • Research Support, Non-U.S. Gov't

MeSH terms

  • Bias*
  • Confidence Intervals*
  • Humans
  • Psychology, Experimental / methods*
  • Psychology, Experimental / standards
  • Psychology, Experimental / statistics & numerical data
  • Research Design*