The Failure of Null Hypothesis Significance Testing When Studying Incremental Changes, and What to Do About It

Pers Soc Psychol Bull. 2018 Jan;44(1):16-23. doi: 10.1177/0146167217729162. Epub 2017 Sep 7.

Abstract

A standard mode of inference in social and behavioral science is to establish stylized facts using statistical significance in quantitative studies. However, in a world in which measurements are noisy and effects are small, this will not work: selection on statistical significance leads to effect sizes that are overestimated and often in the wrong direction. After a brief discussion of two examples, one in economics and one in social psychology, we consider the procedural solution of open postpublication review, the design solution of devoting more effort to accurate measurements and within-person comparisons, and the statistical analysis solution of multilevel modeling and reporting all results rather than selecting on significance. We argue that the current replication crisis in science arises in part from the ill effects of null hypothesis significance testing being used to study small effects with noisy data. In such settings, apparent success comes easily, but truly replicable results require a more serious connection between theory, measurement, and data.
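The central claim, that conditioning on statistical significance inflates effect estimates and can even reverse their sign when effects are small and measurements are noisy, can be checked with a minimal simulation. The sketch below is illustrative only; the true effect size, standard error, and number of simulated studies are assumptions chosen for demonstration, not values from the paper.

```python
import random
import statistics

random.seed(42)

true_effect = 0.1   # assumed small true effect
se = 0.5            # assumed standard error: a noisy measurement
n_sims = 10_000     # number of simulated studies

# Each simulated study produces a normally distributed effect estimate.
estimates = [random.gauss(true_effect, se) for _ in range(n_sims)]

# Keep only estimates that are "statistically significant" at the
# two-sided 5% level, i.e. |estimate| > 1.96 * se.
significant = [est for est in estimates if abs(est) > 1.96 * se]

# Exaggeration ratio (Type M error): how much the average significant
# estimate overstates the true effect in magnitude.
exaggeration = statistics.mean(abs(e) for e in significant) / true_effect

# Sign error rate (Type S error): fraction of significant estimates
# that point in the wrong direction.
wrong_sign = sum(e < 0 for e in significant) / len(significant)

print(f"significant studies: {len(significant)} of {n_sims}")
print(f"exaggeration ratio:  {exaggeration:.1f}x")
print(f"wrong-sign fraction: {wrong_sign:.2f}")
```

With these assumed numbers, the few studies that clear the significance threshold overstate the true effect by roughly an order of magnitude, and a substantial fraction of them have the wrong sign, which is the pattern the abstract describes.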

Keywords: economics; null hypothesis significance testing; replication crisis; statistics.

Publication types

  • Research Support, U.S. Gov't, Non-P.H.S.

MeSH terms

  • Data Interpretation, Statistical
  • Humans
  • Peer Review, Research
  • Reproducibility of Results*
  • Research Design*