Many studies are woefully underpowered: they lack the statistical power to detect the effects they seek. In many cases there is a clear need for more power. How can we get it? Here are:
5 ways to increase statistical power
- Increase the sample size
The size of your sample will likely have the biggest effect on the statistical power of your study, so to increase power, increase N. In some cases doubling N will lead to a greater than doubling of statistical power, but not always; in other situations increasing N will have only a marginal effect. The point is not to throw money at the problem but to determine your ideal sample size by analyzing the trade-off between sampling costs, which are additive, and the corresponding gains in power, which may be incremental and diminishing.
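To see those diminishing returns in action, here is a minimal sketch using the normal approximation to a two-sample test. The effect size (d = 0.5) and the per-group sample sizes are illustrative assumptions, not figures from this chapter:

```python
# Power versus sample size, using the normal approximation to a
# two-sided, two-sample test (effect size d, n subjects per group).
# All numbers are illustrative assumptions.
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power(d, n, alpha=0.05):
    """Approximate power: effect size d, n per group, two-tailed alpha."""
    crit = Z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n / 2)  # noncentrality under the alternative
    return Z.cdf(ncp - crit) + Z.cdf(-ncp - crit)

# Each doubling of n buys less than the previous one:
for n in (32, 64, 128, 256):
    print(n, round(power(0.5, n), 3))
```

The first doubling lifts power from roughly the coin-flip range toward respectability; later doublings buy only a few extra percentage points, which is exactly the cost-benefit trade-off described above.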
- Search for bigger effects
Strong, direct relationships are easier to spot than weak, mediated, or moderated ones, so one way to increase power is to look for outcomes that are closely related to treatments (or dependent variables that are closely related to predictors).
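The payoff from hunting bigger effects is steep. A quick sketch, again under the normal approximation and with Cohen's conventional effect sizes as assumed inputs:

```python
# Holding n fixed, bigger effects are dramatically easier to detect.
# Normal approximation to a two-sided, two-sample test; the effect
# sizes below are Cohen's small / medium / large conventions.
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power(d, n, alpha=0.05):
    crit = Z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n / 2)
    return Z.cdf(ncp - crit) + Z.cdf(-ncp - crit)

for d in (0.2, 0.5, 0.8):  # small, medium, large effects
    print(d, round(power(d, 100), 2))
```

With 100 subjects per group, a small effect leaves you with long odds of detection while a large effect is nearly a sure thing.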
- Reduce measurement error
Unreliable measures are like dirty lenses on a telescope: they make it harder to see what you’re looking for. It is beyond the scope of this short book to examine the relationship between measurement error and statistical power, but it’s not a happy one. Measurement error is like a leech sucking the power out of your study.
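One way to see the leech at work: measurement error attenuates observed correlations (Spearman's classic attenuation formula), so the sample size needed to detect an effect balloons. A sketch with assumed reliabilities of .70 and a Fisher z-transform approximation for the required N:

```python
# Measurement error shrinks observed correlations, which inflates the
# sample size needed to detect them. The true correlation (0.40) and
# reliabilities (0.70) below are assumed values for illustration.
from math import atanh
from statistics import NormalDist

Z = NormalDist()

def attenuated(r_true, rel_x, rel_y):
    """Observed correlation given the reliabilities of both measures."""
    return r_true * (rel_x * rel_y) ** 0.5

def n_needed(r, alpha=0.05, target_power=0.80):
    """Approximate N to detect correlation r (Fisher z-transform)."""
    z_a = Z.inv_cdf(1 - alpha / 2)
    z_b = Z.inv_cdf(target_power)
    return round(((z_a + z_b) / atanh(r)) ** 2 + 3)

r = 0.40
print(n_needed(r))                        # perfectly reliable measures
print(n_needed(attenuated(r, 0.7, 0.7)))  # noisy measures: N roughly doubles
```

With both measures at .70 reliability, a true correlation of .40 shows up as .28, and the N required for 80% power roughly doubles.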
- Choose appropriate statistical tests for the data
Parametric tests are generally more powerful than non-parametric tests when their assumptions are met; directional (one-tailed) tests are more powerful than non-directional (two-tailed) tests; and tests involving metric data are more powerful than tests involving nominal or ordinal data.
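Two of those rankings can be sketched numerically, under assumed figures (d = 0.4, n = 50 per group). A one-tailed test beats its two-tailed counterpart, and a rank test like the Mann-Whitney pays a small efficiency tax relative to the t-test when the data really are normal (its asymptotic relative efficiency is 3/π, about 0.955, which we model here as a shrunken effective sample size):

```python
# One-tailed vs two-tailed, and parametric vs rank-based, under the
# normal approximation. d = 0.4 and n = 50 per group are assumptions.
from math import pi, sqrt
from statistics import NormalDist

Z = NormalDist()

def power(d, n, alpha=0.05, tails=2):
    crit = Z.inv_cdf(1 - alpha / tails)
    ncp = d * sqrt(n / 2)
    p = Z.cdf(ncp - crit)
    return p + Z.cdf(-ncp - crit) if tails == 2 else p

d, n = 0.4, 50
print(round(power(d, n, tails=2), 2))           # two-tailed t-test
print(round(power(d, n, tails=1), 2))           # one-tailed t-test
print(round(power(d, n * 3 / pi, tails=2), 2))  # Mann-Whitney, effective n
```

The one-tailed version buys a meaningful bump in power for free, provided you can genuinely justify the direction in advance.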
- Relax the alpha significance criterion (α)
Take a hammer and smash those stone tablets of Fisher’s. Yes, you will run into institutional opposition—the .05 is held sacred by many. But a thoughtful researcher should be able to make a good argument for relaxing alpha in settings where the risk of a Type II error is greater than the risk of a Type I error. It won’t be easy to convince reviewers, but you could try.
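Part of that good argument can be numerical: show reviewers what each alpha costs in Type II risk. A sketch under assumed conditions (a smallish effect, d = 0.3, with 50 subjects per group, normal approximation):

```python
# The Type I / Type II trade-off as alpha is relaxed. The effect size
# (d = 0.3) and group size (n = 50) are assumed for illustration.
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def power(d, n, alpha):
    crit = Z.inv_cdf(1 - alpha / 2)
    ncp = d * sqrt(n / 2)
    return Z.cdf(ncp - crit) + Z.cdf(-ncp - crit)

for alpha in (0.01, 0.05, 0.10):
    beta = 1 - power(0.3, 50, alpha)  # Type II risk at this alpha
    print(f"alpha={alpha:.2f}  Type II risk={beta:.2f}")
```

Under these assumptions, clinging to a strict alpha means accepting a far larger risk of missing a real effect, which is exactly the asymmetry a thoughtful researcher can point to.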
For more, see my book Statistical Power Trip: