There are at least five ways to increase the statistical power of a study. The most expensive way is to increase the sample size. Where this is not possible, there are four other options:
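Before turning to those options, a quick sketch of the sample-size lever. The function below is my own back-of-envelope illustration, not a full power analysis: it uses a normal approximation to the power of a two-tailed, two-sample comparison of means (ignoring the small-sample t correction), and shows power climbing with the per-group n.

```python
# Sketch (my own illustration): power of a two-sample comparison of
# means under a normal approximation, as a function of group size.
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Two-tailed power for standardized effect size d (normal approx.)."""
    z_crit = norm.ppf(1 - alpha / 2)               # two-tailed critical value
    noncentrality = d * (n_per_group / 2) ** 0.5   # expected z under H1
    return norm.sf(z_crit - noncentrality)         # P(reject | H1 true)

for n in (20, 80, 320):
    print(n, round(approx_power(0.5, n), 2))
```

Quadrupling the sample size buys a lot of power, which is precisely why it is the expensive option.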

1. **Search for bigger effects**

Direct, tight relationships are easier to spot than mediated or moderated ones, so one way to increase power is to look for outcomes that are closely related to treatments (or dependent variables that are closely related to predictors).
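To see how much the size of the effect matters, here is a small Monte Carlo sketch (the function name and numbers are my own illustration): with the same thirty cases per group, a large standardized effect is detected most of the time while a small one is usually missed.

```python
# My own illustration: simulated power of a two-sample t-test at n = 30
# per group, for a small (d = 0.2) versus a large (d = 0.8) effect.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def power_for_effect(d, n=30, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, 1.0, n)   # control group
        b = rng.normal(d, 1.0, n)     # treatment group, shifted by d
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps                # proportion of significant results

print(power_for_effect(0.2), power_for_effect(0.8))
```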

2. **Reduce measurement error**

Unreliable measures are like dirty lenses on telescopes—they make it harder to see what you’re looking for. It is beyond the scope of this short book to examine the relationship between measurement error and statistical power, but it’s not a happy one. Measurement error is like a leech sucking the power out of your study.
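A quick simulation makes the leech visible (the noise levels are my own illustration): adding measurement noise to both groups attenuates the standardized effect, and power drains away accordingly.

```python
# My own illustration: measurement noise added to the outcome shrinks
# the observed standardized effect and, with it, the power of the test.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

def power_with_noise(noise_sd, d=0.6, n=40, reps=2000, alpha=0.05):
    hits = 0
    for _ in range(reps):
        # true scores plus independent measurement error in each group
        a = rng.normal(0.0, 1.0, n) + rng.normal(0.0, noise_sd, n)
        b = rng.normal(d, 1.0, n) + rng.normal(0.0, noise_sd, n)
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

print(power_with_noise(0.0), power_with_noise(1.5))  # clean vs noisy measure
```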

3. **Choose appropriate statistical tests for the data**

Parametric tests are more powerful than non-parametric tests; directional (one-tailed) tests are more powerful than non-directional (two-tailed) tests; and tests involving metric data are more powerful than tests involving nominal or ordinal data.
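You can watch the first two of these claims play out on simulated normal data (the setup below is my own illustration): the one-tailed t-test rejects more often than the two-tailed version, with the non-parametric Mann–Whitney U shown alongside for comparison—on normal data it typically gives up a little power to the t-test.

```python
# My own illustration: rejection rates of three tests on the same
# simulated normal data with a true mean difference of d = 0.5.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(2)
n, d, reps, alpha = 25, 0.5, 2000, 0.05
wins = {"t one-tailed": 0, "t two-tailed": 0, "Mann-Whitney": 0}

for _ in range(reps):
    a = rng.normal(0.0, 1.0, n)           # control
    b = rng.normal(d, 1.0, n)             # treatment, shifted upward
    if ttest_ind(a, b, alternative="less").pvalue < alpha:
        wins["t one-tailed"] += 1         # directional: mean(a) < mean(b)
    if ttest_ind(a, b).pvalue < alpha:
        wins["t two-tailed"] += 1
    if mannwhitneyu(a, b, alternative="two-sided").pvalue < alpha:
        wins["Mann-Whitney"] += 1

print({k: v / reps for k, v in wins.items()})
```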

4. **Relax the alpha significance criterion (α)**

Take a hammer and smash those stone tablets of Fisher’s. Yes, you will run into institutional opposition—the .05 is held sacred by many. But a thoughtful researcher should be able to make a good argument for relaxing alpha in settings where the risk of a Type II error is greater than the risk of a Type I error. It won’t be easy to convince reviewers, but you could try.
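The trade-off you would be arguing for is easy to quantify. Using the same kind of normal approximation found in standard power analyses (the numbers here are my own illustration), relaxing alpha from .01 to .10 buys a visible amount of power for a modest effect—at the price, of course, of more Type I errors:

```python
# My own illustration: power at three alpha levels for a modest effect
# (d = 0.4, n = 50 per group), under a normal approximation.
from scipy.stats import norm

def power_at_alpha(d, n_per_group, alpha):
    """Two-tailed power for standardized effect size d (normal approx.)."""
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - d * (n_per_group / 2) ** 0.5)

for alpha in (0.01, 0.05, 0.10):
    print(alpha, round(power_at_alpha(0.4, 50, alpha), 2))
```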

For more, see my book *Statistical Power Trip…*