The power of any test of statistical significance is defined as the probability that it will reject a false null hypothesis. **Statistical power** is inversely related to beta (*β*), the probability of making a Type II error. In short, power = 1 – *β*.

In plain English, statistical power is the likelihood that a study will detect an effect when there is an effect there to be detected. If statistical power is high, the probability of making a Type II error, or concluding there is no effect when, in fact, there is one, goes down.

Statistical power is affected chiefly by the size of the effect and the size of the sample used to detect it. Bigger effects are easier to detect than smaller ones, and larger samples offer greater test sensitivity than smaller ones.
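Both relationships can be seen directly by simulation. The sketch below is a minimal, illustrative example (not a prescribed method): it estimates the power of a two-sample comparison of normal data by counting how often a true difference in means is detected. The function name `simulate_power` and all parameter values are hypothetical choices for the demonstration.

```python
import random
import statistics

def simulate_power(effect_size, n, trials=2000, seed=42):
    """Estimate power by simulation: the fraction of repeated studies
    in which a real effect of the given size is detected at alpha = 0.05."""
    rng = random.Random(seed)
    detections = 0
    for _ in range(trials):
        # Group A has mean 0; group B has a true effect of `effect_size`.
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]
        b = [rng.gauss(effect_size, 1.0) for _ in range(n)]
        # Welch-style t statistic from sample means and variances.
        se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
        t = (statistics.mean(b) - statistics.mean(a)) / se
        # 1.96 is the approximate two-sided critical value for alpha = 0.05.
        if abs(t) > 1.96:
            detections += 1
    return detections / trials

# Larger effects and larger samples both raise the estimated power.
print(simulate_power(effect_size=0.2, n=50))
print(simulate_power(effect_size=0.8, n=50))
print(simulate_power(effect_size=0.5, n=20))
print(simulate_power(effect_size=0.5, n=100))
```

Running the comparison shows the point made above: holding the sample fixed, the larger effect is detected far more often; holding the effect fixed, the larger sample detects it far more often.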
