For any statistical test, the probability of making a Type I error is denoted by the Greek letter **alpha** (*α*), and the probability of making a Type II error is denoted by the Greek letter **beta** (*β*).

Alpha (or beta) can range from 0 to 1, where 0 means there is no chance of making a Type I (or Type II) error and 1 means such an error is unavoidable.

Following Fisher, the **critical level of alpha** for determining whether a result can be judged statistically significant is conventionally set at .05. Where this standard is adopted, the likelihood of making a Type I error – of concluding there is an effect when there is none – cannot exceed 5%.
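You can see this cap in action with a short simulation. The sketch below (an illustrative setup, not taken from the text) repeatedly samples from a population where the null hypothesis is *true* and runs a two-sided z-test at *α* = .05; the proportion of wrongful rejections lands near 5%.

```python
import random
import statistics

# Illustrative simulation: the null is TRUE (population mean = 0, sd = 1),
# so every rejection below is a Type I error. With alpha = .05 we expect
# to reject in roughly 5% of experiments.
random.seed(42)

Z_CRIT = 1.96          # two-sided critical z value for alpha = .05
N = 30                 # sample size per simulated experiment
N_EXPERIMENTS = 10_000

false_positives = 0
for _ in range(N_EXPERIMENTS):
    sample = [random.gauss(0, 1) for _ in range(N)]
    # z statistic with known sigma = 1: mean / (sigma / sqrt(n))
    z = statistics.mean(sample) / (1 / N ** 0.5)
    if abs(z) > Z_CRIT:
        false_positives += 1   # rejected the null when no effect exists

type_i_rate = false_positives / N_EXPERIMENTS
print(f"Observed Type I error rate: {type_i_rate:.3f}")
```

Run it and the observed rate hovers around .05, drifting only by simulation noise.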

For the past 80 years, alpha has received all the attention. But few researchers seem to realize that alpha and beta levels are related: as one goes down, the other must go up. While alpha safeguards us against making Type I errors, it does nothing to protect us from making Type II errors.

A well-thought-out research design is one that assesses the relative risk of making each type of error and then strikes an appropriate balance between them. For more, see my jargon-free guide *Statistical Power Trip*…