For any statistical test, the probability of making a Type I error is denoted by the Greek letter alpha (α), and the probability of making a Type II error is denoted by the Greek letter beta (β).

Alpha (or beta) can range from 0 to 1, where 0 means there is no chance of making a Type I (or Type II) error and 1 means such an error is unavoidable.

Following Fisher, the critical level of alpha for determining whether a result can be judged statistically significant is conventionally set at .05. Where this standard is adopted, the likelihood of making a Type I error – of concluding there is an effect when there is none – cannot exceed 5%.
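This guarantee can be seen in a quick simulation. The sketch below (a hypothetical illustration, not from the guide) repeatedly runs a two-sided z-test on samples drawn from a population where the null hypothesis is true; the rejection rate lands close to the chosen alpha of .05. The sample size and trial count are arbitrary choices for the demonstration.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
ALPHA = 0.05     # conventional significance level
N = 30           # sample size per simulated experiment (arbitrary)
TRIALS = 20_000  # number of simulated experiments

# Critical z value for a two-sided test at alpha = .05 (about 1.96).
z_crit = NormalDist().inv_cdf(1 - ALPHA / 2)

false_positives = 0
for _ in range(TRIALS):
    # Draw from a population where the null is TRUE (mean 0, sd 1),
    # so every rejection is, by construction, a Type I error.
    sample = [random.gauss(0, 1) for _ in range(N)]
    z = mean(sample) / (1 / N ** 0.5)  # z = sample mean / (sigma / sqrt(n)), sigma = 1
    if abs(z) > z_crit:
        false_positives += 1

rate = false_positives / TRIALS
print(f"Observed Type I error rate: {rate:.3f}")  # close to 0.05
```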

For the past 80 years, alpha has received all the attention. But few researchers seem to realize that alpha and beta levels are related: all else being equal, as one goes down, the other must go up. While alpha safeguards us against making Type I errors, it does nothing to protect us from making Type II errors. A well-thought-out research design is one that assesses the relative risk of making each type of error and then strikes an appropriate balance between them. For more, see my jargon-free guide Statistical Power Trip…
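The alpha–beta trade-off can be made concrete with a worked calculation. For a one-sided z-test with a known population standard deviation, beta can be computed directly from alpha, the true standardized effect size, and the sample size. The sketch below (my own illustration; the effect size of 0.5 and n of 30 are assumed values, not taken from the text) shows beta climbing as alpha is tightened:

```python
from statistics import NormalDist

norm = NormalDist()

def beta_for(alpha, d=0.5, n=30):
    """Type II error rate for a one-sided z-test detecting a true
    standardized effect d with sample size n (sigma assumed known)."""
    z_crit = norm.inv_cdf(1 - alpha)        # rejection threshold in z units
    # Beta = probability the observed z falls below the threshold
    # even though the true effect shifts its mean to d * sqrt(n).
    return norm.cdf(z_crit - d * n ** 0.5)

for alpha in (0.10, 0.05, 0.01):
    print(f"alpha = {alpha:.2f}  ->  beta = {beta_for(alpha):.3f}")
```

Holding the effect size and sample size fixed, demanding a stricter alpha buys protection against false positives at the direct cost of more false negatives; only a larger sample (or a larger true effect) can reduce both at once.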

This entry was posted on Monday, May 31st, 2010 at 12:58 am and is filed under Type I error, Type II error.

“The primary product of a research inquiry is one or more measures of effect size, not p values.”
~ Jacob Cohen

“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.”
~ Gene Glass