How do I know if my study has enough statistical power?

Let’s say you have designed a study and now want to know the probability that it will detect an effect, assuming there is a genuine effect there to be detected. This probability can be calculated by running a statistical power calculation with power as the dependent variable. The only tricky part is estimating the size of the effect in advance: if your estimate is too high, you will think you have more power than you actually do.

For example, if you have a sample of N = 50 and you expect an effect size equivalent to r = .25, then you will have a 42% probability of getting a statistically significant result at the conventional two-tailed alpha level (α = .05). In other words, your results are not likely to pan out. (You might want to think about ways of boosting the power of your study before proceeding.)
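As a rough check of that number (this is not necessarily the calculator used for the post), the power of a test of a correlation can be approximated using Fisher’s z transform with nothing but Python’s standard library:

```python
from math import atanh, sqrt
from statistics import NormalDist

def power_correlation(r, n, alpha=0.05, tails=2):
    """Approximate power for testing H0: rho = 0, via Fisher's z transform."""
    z = NormalDist()
    delta = atanh(r) * sqrt(n - 3)          # noncentrality under H1
    z_crit = z.inv_cdf(1 - alpha / tails)   # critical value of the test
    return 1 - z.cdf(z_crit - delta)        # P(reject H0 | true effect = r)

print(round(power_correlation(0.25, 50), 2))  # ≈ 0.42
```

The approximation agrees with the 42% figure above to two decimal places; exact routines such as G*Power may differ very slightly.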

Let’s say you want to determine the minimum effect size that your study will be able to detect given certain levels of alpha and power. Again, you just run a basic power calculation, perhaps using a power calculator, with the effect size set as the dependent variable.

For example, if you set alpha and power at conventional levels of .05 and .80 respectively, and you have a sample of N = 50, then the minimum detectable effect size will be equivalent to r = .38.
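The same Fisher-z approximation can be inverted to solve for the minimum detectable effect size (again, a sketch rather than the exact algorithm a dedicated power calculator uses):

```python
from math import tanh, sqrt
from statistics import NormalDist

def min_detectable_r(n, alpha=0.05, power=0.80, tails=2):
    """Smallest correlation detectable at the given alpha and power,
    via the Fisher z approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / tails)  # critical value for the test
    z_beta = z.inv_cdf(power)               # quantile corresponding to power
    return tanh((z_alpha + z_beta) / sqrt(n - 3))

print(min_detectable_r(50))  # ≈ .39 by this approximation; exact routines give ≈ .38
```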

This entry was posted on Monday, May 31st, 2010 at 12:45 am and is filed under power analysis, statistical power. You can follow any responses to this entry through the RSS 2.0 feed.

2 Responses to How do I know if my study has enough statistical power?



Dear sir,

I just calculated your example in the second paragraph using G*Power (http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/) and it tells me that the power of that particular test is 0.56. I’m fairly certain I plugged in the correct numbers. So is G*Power wrong or are you?

By the way, this website is of tremendous help to me. I already ordered your book!

Kind regards,

Henk.

Try crunching the numbers for a two-tailed test. You get a result like yours when you run a one-tailed test.
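You can see the gap between the two tests directly with a quick Fisher-z approximation (a sketch, not G*Power’s exact routine, which gives a slightly higher one-tailed value of about .56):

```python
from math import atanh, sqrt
from statistics import NormalDist

# Power for r = .25, N = 50 under one- vs. two-tailed alpha = .05,
# using the Fisher z approximation.
z = NormalDist()
delta = atanh(0.25) * sqrt(50 - 3)                   # noncentrality
two_tailed = 1 - z.cdf(z.inv_cdf(1 - 0.05 / 2) - delta)
one_tailed = 1 - z.cdf(z.inv_cdf(1 - 0.05) - delta)
print(round(two_tailed, 2), round(one_tailed, 2))    # ≈ 0.42 vs ≈ 0.54
```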

I’m glad you like the website!