Let’s say you have designed a study and now you want to know the probability that your study will detect an effect, assuming there is a genuine effect there to be detected. This probability can be calculated by doing a **statistical power calculation** with power set as the dependent variable. The only tricky part is estimating the size of the effect in advance: if your estimate is too high, you will think you have more power than you actually do.

For example, if you have a sample of *N* = 50 and you expect the effect size will be equivalent to *r* = .25, then you will have a 42% probability of getting a statistically significant result given a conventional level of alpha (two-tailed *α* = .05). In other words, your results are not likely to pan out. (You might want to think about ways of boosting the power of your study before proceeding.)
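This figure can be checked with a short calculation. The sketch below is one way to do it, approximating power for a correlation test via the Fisher *z* transformation of *r*; dedicated programs such as G*Power use exact noncentral distributions, so their answers can differ by a point or two.

```python
from math import atanh, sqrt
from scipy.stats import norm

def power_correlation(r, n, alpha=0.05):
    """Approximate two-tailed power for testing H0: rho = 0,
    using the Fisher z (normal) approximation."""
    z = atanh(r)                     # Fisher z of the expected effect
    se = 1 / sqrt(n - 3)             # standard error of Fisher z
    z_crit = norm.ppf(1 - alpha / 2) # two-tailed critical value
    ncp = z / se                     # standardized shift under H1
    # probability of landing beyond either critical value
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

print(round(power_correlation(0.25, 50), 2))  # ≈ 0.42
```

Passing `alpha` directly as a one-tailed value (and dropping the division by 2) reproduces the one-tailed figure discussed in the comments below.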

Let’s say you want to determine the minimum effect size that your study will be able to detect given certain levels of alpha and power. Again, you just run a basic power calculation, perhaps using a power calculator, with the effect size set as the dependent variable.

For example, if you set alpha and power at conventional levels of .05 and .80 respectively, and you have a sample of *N* = 50, then the minimum detectable effect size will be equivalent to *r* = .38.
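Solving the same Fisher-*z* approximation for the effect size instead of power gives the minimum detectable *r*. A sketch (the approximation lands within a rounding step of the .38 quoted above; exact routines give slightly smaller values):

```python
from math import sqrt, tanh
from scipy.stats import norm

def min_detectable_r(n, alpha=0.05, power=0.80):
    """Smallest |r| detectable at the given two-tailed alpha and power,
    via the Fisher z (normal) approximation."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for alpha
    z_beta = norm.ppf(power)           # quantile for desired power
    # invert the Fisher z transformation to get back to r
    return tanh((z_alpha + z_beta) / sqrt(n - 3))

print(round(min_detectable_r(50), 2))  # ≈ 0.39
```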


**Henk** (September 21, 2011 / 11:42 pm)

Dear sir,

I just calculated your example in the second paragraph using G*Power (http://www.psycho.uni-duesseldorf.de/aap/projects/gpower/) and it tells me that the power of that particular test is 0.56. I’m fairly certain I plugged in the correct numbers. So is G*Power wrong or are you?

By the way, this website is of tremendous help to me. I already ordered your book!

Kind regards,

Henk.

**Paul Ellis** (September 22, 2011 / 4:15 pm)

Try crunching the numbers for a two-tailed test. You get a result like yours when you run a one-tailed test.

I’m glad you like the website!