**FAQs about Effect Size**

What is an effect size?

Can you give me some examples of an effect size?

Why does my research methods textbook have no entry for “effect size”?

Can you give me three reasons for reporting effect sizes?

Why are journal editors increasingly asking authors to report effect sizes?

Which editors have encouraged the reporting of effect sizes?

Why can’t I just judge my result by looking at the *p* value?

Why can’t I just report the *R*²?

What are the two “families” of effect size?

What’s a good effect size index for comparing the means of two groups?

The journal I am submitting to is silent on whether I should report effect sizes. Should I do it anyway?

Where can I find a good effect size calculator?

Can you recommend a plain English introduction to effect sizes?
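One of the questions above — what effect size index to use when comparing two group means — has a short computational answer: Cohen's *d*, the difference between the means divided by the pooled standard deviation. As a minimal plain-Python sketch (the function name and data are illustrative, not from the source):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference between two groups,
    using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled standard deviation across both groups
    sd_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled

# Hypothetical scores for a treatment and a control group
d = cohens_d([5, 6, 7, 8], [3, 4, 5, 6])
```

By convention *d* is in standard-deviation units, so it can be compared across studies that used different measurement scales.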

**FAQs about Statistical Power**

What is statistical power?

What are the dangers of having too little or too much statistical power?

How do I calculate statistical power?

What is an ideal level of statistical power?

What do alpha and beta refer to in statistics?

What’s the difference between Type I and II errors?

How big a sample size do I need to test my hypotheses?

How do I know if my study has enough statistical power?

What’s wrong with post hoc power analyses?

Can you recommend a good power calculator?

What are four ways I can boost the statistical power of my study?

Which journals have had their mean levels of power assessed?

How does low statistical power lead to Type I errors?

How does low statistical power lead to Type II errors?

Can you recommend a plain English introduction to power analysis?
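Two of the questions above — how to calculate statistical power and how big a sample is needed — can be illustrated with a normal-approximation formula for a two-sided, two-sample test: power ≈ Φ(*d*·√(*n*/2) − *z*₁₋α/₂). A plain-Python sketch (function names are my own; dedicated tools such as G*Power give exact *t*-based answers):

```python
import math

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def z_quantile(p):
    """Inverse of normal_cdf, found by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if normal_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of a
    standardized mean difference d, with n subjects per group."""
    z_crit = z_quantile(1 - alpha / 2)
    # Noncentrality: expected z statistic under the alternative
    delta = d * math.sqrt(n_per_group / 2)
    return normal_cdf(delta - z_crit)

# A medium effect (d = .5) with 64 subjects per group
p = power_two_sample(0.5, 64)
```

This reproduces the textbook result that detecting a medium effect (*d* = .5) at α = .05 with 80% power requires roughly 64 subjects per group.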

**FAQs about Interpreting Research Results**

What is the difference between statistical and substantive significance?

How do researchers confuse statistical with substantive significance?

Can a result be statistically nonsignificant but important?

Why should I interpret the substantive significance of my results?

Which journal editors insist researchers interpret the substantive significance of their results?

What does a statistical significance test actually tell us?

What does a *p* value represent?

Why is it a dumb idea to interpret results by looking at *p* values?

Why do you say a *p* value is a confounded index?

I’ve got an effect size estimate. What’s next?

What are the three C’s of interpretation?

What are some conventions for interpreting different effect sizes?

When are small effects important?

What is the “curse of multiplicity”?

What’s wrong with fishing in my dataset?

What’s wrong with HARKing once in a while?

**FAQs about Meta-Analysis**

What are two approaches to reviewing past research?

What’s wrong with the traditional narrative review of the literature?

What is meta-analysis?

What are three good reasons for doing a meta-analysis?

Why are you so convinced I can learn meta-analysis?

Can you show me how to do meta-analysis in just two minutes?
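The core of a basic meta-analysis is simpler than it sounds: pool the effect size estimates from several studies, weighting each by the inverse of its variance so that more precise studies count for more. A minimal fixed-effect sketch (the function name and numbers are illustrative only; real meta-analyses also assess heterogeneity and often use random-effects models):

```python
def fixed_effect_meta(effects, variances):
    """Fixed-effect meta-analysis: inverse-variance weighted
    average of study effect sizes, plus its standard error."""
    weights = [1 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1 / sum(weights)) ** 0.5
    return pooled, se

# Two hypothetical studies: a small one (d = .5, variance .04)
# and a large, more precise one (d = .3, variance .01)
pooled, se = fixed_effect_meta([0.5, 0.3], [0.04, 0.01])
```

Note how the pooled estimate (0.34) sits much closer to the larger study's result, because that study carries four-fifths of the total weight.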