Most textbooks are written to impress rather than instruct. Admittedly, terms like attenuation multiplier, random-effects analysis and Q statistic can be intimidating to the uninitiated. But here’s the secret: basic meta-analysis is easy. If you can add, subtract, multiply and divide, you can do meta-analysis.

Let me prove it by running you through a basic meta-analysis of the following 4 studies:

Study 1: *N* = 60, *r* = -.30, *p* = .02

Study 2: *N* = 240, *r* = -.50, *p* < .001

Study 3: *N* = 20, *r* = .05, *p* = .83

Study 4: *N* = 60, *r* = .30, *p* = .02

where *N* refers to the sample size, *r* refers to the effect size estimate expressed in the correlational metric, and *p* refers to the *p* value or statistical significance of each study’s result.

First off, note how we have two positive results (.05 and .30) and two negative results (-.30 and -.50). If we were reviewing this body of work using a narrative summary, we would find it difficult to come to any conclusion. Half the studies say there’s a positive effect; half say there’s a negative effect. We might think the available evidence is inconclusive. We would be wrong.

We might also note that three of the results are statistically significant, while the result obtained from Study 3 is not. A narrative reviewer might be tempted to ignore this statistically nonsignificant result, but to the meta-analyst all the evidence is important. What is the evidence? It is each study’s independent estimate of the common effect size (the four *r*s). *P* values are not evidence, so for our simple meta-analysis we will ignore them altogether.

Our goal is to calculate a weighted mean estimate of the effect size. To do this we will weight each of the four *r*s by its respective sample size. Why? Because results obtained from bigger samples will be less tainted by sampling error and therefore should be given more emphasis in our analysis.

To calculate a weighted mean effect size, we multiply the *N* by the *r* for each study, sum the lot, then divide the result by the combined sample size (*N*_{1} + *N*_{2} + *N*_{3} + *N*_{4}), like this:

[(60 x -.30) + (240 x -.50) + (20 x .05) + (60 x .30)] / (60 + 240 + 20 + 60)

= -119 / 380

= -.31

This result suggests that the population effect is negative in direction and medium in size according to Cohen’s effect size conventions. This is a far more definitive conclusion than we could have reached using a narrative review.
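The arithmetic above can be reproduced in a few lines of Python, using the sample sizes and effect sizes from the four studies:

```python
# Sample sizes and effect sizes for the four studies
Ns = [60, 240, 20, 60]
rs = [-0.30, -0.50, 0.05, 0.30]

# Weight each r by its N, sum the lot, divide by the combined sample size
weighted_mean_r = sum(N * r for N, r in zip(Ns, rs)) / sum(Ns)
print(round(weighted_mean_r, 2))  # -0.31
```

Swapping in your own lists of *N*s and *r*s is all it takes to meta-analyze a different set of studies this way.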

We could then go on and calculate a standard error and use this to estimate a 95% confidence interval. In this case the interval ranges from -.60 to -.02. It’s not particularly precise, but as it excludes the null value of zero we could say – if we felt the need to – that our mean effect size estimate is statistically significant.

And that’s meta-analysis in a nutshell!

Of course, if we were doing this for real, we might want to go further and consider whether our four studies are estimating not one population effect size, but a sample of population effect sizes. If so, we would need to adopt a slightly more complicated random-effects procedure to account for the variability in the sample of parameters (which I actually did when calculating the standard error and CI above).
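The random-effects calculation alluded to above isn’t spelled out here, but one simple approach in this spirit takes the *N*-weighted variance of the observed *r*s, divides it by the number of studies to get a squared standard error, and builds a normal-approximation 95% interval around the weighted mean. A sketch, assuming that approach (the variable names and the 1.96 multiplier are my own choices, not from the text):

```python
import math

Ns = [60, 240, 20, 60]
rs = [-0.30, -0.50, 0.05, 0.30]
k = len(Ns)

# N-weighted mean effect size (the same -.31 as before)
mean_r = sum(N * r for N, r in zip(Ns, rs)) / sum(Ns)

# N-weighted variance of the observed rs; under a random-effects view this
# spread reflects both sampling error and true variation in the parameters
var_r = sum(N * (r - mean_r) ** 2 for N, r in zip(Ns, rs)) / sum(Ns)

# Standard error of the mean effect size: spread the variance over k studies
se = math.sqrt(var_r / k)

# Normal-approximation 95% confidence interval
lower, upper = mean_r - 1.96 * se, mean_r + 1.96 * se
print(round(lower, 2), round(upper, 2))  # roughly -0.6 and -0.02
```

With these four studies this sketch yields an interval close to the -.60 to -.02 reported above; a full treatment of the random-effects machinery is in the book cited below.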

For a plain-English introduction to meta-analysis that covers both fixed- and random-effects procedures, see *The Essential Guide to Effect Sizes* (chapters 5 and 6).