What are two approaches to reviewing past research?

May 30, 2010
  1. the qualitative approach, also known as the narrative review
  2. the quantitative approach, also known as meta-analysis

The narrative review is useful for documenting the unfolding story of a particular research theme. The aim is to summarize and synthesize the conclusions of others into a compelling narrative about the effect of interest. Of course, this can be tricky when previous researchers have come to different conclusions or have inferred substantive effects from p values alone.

In contrast, meta-analysis completely ignores the conclusions that others have drawn and looks instead at the evidence that has been collected. Evidence, in this case, refers to study-specific estimates of a common population effect size.

By combining the independent estimates into an average effect size, a meta-analysis is able to draw an overall conclusion regarding the direction and magnitude of the effect of interest.

Source: The Essential Guide to Effect Sizes

What’s wrong with the traditional narrative review of the literature?

May 30, 2010

What’s right with it?!

Most of us take a qualitative approach to reviewing the literature for no reason other than that’s what we were taught to do or what we’ve read. But there are at least four problems with narrative reviews:

  1. they are rarely comprehensive
  2. they are highly susceptible to reviewer bias
  3. they seldom take into account differences in the quality of studies
  4. they often come to the wrong conclusion or no conclusion at all, hence the oft-heard call for further research

On each count meta-analysis offers a superior alternative.

For more, see The Essential Guide to Effect Sizes.

What is meta-analysis?

May 30, 2010

Meta-analysis, literally the statistical analysis of statistical analyses, describes a set of procedures for systematically reviewing the research examining a particular effect and combining the results of independent studies to estimate the population effect size.

By pooling study-specific estimates of a common effect size and adjusting those estimates for sampling and measurement error, a meta-analyst can generate a weighted mean estimate of the effect size that normally reflects the true population effect size more accurately than any of the individual estimates on which it is based.

How is this possible?

To reduce the variation attributable to sampling error, estimates obtained from small samples are given less weight than estimates obtained from large samples.
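A couple of lines of Python make the effect of sample-size weighting visible. Both studies here are invented purely for illustration:

```python
# Two hypothetical estimates of the same correlation:
# a large study (N = 500, r = .10) and a small one (N = 20, r = .50).
estimates = [(500, 0.10), (20, 0.50)]

# An unweighted mean treats both studies as equally informative.
unweighted = sum(r for _, r in estimates) / len(estimates)

# Weighting by N lets the large, low-sampling-error study dominate.
weighted = sum(n * r for n, r in estimates) / sum(n for n, _ in estimates)

print(round(unweighted, 3))  # 0.3
print(round(weighted, 3))    # 0.115
```

The weighted mean sits much closer to the large study's estimate, because that estimate carries far less sampling error.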

To reduce the effects of measurement error, estimates are sometimes adjusted by dividing each study’s effect size by the square root of the reliability of the measure(s) used in that study (usually Cronbach’s alpha). Estimates obtained from less reliable measures are thus adjusted upwards to compensate.
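A minimal sketch of this correction in Python, assuming an invented study with an observed r of .30 measured on a scale with a Cronbach’s alpha of .80:

```python
import math

# Hypothetical study: observed r = .30, measured with a scale
# whose reliability (Cronbach's alpha) is .80.
r_observed = 0.30
alpha = 0.80

# Correction for attenuation: divide the observed effect size
# by the square root of the measure's reliability.
r_corrected = r_observed / math.sqrt(alpha)
print(round(r_corrected, 3))  # 0.335
```

When both variables are measured imperfectly, the divisor is the square root of the product of the two reliabilities; the single-measure version above is the simplest case.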

There are at least three reasons for doing a meta-analysis.

Can you give me three good reasons for doing a meta-analysis?

May 30, 2010

1.    Meta-analysis is a superior alternative to the narrative review when reviewing past research. At best, a narrative review may be able to inform a conclusion about the direction of an effect. But a meta-analysis will provide you with a point estimate of the effect size and a confidence interval quantifying the precision of the estimate. Meta-analysis will normally permit you to reach a conclusion even when the underlying data come from dissimilar studies reporting conflicting conclusions.

2.    A prospective power analysis will help you determine your target sample size, but a power analysis is only as valid as the estimate of the anticipated effect size on which it is based. A meta-analytic review of past research will often be the best way to inform expectations about likely effect sizes.

3.    Meta-analysis can be used to test hypotheses that are too big to be tested at the level of an individual study. For example, a meta-analysis may examine the effects of contextual moderators such as different research settings. A meta-analysis can thus signal promising directions for further theoretical development.
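To make the second reason concrete, here is a sketch of a power calculation for a correlation using the standard Fisher-transformation approximation. The target effect size (r = .31), alpha and power levels are all illustrative assumptions, not values from any particular study:

```python
import math
from statistics import NormalDist

# Suppose a meta-analysis suggests a population correlation of about .31.
r = 0.31
alpha, power = 0.05, 0.80

# Standard approximation for correlations: Fisher-transform r,
# then n = ((z_crit + z_power) / z_r)^2 + 3.
z_r = math.atanh(r)
z_crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
z_power = NormalDist().inv_cdf(power)
n = math.ceil(((z_crit + z_power) / z_r) ** 2 + 3)
print(n)  # 80
```

A better meta-analytic estimate of r feeds directly into a better-targeted sample size.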

For more, see The Essential Guide to Effect Sizes.

Meta-analysis seems complicated and I’m an old dog. Why are you so convinced I can learn this new trick?

May 30, 2010

I used to teach PhD students with zero experience how to do a meta-analysis in a single class. Admittedly it was a three hour class, but by the end of it students were doing meta-analysis all on their own with no problems at all.

Meta-analysis is conceptually easy. It will take you about 2 minutes to follow the simple example in the next post.

It’s true that not everyone will want to run a full-blown meta-analysis. But learning to think meta-analytically is an essential skill for any researcher engaged in replication research, theoretical development or who is simply trying to draw conclusions from past work.

For more, see The Essential Guide to Effect Sizes, chapter 5.

Can you show me how to do meta-analysis in just 2 minutes?

May 30, 2010

Most textbooks are written to impress rather than instruct. Admittedly, terms like attenuation multiplier, random-effects analysis and Q statistic can be intimidating to the uninitiated. But here’s the secret: basic meta-analysis is easy. If you can add, subtract, multiply and divide, you can do meta-analysis.

Let me prove it by running you through a basic meta-analysis of the following 4 studies:

Study 1: N = 60, r = -.30, p = .02
Study 2: N = 240, r = -.50, p < .001
Study 3: N = 20, r = .05, p = .83
Study 4: N = 60, r = .30, p = .02

where N refers to the sample size, r refers to the effect size estimate expressed in the correlational metric, and p refers to the p value or statistical significance of each study’s result.

First off, note how we have two positive results (.05 and .30) and two negative results (-.30 and -.50). If we were reviewing this body of work using a narrative summary, we would find it difficult to come to any conclusion. Half the studies say there’s a positive effect; half say there’s a negative effect. We might think the available evidence is inconclusive. We would be wrong.

We might also note that three of the results are statistically significant, while the result obtained from Study 3 is not. A narrative reviewer might be tempted to ignore this statistically nonsignificant result, but to the meta-analyst all the evidence is important. What is the evidence? It is each study’s independent estimate of the common effect size (the four rs). P values are not evidence, so for our simple meta-analysis we will ignore them altogether.

Our goal is to calculate a weighted mean estimate of the effect size. To do this we will weight each of the four r’s by their respective sample sizes. Why? Because results obtained from bigger samples will be less tainted by sampling error and therefore should be given more emphasis in our analysis.

To calculate a weighted mean effect size, we multiply the N by the r for each study, sum the lot, then divide the result by the combined sample size (N1 + N2 + N3 + N4), like this:

(60 x -.30) + (240 x -.50) + (20 x .05) + (60 x .30)
----------------------------------------------------
60 + 240 + 20 + 60

= -119 / 380

= -.31

This result suggests that the population effect is negative in direction and medium in size according to Cohen’s effect size conventions. This is a far more definitive conclusion than what we could have reached using a narrative review.
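The arithmetic above takes only a few lines of code. Here is the same calculation as a sketch in Python:

```python
# The four studies: (sample size N, effect size r).
studies = [(60, -0.30), (240, -0.50), (20, 0.05), (60, 0.30)]

# Weighted mean effect size: sum of N * r, divided by the total N.
total_n = sum(n for n, _ in studies)
weighted_mean = sum(n * r for n, r in studies) / total_n

print(round(weighted_mean, 2))  # -0.31
```

Swap in your own (N, r) pairs and the same two lines give you a basic weighted mean effect size.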

We could then go on and calculate a standard error and use this to estimate a 95% confidence interval. In this case the interval ranges from -.60 to -.02. It’s not particularly precise, but as it excludes the null value of zero we could say – if we felt the need to – that our mean effect size estimate is statistically significant.

And that’s meta-analysis in a nutshell!

Of course if we were doing this for real we might want to go further and consider whether our four studies are estimating not one, but a sample of population effect sizes. If so, we would need to adopt a slightly more complicated random effects procedure to account for the variability in the sample of parameters (which I actually did when calculating the standard error and CI above).
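For comparison, here is a sketch of one standard fixed-effect calculation (not the random-effects procedure used for the interval above) based on Fisher’s z transformation: each r is transformed, weighted by n − 3, averaged, and the result back-transformed. Because it ignores between-study variability, it produces a narrower interval (roughly -.42 to -.24) and a slightly different point estimate than the simple N-weighted mean:

```python
import math

# The four studies: (sample size N, effect size r).
studies = [(60, -0.30), (240, -0.50), (20, 0.05), (60, 0.30)]

# Fisher's z: transform each r; the sampling variance of z is
# 1 / (n - 3), so n - 3 serves as the inverse-variance weight.
weights = [n - 3 for n, _ in studies]
zs = [math.atanh(r) for _, r in studies]

z_mean = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
se = math.sqrt(1 / sum(weights))

# 95% CI in the z metric, then back-transform to the r metric.
lo, hi = math.tanh(z_mean - 1.96 * se), math.tanh(z_mean + 1.96 * se)
print(round(math.tanh(z_mean), 2), round(lo, 2), round(hi, 2))  # -0.34 -0.42 -0.24
```

The gap between this interval and the wider random-effects one is a useful reminder that the choice between fixed- and random-effects models matters.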

For a plain-English introduction to meta-analysis that covers both fixed- and random-effects procedures, see The Essential Guide to Effect Sizes (chapters 5 and 6).