Meta-analysis, literally the statistical analysis of statistical analyses, describes a set of procedures for systematically reviewing the research examining a particular effect and combining the results of independent studies to estimate the population effect size.

By pooling study-specific estimates of a common effect size and adjusting those estimates for sampling and measurement error, a meta-analyst can generate a weighted mean estimate of the effect size that normally reflects the true population effect size more accurately than any of the individual estimates on which it is based.

How is this possible?

To reduce the variation attributable to sampling error, estimates obtained from small samples are given less weight than estimates obtained from large samples.
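This weighting can be sketched in a few lines. The sketch below uses hypothetical study data and weights each effect size by its sample size (one common scheme; inverse-variance weights are another), so noisier small-sample estimates pull the pooled mean less:

```python
# Minimal sketch: pooling study effect sizes with sample-size weights,
# so small-sample (noisier) estimates contribute less to the mean.
# The study data below are hypothetical.

def pooled_effect(effects, ns):
    """Sample-size-weighted mean of study effect sizes."""
    total_n = sum(ns)
    return sum(d * n for d, n in zip(effects, ns)) / total_n

effects = [0.60, 0.20, 0.45]   # study-specific effect size estimates
ns      = [30, 300, 120]       # corresponding sample sizes

# The small n=30 study reports the largest effect (0.60), but its low
# weight keeps the pooled estimate close to the larger studies.
print(round(pooled_effect(effects, ns), 3))  # → 0.293
```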

To reduce the effects of measurement error, estimates are sometimes adjusted by dividing each study’s effect size by the square root of the reliability of the measure(s) used in that study (usually Cronbach’s alpha). Estimates obtained from less reliable measures are thus adjusted upwards to compensate for attenuation.
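The correction described above is a one-liner. A minimal sketch, using hypothetical numbers (an observed correlation of .30 and a reliability of .64):

```python
import math

def correct_for_attenuation(observed, reliability):
    """Divide the observed effect size by the square root of the
    measure's reliability (e.g. Cronbach's alpha) to estimate the
    effect size free of measurement error."""
    return observed / math.sqrt(reliability)

# Hypothetical study: observed r = .30, alpha = .64
# .30 / sqrt(.64) = .30 / .80 = .375
print(correct_for_attenuation(0.30, 0.64))  # → 0.375
```

Note how the lower the reliability, the larger the upward adjustment: with alpha = 1.0 (a perfectly reliable measure) the estimate is unchanged.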

There are at least three reasons for doing a meta-analysis.

This entry was posted on Sunday, May 30th, 2010 at 10:52 pm and is filed under meta-analysis.

“The primary product of a research inquiry is one or more measures of effect size, not p values.”
~ Jacob Cohen

“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.”
~ Gene Glass
