Why can’t I just report the R-square? That’s easy enough, isn’t it?

When people who are unfamiliar with effect sizes learn that various effect size indexes such as R^{2} are generated automatically by SPSS or Stata, the temptation is to report the R^{2} and leave it at that.

But the coefficient of multiple determination, or R^{2}, may not be a particularly useful index as it combines the effects of several predictors. If you are interested in the effect of a specific predictor, rather than the omnibus effect arising from all the variables in your model, you might want to consider other options such as the relevant beta coefficient (standardized or unstandardized, depending on what you plan to do with it).
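The difference between overall model fit and per-predictor effects can be made concrete with a short sketch (plain NumPy, with made-up illustrative data and coefficients; the variable names are ours, not from any particular study). It fits a two-predictor regression, computes R^{2}, and rescales the unstandardized slopes into standardized betas:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two correlated predictors and an outcome (illustrative values only).
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 2.0 + 1.5 * x1 + 0.8 * x2 + rng.normal(size=n)

# Ordinary least squares via the design matrix (intercept + predictors).
X = np.column_stack([np.ones(n), x1, x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)  # b = [intercept, b1, b2], unstandardized

# R-squared: proportion of Y's variance explained by ALL predictors jointly.
y_hat = X @ b
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

# Standardized betas: unstandardized slope scaled by sd(X) / sd(Y),
# i.e. the expected change in Y (in SD units) per SD change in that predictor.
betas = b[1:] * np.array([x1.std(ddof=1), x2.std(ddof=1)]) / y.std(ddof=1)
```

Note how `r_squared` is a single omnibus number, while `betas` gives one effect per predictor, which is the distinction drawn above.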

Another option is to report the relevant semipartial or part correlation coefficient, which represents the correlation between Y and the part of X_{1} that is independent of the other predictors (X_{2}, … X_{k}); its square gives the proportion of variance in Y uniquely attributable to X_{1}. (The change in Y for a one-unit change in X_{1}, holding the other predictors constant, is given by the unstandardized regression coefficient, not by a correlation.) Although both the part and partial correlations can be calculated using SPSS and other statistical programs, the former is typically used when “apportioning variance” among a set of independent variables (Hair et al. 1998: 190).
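The semipartial correlation can be computed by hand in a few lines, which also makes its meaning transparent: residualize X_{1} on the other predictors, then correlate Y with what remains. This is a minimal sketch with simulated data (values and names are illustrative assumptions, not from any source):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x2 = rng.normal(size=n)
x1 = 0.6 * x2 + rng.normal(size=n)
y = 1.0 + 0.9 * x1 + 0.4 * x2 + rng.normal(size=n)

# Residualize x1 on the other predictor(s): the residuals are the part of x1
# not shared with x2.
Z = np.column_stack([np.ones(n), x2])
coef, *_ = np.linalg.lstsq(Z, x1, rcond=None)
x1_resid = x1 - Z @ coef

# Semipartial (part) correlation: correlate Y with the unique part of x1.
# Control is applied on the predictor side only (unlike the partial correlation,
# which also residualizes Y).
sr = np.corrcoef(y, x1_resid)[0, 1]

# sr**2 is the share of Y's variance uniquely attributable to x1 --
# equivalently, the drop in R-squared if x1 were removed from the model.
```

Squaring `sr` is what connects the part correlation to the “apportioning variance” usage mentioned above.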

For a good introduction to interpreting coefficients in non-linear regression models, see Shaver (2007).

This entry was posted on Monday, May 31st, 2010 at 2:13 am and is filed under effect size.

“The primary product of a research inquiry is one or more measures of effect size, not p values.”
~ Jacob Cohen

“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.”
~ Gene Glass