Because the whole point of doing research is to learn something about real-world effects.
Editors are increasingly asking authors to provide effect size estimates because of the growing realization that tests of statistical significance don’t tell us what we really want to know. As Cohen (1990: 1310) famously said:
The primary product of a research inquiry is one or more measures of effect size, not p values.
In the bad old days, researchers looked at their p values to see whether their hypotheses were supported. Get a low p value and, voilà, you had a result. But p values are confounded indexes that conflate the size of an effect with the size of the sample, and so they tell us very little about the phenomena we study. At best, they tell us the direction of an effect, but they don’t tell us how big it is. And if we can’t say whether an effect is large or trivial in size, how can we interpret our result?
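To see the confound concretely, consider a quick simulation. The sketch below (Python; NumPy and SciPy are assumed to be available, and the group means, standard deviations, and sample sizes are invented purely for illustration) draws two groups whose true means differ by a trivial 0.05 standard deviations. With 20 cases per group the t test will typically be nonsignificant; with 20,000 cases per group the same trivial effect will typically yield p < .05, even though Cohen’s d stays near 0.05 in both cases.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def cohens_d(a, b):
        # Standardized mean difference using the pooled standard deviation.
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        return (a.mean() - b.mean()) / pooled_sd

    for n in (20, 20_000):
        # True effect is a trivial 0.05 SD difference between group means.
        a = rng.normal(0.05, 1.0, size=n)
        b = rng.normal(0.00, 1.0, size=n)
        t, p = stats.ttest_ind(a, b)
        print(f"n = {n:>6}: p = {p:.4f}, d = {cohens_d(a, b):.3f}")

The p value moves with the sample size; the effect size does not. That is why only the latter answers the question of how big the effect actually is.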
The estimation of effect sizes is essential to the interpretation of a study’s results. In the fifth edition of its Publication Manual, the American Psychological Association (APA) identified the “failure to report effect sizes” as one of seven common defects editors observed in submitted manuscripts. To help readers understand the importance of a study’s findings, authors were advised that “it is almost always necessary to include some index of effect” (APA 2001: 25).
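As a small illustration of such an index (the function name and the numbers here are hypothetical, not drawn from the APA manual), the following sketch recovers the effect size correlation r from a reported t statistic and its degrees of freedom via the standard conversion r = sqrt(t² / (t² + df)), which is often used when only test statistics are available.

    import math

    def t_to_r(t: float, df: int) -> float:
        # Effect size r recovered from an independent-samples t statistic.
        return math.sqrt(t**2 / (t**2 + df))

    # A study reporting t(38) = 2.10 implies an effect of roughly r = 0.32.
    print(round(t_to_r(2.10, 38), 2))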
Many editors have made similar calls, and it is thus increasingly common for submission guidelines either to encourage or to mandate the reporting of effect sizes.