How do researchers confuse statistical with substantive significance?

Researchers can confuse statistical significance with substantive significance in one of two ways:

  1. Results that are found to be statistically significant are interpreted as if they were practically meaningful. This happens, for example, when a researcher describes a statistically significant result as being “significant” or “highly significant” in the everyday sense of those words.
  2. Results that are statistically nonsignificant are interpreted as evidence of no effect, even in the face of evidence to the contrary (e.g., a noteworthy effect size).

In some settings, statistical significance will be completely unrelated to substantive significance. It is entirely possible for a result to be statistically significant yet trivial, or statistically nonsignificant yet important.
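To make this concrete, here is a minimal sketch of two simulated studies analysed with an independent-samples t test (using NumPy and SciPy; the effect sizes and sample sizes are arbitrary, purely illustrative choices, not drawn from any published study). Study A pairs a trivial difference of about 0.02 standard deviations with a huge sample; Study B pairs a large difference of about 0.8 standard deviations with a tiny sample.

```python
# Illustrative sketch: statistical vs. substantive significance.
# The effect sizes and sample sizes below are made-up illustrative values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Study A: a trivial difference (~0.02 SD) measured on 50,000 cases per group.
a1 = rng.normal(0.00, 1.0, 50_000)
a2 = rng.normal(0.02, 1.0, 50_000)

# Study B: a large difference (~0.8 SD) measured on 12 cases per group.
b1 = rng.normal(0.0, 1.0, 12)
b2 = rng.normal(0.8, 1.0, 12)

for label, x, y in [("Study A (trivial effect, huge n)", a1, a2),
                    ("Study B (large effect, tiny n) ", b1, b2)]:
    t, p = stats.ttest_ind(x, y)                       # two-sample t test
    pooled_sd = np.sqrt((x.var(ddof=1) + y.var(ddof=1)) / 2)
    d = (y.mean() - x.mean()) / pooled_sd              # Cohen's d
    print(f"{label}: d = {d:.2f}, p = {p:.4f}")

# With a sample this large, Study A will usually come out "significant"
# (p < .05) despite a negligible effect, while Study B can miss the
# p < .05 cutoff despite a substantively large effect.
```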

Researchers get confused about these things when they misattribute meaning to p values. Remember, a p value is a confounded index: a statistically significant p could reflect a large effect, a large sample size, or both. Judgments about substantive significance should never be based on p values.
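One way to see the confound: for a two-group comparison with equal group sizes, the t statistic is roughly d × √(n/2), where d is the standardized mean difference and n is the per-group sample size. The sketch below (an illustration of my own, with arbitrary d and n values) shows three very different effect sizes yielding roughly the same p value.

```python
# Sketch of the confound: p depends on both effect size and sample size,
# so very different (d, n) pairs can produce nearly the same p value.
import numpy as np
from scipy import stats

def approx_p(d, n_per_group):
    """Approximate two-sided p for an equal-n independent-samples t test,
    given a standardized mean difference d and the per-group sample size."""
    t = d * np.sqrt(n_per_group / 2)
    df = 2 * n_per_group - 2
    return 2 * stats.t.sf(abs(t), df)

for d, n in [(0.80, 20), (0.20, 300), (0.05, 5000)]:
    print(f"d = {d:.2f}, n per group = {n:>5}, p ≈ {approx_p(d, n):.4f}")

# All three p values land in roughly the .01-.02 range, even though the
# effects range from large (d = 0.80) to negligible (d = 0.05). The p
# value alone cannot tell you which situation you are in.
```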

It is essential that researchers learn to distinguish between statistical and substantive significance. Failure to do so leads to Type I and Type II errors, wastes resources, and potentially misleads further research on the topic.

