It’s been a long time since I studied statistics. Remind me, what does a p value represent?

A common misperception is that p = .05 means there is a 5% probability that the observed result was due to chance. The correct interpretation is conditional: there is a 5% probability of getting a result this large (or larger) if the true effect size is zero.

A p value is the answer to the question: if the null hypothesis were true, how likely would a result like this be? A low p says "highly unlikely", which is taken as grounds for rejecting the null.
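As a rough illustration (not from the original post), here is a minimal Python simulation of this conditional logic. It assumes a two-tailed z-style test with a hypothetical observed statistic of 1.96: under the null, we draw many statistics and count how often they come out at least as extreme as the one observed.

```python
import random

# Sketch: estimate a p value by simulation, assuming a two-tailed z-style test.
# Under the null hypothesis, the test statistic follows a standard normal
# distribution; the p value is the share of null-world results at least as
# extreme as the observed one.
random.seed(0)

observed_z = 1.96      # hypothetical observed test statistic
n_sims = 100_000       # number of simulated null-world results

extreme = sum(
    1 for _ in range(n_sims)
    if abs(random.gauss(0.0, 1.0)) >= abs(observed_z)
)
p_value = extreme / n_sims   # close to .05 for |z| = 1.96
```

The point of the sketch is that p is computed entirely inside the null world: it says nothing directly about how probable the null itself is.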

In substantive terms, a p value tells us very little. Because it is a confounded index, reflecting both the size of the effect and the size of the sample, it is not a good basis for interpreting results.
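The confounding can be shown with a short sketch (my illustration, assuming a simple one-sample z test in which the standardized effect d and sample size n jointly determine the test statistic): the same small effect is "non-significant" with a small sample and "highly significant" with a large one.

```python
import math

def two_tailed_p(d, n):
    """Two-tailed p for a one-sample z test of a standardized mean
    difference d with sample size n. Illustrates that p mixes the
    effect size and the sample size into one number."""
    z = d * math.sqrt(n)
    # erfc gives the two-tailed normal tail probability directly
    return math.erfc(abs(z) / math.sqrt(2))

# Identical (small) effect size, different sample sizes:
small_n = two_tailed_p(0.2, 20)     # d = .2, n = 20   -> p well above .05
large_n = two_tailed_p(0.2, 1000)   # d = .2, n = 1000 -> p far below .001
```

Nothing about the effect changed between the two calls; only n did. That is why a p value on its own cannot tell you whether an effect is big enough to matter.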

For better ways of interpreting results, see this.


This entry was posted on Sunday, May 30th, 2010 at 11:47 pm and is filed under p values.

2 Responses to It’s been a long time since I studied statistics. Remind me, what does a p value represent?

In ‘The Essential Guide to Effect Sizes’ on page 4 you write that “[a] statistically significant result is one that is unlikely to be the result of chance.” This seems to contradict what you say here.

You are very sharp-eyed to pick this up. Well done! The line in the book was flagged by one of my reviewers as being potentially misleading, and in one sense it is. As I indicate above, and elsewhere in the book, a statistically significant result is one that generates a conditional probability lower than certain conventional thresholds, given a set of assumptions about the null and other things. But it gets a little tiresome to say all this every time you discuss the meaning of statistical significance. The point I was trying to make on p.4 is that statistical significance is not the same as practical significance.

“The primary product of a research inquiry is one or more measures of effect size, not p values.”
~ Jacob Cohen


“Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them.”
~ Gene Glass
