A common misperception is that *p* = .05 means there is a 5% probability that the observed result arose by chance. The correct interpretation is that there is a 5% probability of getting a result this large (or larger) *if* the null hypothesis is true, that is, if the true effect size is zero.

## What is a *p* value?

A *p* value is the answer to the question: if the null hypothesis were true, how likely would a result at least this extreme be? A low *p* says the observed result would be highly unlikely under the null, which is taken as grounds for rejecting the null hypothesis.
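As a sketch of this definition, consider a hypothetical coin-flip example (not from the original post): under the null hypothesis of a fair coin, the one-sided *p* for observing 9 or more heads in 10 flips is simply the tail probability of the binomial distribution.

```python
from math import comb

def binomial_p_value(heads: int, flips: int) -> float:
    """One-sided p value: the probability of observing `heads` or more
    heads in `flips` tosses, computed under the null p(heads) = 0.5."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# 9 heads in 10 flips: p = (10 + 1) / 1024, about .011
print(binomial_p_value(9, 10))
```

Note that the calculation is entirely conditional on the null being true; it says nothing about the probability that the null itself is true.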

In substantive terms, a *p* value tells us very little. Because the *p* value is a confounded index, reflecting both the size of the effect and the size of the sample, it is never a good idea to interpret results on the basis of *p* values alone.
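A minimal sketch of this confounding, assuming a one-sample z-test (my illustration, not the post's): hold the standardized effect size *d* fixed and the two-sided *p* shrinks as the sample grows, so the very same effect can be "non-significant" at *n* = 10 and "highly significant" at *n* = 1,000.

```python
from math import erf, sqrt

def two_sided_p(effect_size: float, n: int) -> float:
    """Two-sided p for a one-sample z-test: z = d * sqrt(n),
    p = 2 * (1 - Phi(|z|)), where Phi is the standard normal CDF."""
    z = abs(effect_size) * sqrt(n)
    phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF via erf
    return 2.0 * (1.0 - phi)

# Same effect size (d = 0.3), very different p values as n grows:
for n in (10, 100, 1000):
    print(n, two_sided_p(0.3, n))
```

The *p* value moves even though the effect itself does not, which is why it cannot be read as a measure of effect size or practical importance.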

**Henk** (September 22, 2011, 12:31 am): In ‘The Essential Guide to Effect Sizes’ on page 4 you write that “[a] statistically significant result is one that is unlikely to be the result of chance.” This seems to contradict what you say here.

**Paul Ellis** (September 22, 2011, 4:14 pm): You are very sharp-eyed to pick this up. Well done! The line in the book was flagged by one of my reviewers as being potentially misleading, and in one sense it is. As I indicate above and elsewhere in the book, a statistically significant result is one that generates a conditional probability value that is lower than certain conventions, given a bunch of assumptions pertaining to the null and other things. But it gets a little tiresome to have to say all this every time you discuss the meaning of statistical significance. The point I was trying to make on p.4 is that statistical significance is not the same as practical significance.