A recent post on the Stata blog includes a quite helpful explication of effect size and of various measures of it. Effect size is important, in part, because results are often "assessed by statistical significance, usually that the p-value is less than 0.05. P-values and statistical significance, however, don’t tell us anything about practical significance" (emphasis added). The following hypo (from the post) illustrates:
"What if I told you that I had developed a new weight-loss pill and that the difference between the average weight loss for people who took the pill and the those who took a placebo was statistically significant? Would you buy my new pill? If you were overweight, you might reply, 'Of course! I’ll take two bottles and a large order of french fries to go!' Now let me add that the average difference in weight loss was only one pound over the year. Still interested? My results may be statistically significant but they are not practically significant. Or what if I told you that the difference in weight loss was not statistically significant — the p-value was 'only' 0.06 — but the average difference over the year was 20 pounds? You might very well be interested in that pill. The size of the effect tells us about the practical significance. P-values do not assess practical significance."
Finally, one more practical reason to attend to effect size is that a growing (albeit small) number of journals now require that it be reported.