In a comment published in the March 21, 2019, issue of Nature, three statisticians propose that researchers abandon the concept of statistical significance. More than 800 scientists signed on in support of this idea. What does this mean? Do we stop calculating p values? Do we abandon statistical analysis altogether?
Don’t panic; this is not entirely new. The misuse of p values has been recognized for a long time. When we judge a result as good or bad simply by whether it is statistically significant (i.e., p < 0.05), and then decide on that basis whether to publish it, publication bias arises. This practice can also lead two similar studies to opposite conclusions, thereby contributing to the low reproducibility of scientific research. For background on p values, see a previous LetPub article (A discussion of p values). Although many statisticians and researchers have long recognized the problems caused by the misconceptions and misuse of p values, no effective measures have stopped it. This recent comment in Nature may be the push the scientific community needs to tackle the problem.
We still need statistical methods to analyze our data. What the comment proposes is that we should not sort results into two categories (e.g., effective vs. not effective) based on whether the p value falls below 0.05, nor use any other statistic to make such a binary classification. We should view p values as continuous quantities. Most importantly, we should evaluate a result based on its scientific meaning rather than its statistical significance. The American Statistician (Volume 73, 2019, Issue sup1) has published a special issue on what to do as we move beyond p < 0.05. There is no magic replacement for p values. We need to choose the data analysis methods most appropriate for the scientific questions we want to address, and we need a better understanding of statistical tests so that we can interpret the results accurately.
We may see journals revising their guidelines on how to report statistical results, but probably not very soon, because the debate continues over whether statistical significance should be banned entirely (H. H. H. Adams, Nature 569, 336; 2019; H. Zhang, Nature 569, 336; 2019). In February 2015, the editors of Basic and Applied Social Psychology (BASP) banned the null hypothesis significance testing procedure (i.e., using p values or other statistics to declare statistical significance). A paper in The American Statistician special issue mentioned above assessed articles published in BASP in 2016 and found that the ban appeared to allow authors to overstate the conclusions of their studies, which could be misleading.
No matter which side of the debate you are on, here are some simple tips for you when it comes to reporting p values in your paper:
- Report the actual p values rather than just writing p < 0.05. Note that p can never be exactly 0 or 1, so if your statistical software returns such numbers, write p < 0.001 or p > 0.999 instead.
- Except in the sentence where you state your p value threshold for statistical significance, you can drop the word “significant.” When you report actual p values, it is redundant to say that a difference is significant. If you use the word in one of its everyday senses, choose another word so that readers are not confused about your intended meaning.
- Report all variables you have analyzed, not just those with p < 0.05. This gives your readers a comprehensive picture of your study.
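If you prepare results programmatically, the first tip can be enforced with a small formatting helper. The sketch below is a hypothetical illustration (the function name and cutoffs are our own, following the conventions described above), written in Python:

```python
def format_p(p: float) -> str:
    """Format a p value for reporting, per the tips above."""
    if p < 0.001:
        return "p < 0.001"  # software may print 0; never report p = 0
    if p > 0.999:
        return "p > 0.999"  # likewise, never report p = 1
    return f"p = {p:.3f}"   # report the actual value, e.g. p = 0.038

print(format_p(0.0382))  # p = 0.038
print(format_p(1.2e-5))  # p < 0.001
print(format_p(0.9999))  # p > 0.999
```

Using such a helper everywhere in your analysis scripts keeps the reported p values consistent and prevents impossible values like "p = 0.000" from slipping into a manuscript.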