Statistical inference is an orderly means for drawing conclusions about a large number of events on the basis of observations collected on a sample of them. As such, it forms an important part of scientific inquiry.
All measures of variable biological parameters should be reported with statistical measures of this variability. In general, the sample mean and standard deviation or standard error of the mean (always appropriately labeled!) should be stated. Medians and ranges may also be given, particularly if the reported data show a strong departure from normality.
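The summary statistics recommended above are straightforward to compute. A minimal sketch in Python, using the standard library and a set of invented measurements for illustration:

```python
# Sketch: computing the summary statistics a Results section should report.
# The measurements below are invented for illustration.
import statistics as st

weights = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3]    # hypothetical body masses (g)

mean = st.mean(weights)
sd = st.stdev(weights)                      # sample standard deviation
sem = sd / len(weights) ** 0.5              # standard error of the mean
median = st.median(weights)

print(f"mean = {mean:.2f} g, SD = {sd:.2f}, SEM = {sem:.2f}, "
      f"median = {median:.2f} g, range = {min(weights)}-{max(weights)} g")
```

Whichever measure is chosen, labeling it (SD versus SEM) is essential, since the two can differ by a large factor for big samples.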
As a biological or medical scientist writing for others in your field, be sure that both the text and table emphasize biology or medicine, not statistics. Statistical methods do not need elaborate presentation, nor do the mathematics of the test results need to be detailed. A simple statement of the chosen test and probability level is usually sufficient. Reference a basic text detailing the procedure if you feel readers might need it.
Poor: To determine whether the two species differed in their egg cannibalism rate (Table 1), we used the Fisher Exact Probability Test, in which P = (A + B)!(C + D)!(A + C)!(B + D)! / (N!A!B!C!D!), to obtain a P = 0.56, which was not significant.
Better: The differences in the egg cannibalism rates of the two species (Table 1) were not statistically significant (Fisher Exact Probability Test, P > 0.05).
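The "Better" version reports only the test name and probability level; the computation itself belongs in an analysis script, not the text. A hedged sketch using SciPy's implementation of the test, with an invented 2 x 2 contingency table:

```python
# Sketch: running a Fisher exact test with SciPy.
# The counts below are hypothetical, invented for illustration.
from scipy.stats import fisher_exact

# 2 x 2 contingency table: rows = species, columns = cannibalized / not
table = [[10, 25],   # species A
         [8, 29]]    # species B

odds_ratio, p_value = fisher_exact(table)
print(f"Fisher Exact Probability Test, P = {p_value:.2f}")
```

Only the final line's style of result would appear in the manuscript; the factorial formula never needs to.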
Whenever quantitative differences in data are reported that are found not to be due to chance alone, they should be accompanied by statistical statements resulting from appropriate tests described in the Methods section. In a table, these often are placed in a footnote. (Some journals use letters for footnotes, others symbols, often with a prescribed sequence.) In the text, statistical results are usually presented in a concise style consistent with standard statistics books (ANOVA: F = 7.98, P < 0.02; Spearman rank correlation: r_s = 0.81, N = 12, P < 0.01).
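The concise reporting style shown above can be produced directly from the analysis. A minimal sketch for the Spearman case, using SciPy and twelve invented paired measurements:

```python
# Sketch: computing a Spearman rank correlation and formatting the result
# in the concise text style shown above. Data are invented for illustration.
from scipy.stats import spearmanr

x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
y = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11]   # nearly monotone with x

rho, p_value = spearmanr(x, y)
n = len(x)

# Report an exact P or a conventional threshold, per journal style.
if p_value < 0.01:
    print(f"Spearman rank correlation: r_s = {rho:.2f}, N = {n}, P < 0.01")
else:
    print(f"Spearman rank correlation: r_s = {rho:.2f}, N = {n}, "
          f"P = {p_value:.2f}")
```

Note that the sample size N accompanies the coefficient, so readers can judge the result without consulting the Methods.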
Guard against statements that seem to imply value judgments about the results of statistical analyses with phrases like "nearly reached significance." Do not describe differences that are not statistically significant as insignificant! Likewise, avoid using the term significant to describe results when no statistical tests were run and you merely mean "important."
When statistical analysis was a tedious, largely hand-calculated affair, many a scientist shunned it entirely. Now, with the popularity of statistical analysis software, scientists face a strong temptation to overuse statistics, reporting strings of similar analyses or "massaging" data to an unreasonable degree. Almost nothing is more transparent than reliance upon prepackaged analyses without a corresponding understanding of their meaning.