Matches in SemOpenAlex for { <https://semopenalex.org/work/W4245472950> ?p ?o ?g. }
Showing items 1 to 42 of 42, with 100 items per page.
- W4245472950 endingPage "F171" @default.
- W4245472950 startingPage "F169" @default.
- W4245472950 abstract "EDITORIALGuidelines for reporting statistics in journals published by the American Physiological SocietyDouglas Curran-Everett, and Dale J. BenosDouglas Curran-Everett, and Dale J. BenosPublished Online:01 Aug 2004https://doi.org/10.1152/ajprenal.00186.2004MoreSectionsPDF (38 KB)Download PDF ToolsExport citationAdd to favoritesGet permissionsTrack citations ShareShare onFacebookTwitterLinkedInEmailWeChat Concepts and procedures in statistics are inherent to publications in science. Based on the incidence of standard deviations, standard errors, and confidence intervals in articles published by the American Physiological Society (APS), however, many scientists appear to misunderstand fundamental concepts in statistics (9). In addition, statisticians have documented that statistical errors are common in the scientific literature: roughly 50% of published articles have at least one error (1, 2). This misunderstanding and misuse of statistics jeopardizes the process of scientific discovery and the accumulation of scientific knowledge.In an effort to improve the caliber of statistical information in articles they publish, most journals have policies that govern the reporting of statistical procedures and results. These were the previous guidelines for reporting statistics in the Information for Authors (3) provided by the APS: 1) In the materials and methods, authors were told to “describe the statistical methods that were used to evaluate the data.” 2) In the results, authors were told to “provide the experimental data and results as well as the particular statistical significance of the data.” 3) In the discussion, authors were told to “Explain your interpretation of the data… .” To an author unknowing about statistics, these guidelines gave almost no help.In its 1988 revision of Uniform Requirements (see Ref. 13, p. 260), the International Committee of Medical Journal Editors issued these guidelines for reporting statistics: Describe statistical methods with enough detail to enable a knowledgeable reader with access to the original data to verify the reported results. When possible, quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals). Avoid sole reliance on statistical hypothesis testing, such as the use of P values, which fails to convey important quantitative information. … Give numbers of observations. … References for study design and statistical methods should be to standard works (with pages stated) when possible rather than to papers where designs or methods were originally reported. Specify any general-use computer programs used.The current guidelines issued by the Committee (see Ref. 14, p. 39) are essentially identical. To an author unknowing about statistics, these Uniform Requirements guidelines give only slightly more help.In this editorial, we present specific guidelines for reporting statistics.11Discussions of common statistical errors, underlying assumptions of common statistical techniques, and factors that impact the choice of a parametric or the equivalent nonparametric procedure fall outside the purview of this editorial. These guidelines embody fundamental concepts in statistics; they are consistent with the Uniform Requirements (14) and with the upcoming 7th edition of Scientific Style and Format, the style manual written by the Council of Science Editors (6) and used by APS Publications. 
We have written this editorial to provide investigators with concrete steps that will help them design an experiment, analyze the data, and communicate the results. In so doing, we hope these guidelines will help improve and standardize the caliber of statistical information reported throughout journals published by the APS.

GUIDELINES

The guidelines address primarily the reporting of statistics in the materials and methods, results, and discussion sections of a manuscript. Guidelines 1 and 2 address issues of experimental design.

MATERIALS AND METHODS

Guideline 1. If in doubt, consult a statistician when you plan your study.

The design of an experiment, the analysis of its data, and the communication of the results are intertwined. In fact, design drives analysis and communication. The time to consult a statistician is when you have defined the experimental problem you want to address: a statistician can help you design an experiment that is appropriate and efficient. Once you have collected the data, a statistician can help you assess whether the assumptions underlying the analysis were satisfied. When you write the manuscript, a statistician can help you ensure your conclusions are justified.

Guideline 2. Define and justify a critical significance level α appropriate to the goals of your study.

For any statistical test, if the achieved significance level P is less than the critical significance level α, defined before any data are collected, then the experimental effect is likely to be real (see Ref. 9, p. 782). By tradition, most researchers define α to be 0.05: that is, 5% of the time they are willing to declare an effect exists when it does not. These examples illustrate that α = 0.05 is sometimes inappropriate. If you plan a study in the hopes of finding an effect that could lead to a promising scientific discovery, then α = 0.10 is appropriate. Why? When you define α to be 0.10, you increase the probability that you find the effect if it exists. In contrast, if you want to be especially confident of a possible scientific discovery, then α = 0.01 is appropriate: only 1% of the time are you willing to declare an effect exists when it does not. A statistician can help you satisfy this guideline (see Guideline 1).

Guideline 3. Identify your statistical methods, and cite them using textbooks or review papers. Cite separately the commercial software you used to do your statistical analysis.

This guideline sounds obvious, but some researchers fail to identify the statistical methods they used. (We include resources that may be useful for general statistics (15), regression analyses (10), and nonparametric procedures (5).) When you follow Guideline 1, you can be confident that your statistical methods were appropriate; when you follow this guideline, your reader can be confident also. It is important that you identify separately the commercial software you used to do your statistical analysis.

Guideline 4. Control for multiple comparisons.

Many physiological studies examine the impact of an intervention on a set of related comparisons. In this situation, the probability that you reject at least one true null hypothesis in the set increases, often dramatically. A multiple comparison procedure (examples include the Newman-Keuls, Bonferroni, and least significant difference procedures; see Ref. 8) protects against this kind of mistake. The false discovery rate procedure may be the best practical solution to the problem of multiple comparisons (see Ref. 8, p. R6–R7).

Suppose you study the concurrent impact of some chemical on response variables A, B, C, D, and E. For each of these five variables, rank the achieved significance levels P1 ≤ P2 ≤ … ≤ P5 and compare each with its false discovery rate critical significance level d*i (see Ref. 8, p. R6–R7). Starting with the largest P value and stepping down, find the first i for which Pi ≤ d*i; that null hypothesis and the remaining i − 1 null hypotheses are rejected. In this example, because P2 = 0.017 ≤ d*2 = 0.020, null hypotheses 2 → 1 are rejected. In other words, after controlling for multiple comparisons using the false discovery rate procedure, only the differences in variables B and C remain statistically significant. The false discovery rate procedure is useful also in the context of pairwise comparisons (see Ref. 8, p. R7). The sketch below walks through the arithmetic.
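Here is a minimal sketch of the step-up false discovery rate procedure just described (the Benjamini-Hochberg form discussed in Ref. 8). The P values for the five variables are hypothetical, chosen only so that P2 = 0.017 ≤ d*2 = 0.020 as in the example; the critical values assume k = 5 comparisons and a false discovery rate of 0.05.

```python
# Sketch of the step-up false discovery rate procedure (Guideline 4).
# The P values below are hypothetical illustrations, not data from the text.

alpha = 0.05  # false discovery rate

p_values = {"A": 0.30, "B": 0.012, "C": 0.017, "D": 0.24, "E": 0.09}

# Rank the k hypotheses from smallest to largest P value.
ordered = sorted(p_values.items(), key=lambda item: item[1])
k = len(ordered)

# Critical values d*_i = (i / k) * alpha for i = 1..k.
crit = [(i + 1) / k * alpha for i in range(k)]

# Step up: find the largest rank i with P_(i) <= d*_i; reject ranks 1..i.
largest = max((i for i, (_, p) in enumerate(ordered) if p <= crit[i]),
              default=-1)

for i, (name, p) in enumerate(ordered):
    status = "reject" if i <= largest else "retain"
    print(f"{name}: P = {p:.3f}, d* = {crit[i]:.3f} -> {status}")
```

Note that hypothesis B is rejected even though its own P value (0.012) exceeds d*1 = 0.010: once the step-up condition is met at rank 2, every smaller-ranked hypothesis is rejected as well, which is exactly the "2 → 1" behavior in the worked example.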
RESULTS

Guideline 5. Report variability using a standard deviation.

Because it reflects the dispersion of individual sample observations about the sample mean, a standard deviation characterizes the variability of those observations. In contrast, because it reflects the theoretical dispersion of sample means about some population mean, a standard error of the mean characterizes uncertainty about the true value of that population mean. Nevertheless, the overwhelming majority of original articles published by the APS report standard errors as apparent estimates of variability (9).

To see why a standard error is an inappropriate estimate of variability among observations, suppose you draw an infinite number of samples, each with n independent observations, from some normal distribution. If you treat the sample means as observations, then the standard deviation of these means is the standard error of the sample mean (Fig. 1). A standard error is useful primarily because of its role in the calculation of a confidence interval.

Fig. 1. The difference between standard deviation and standard error of the mean. Suppose random variable Y is distributed normally with mean μ = 0 and standard deviation σ = 20 (bottom). If you draw from this population an infinite number of samples, each with n observations, then the sample means will be distributed normally (top). The average of this distribution of sample means is the population mean μ = 0. If n = 16, then the standard deviation SD{ȳ} of this distribution of sample means is SD{ȳ} = σ/√n = 20/√16 = 5, known also as the standard error of the sample mean, SE{ȳ}. (See Ref. 9, p. 779–781.) Its dependence on sample size makes the standard error of the mean an inappropriate estimate of variability among observations.

Most journals report a standard deviation using a ± symbol. The ± symbol is superfluous: a standard deviation is a single positive number, so it can be reported with notation of the form mean (SD) rather than mean ± SD. As of July 2004, articles published in APS journals will use this notation in accordance with Scientific Style and Format (6). This guideline applies also to a data graphic in which you want to depict variability: report a standard deviation, not a standard error.
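The relation in Fig. 1 is easy to verify numerically. Here is a minimal simulation sketch, using a large but finite number of samples in place of the infinite number described above, with the figure's values μ = 0, σ = 20, and n = 16; the expected standard deviation of the sample means is 20/√16 = 5.

```python
# Simulation of Fig. 1: the standard deviation of many sample means
# approaches sigma / sqrt(n), the standard error of the sample mean.

import random
import statistics

mu, sigma, n = 0, 20, 16
num_samples = 100_000  # stand-in for the "infinite number of samples"

sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(num_samples)
]

print(f"SD of sample means: {statistics.stdev(sample_means):.2f}")
print(f"sigma / sqrt(n):    {sigma / n ** 0.5:.2f}")  # 20 / 4 = 5
```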
Guideline 6. Report uncertainty about scientific importance using a confidence interval.

A confidence interval characterizes uncertainty about the true value of a population parameter. For example, when you compute a confidence interval for a population mean, you assign bounds to the expected discrepancy between the sample mean ȳ and the population mean μ (see Ref. 9, p. 779–781).

The level of confidence in a confidence interval is based on the concept that you draw a large number of samples, each with n observations, from some population. Suppose you measure response variable Y in 200 random samples: you will obtain 200 different sample means and 200 different sample standard deviations. As a consequence, you will calculate 200 different 100(1 − α)% confidence intervals; you expect about 100(1 − α)% of these confidence intervals to include the actual value of the population mean.

How do you interpret a single confidence interval? If you calculate a 99% confidence interval for some population mean to be [−19, −3], then you can declare, with 99% confidence, that the population mean is included in the interval [−19, −3]. This guideline applies also to a data graphic in which you want to depict uncertainty: report a confidence interval.

Guideline 7. Report a precise P value.

A precise P value does two things: it communicates more information with the same amount of ink, and it permits each reader to assess a statistical result individually. Suppose the P values associated with the main results of your study are P = 0.057 and P = 0.57. You might be tempted to report each value as P > 0.05 or P = NS, but you can communicate that the interpretations of the two results differ (see Guideline 10) only if you report the precise P values.

Guideline 8. Report a quantity so the number of digits is commensurate with scientific relevance.

The resolution and precision of modern scientific instruments are remarkable, but it is unnecessary and distracting to report digits that have little scientific relevance. For example, suppose you measure blood pressure to within 0.01 mmHg and your sample mean is 115.73 mmHg. Do you report the sample mean as 115.73, as 115.7, or as 116 mmHg? Does a resolution smaller than 1 mmHg really matter? In contrast, a resolution of 0.001 units is essential for a variable like pH. This guideline is critical to the design of an effective table (11).

Guideline 9. In the Abstract, report a confidence interval and a precise P value for each main result.

DISCUSSION

Guideline 10. Interpret each main result by assessing the numerical bounds of the confidence interval and by considering the precise P value.

If either bound of the confidence interval is important from a scientific perspective, then the experimental effect may be large enough to be relevant. This is true whatever the statistical result (the P value) of the hypothesis test. If P < α, the critical significance level, then the experimental effect is likely to be real (see Ref. 9, p. 782).

How do you interpret a P value? Although P values have a limited role in data analysis, Table 1, adapted from Ref. 7, provides guidance. These interpretations are useful only if the power of the study was large enough to detect the experimental effect.

Table 1. Interpretation of P values

P Value          | Interpretation
P ≳ 0.10         | Data are consistent with a true zero effect.
0.05 ≲ P ≲ 0.10  | Data suggest there may be a true effect that differs from zero.
0.01 ≲ P ≲ 0.05  | Data provide good evidence that the true effect differs from zero.
P ≲ 0.01         | Data provide strong evidence that the true effect differs from zero.

The symbol ≳ means at or above; the symbol ≲ means at or below. Adapted from Ref. 7.
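A brief sketch ties Guidelines 6, 7, and 10 together: compute a 99% confidence interval and a precise P value for one main result, then judge both bounds of the interval against what matters scientifically. The data here are hypothetical (changes in some response after an intervention), and for simplicity the sketch uses the normal approximation; for a small sample, an interval based on the t distribution would be the usual choice.

```python
# Report a 99% confidence interval and a precise P value for one result
# (Guidelines 6, 7, and 10). Hypothetical data; normal approximation.

import statistics
from statistics import NormalDist

diffs = [-28, 4, -15, -22, -7, -18, 1, -12, -25, -9, -16, -3]  # n = 12

n = len(diffs)
mean = statistics.fmean(diffs)
se = statistics.stdev(diffs) / n ** 0.5  # standard error of the mean

# 99% confidence interval for the population mean.
z = NormalDist().inv_cdf(0.995)
lower, upper = mean - z * se, mean + z * se

# Precise two-sided P value for the null hypothesis mu = 0.
p = 2 * (1 - NormalDist().cdf(abs(mean / se)))

print(f"mean = {mean:.1f}, 99% CI [{lower:.1f}, {upper:.1f}], P = {p:.2g}")
```

With these hypothetical data the interval resembles the [−19, −3] example above, so each bound can be assessed for scientific importance exactly as Guideline 10 prescribes, alongside the precise P value rather than a bare "P < 0.05."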
SUMMARY

The specific guidelines listed above can be summarized by these general ones:
- Analyze your data using the appropriate statistical procedures, and identify these procedures in your manuscript (Guidelines 2–4).
- Report variability using a standard deviation, not a standard error (Guideline 5).
- Report a precise P value and a confidence interval when you present the result of an analysis (Guidelines 6–10).
- If in doubt, consult a statistician when you design your study, analyze your data, and communicate your findings (Guideline 1).

Mere adherence to guidelines for reporting statistics can never substitute for an understanding of concepts and procedures in statistics. Nevertheless, we hope these guidelines, when used with other resources (4, 8, 9, 11, 12, 14), will help improve the caliber of statistical information reported in articles published by the American Physiological Society.

We thank Matthew Strand and James Murphy (National Jewish Medical and Research Center, Denver, CO), Margaret Reich (Director of Publications and Executive Editor, American Physiological Society), and the Editors of the APS journals for their comments and suggestions.

REFERENCES
1. Altman DG. Statistics in medical journals: some recent trends. Stat Med 19: 3275–3289, 2000.
2. Altman DG and Bland JM. Improving doctors' understanding of statistics. J R Stat Soc Ser A 154: 223–267, 1991.
3. American Physiological Society. Manuscript sections. In: Information for Authors: Instructions for Preparing Your Manuscript [Online]. APS, Bethesda, MD. http://www.the-aps.org/publications/i4a/prep_manuscript.htm#manuscript_sections [March 2004].
4. Bailar JC III and Mosteller F. Guidelines for statistical reporting in articles for medical journals. Ann Intern Med 108: 266–273, 1988.
5. Conover WJ. Practical Nonparametric Statistics (2nd ed.). New York: Wiley, 1980.
6. Council of Science Editors, Style Manual Subcommittee. Scientific Style and Format: The CSE Manual for Authors, Editors, and Publishers (7th ed.). In preparation.
7. Cox DR. Planning of Experiments. New York: Wiley, 1958, p. 159.
8. Curran-Everett D. Multiple comparisons: philosophies and illustrations. Am J Physiol Regul Integr Comp Physiol 279: R1–R8, 2000.
9. Curran-Everett D, Taylor S, and Kafadar K. Fundamental concepts in statistics: elucidation and illustration. J Appl Physiol 85: 775–786, 1998.
10. Draper NR and Smith H. Applied Regression Analysis (2nd ed.). New York: Wiley, 1981.
11. Ehrenberg ASC. Rudiments of numeracy. J R Stat Soc Ser A 140: 277–297, 1977.
12. Holmes TH. Ten categories of statistical errors: a guide for research in endocrinology and metabolism. Am J Physiol Endocrinol Metab 286: E495–E501, 2004; 10.1152/ajpendo.00484.2003.
13. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. Ann Intern Med 108: 258–265, 1988.
14. International Committee of Medical Journal Editors. Uniform requirements for manuscripts submitted to biomedical journals. Ann Intern Med 126: 36–47, 1997.
15. Snedecor GW and Cochran WG. Statistical Methods (7th ed.). Ames, IA: Iowa State Univ. Press, 1980.
AUTHOR NOTES
Address for reprints and other correspondence: D. Curran-Everett, Division of Biostatistics, M222, National Jewish Medical and Research Center, 1400 Jackson St., Denver, CO 80206 (E-mail: [email protected])." @default.
- W4245472950 created "2022-05-12" @default.
- W4245472950 creator A5087760122 @default.
- W4245472950 date "2004-04-27" @default.
- W4245472950 modified "2023-09-30" @default.
- W4245472950 title "Guidelines for reporting statistics in journals published by the American Physiological Society" @default.
- W4245472950 doi "https://doi.org/10.1152/ajprenal.00186.2004" @default.
- W4245472950 hasPublicationYear "2004" @default.
- W4245472950 type Work @default.
- W4245472950 citedByCount "1" @default.
- W4245472950 crossrefType "journal-article" @default.
- W4245472950 hasAuthorship W4245472950A5087760122 @default.
- W4245472950 hasConcept C105795698 @default.
- W4245472950 hasConcept C161191863 @default.
- W4245472950 hasConcept C2522767166 @default.
- W4245472950 hasConcept C33923547 @default.
- W4245472950 hasConcept C41008148 @default.
- W4245472950 hasConceptScore W4245472950C105795698 @default.
- W4245472950 hasConceptScore W4245472950C161191863 @default.
- W4245472950 hasConceptScore W4245472950C2522767166 @default.
- W4245472950 hasConceptScore W4245472950C33923547 @default.
- W4245472950 hasConceptScore W4245472950C41008148 @default.
- W4245472950 hasIssue "2" @default.
- W4245472950 hasLocation W42454729501 @default.
- W4245472950 hasOpenAccess W4245472950 @default.
- W4245472950 hasPrimaryLocation W42454729501 @default.
- W4245472950 hasRelatedWork W1922851888 @default.
- W4245472950 hasRelatedWork W2054557524 @default.
- W4245472950 hasRelatedWork W2074835391 @default.
- W4245472950 hasRelatedWork W2110479023 @default.
- W4245472950 hasRelatedWork W2991725409 @default.
- W4245472950 hasRelatedWork W3162882601 @default.
- W4245472950 hasRelatedWork W3214485701 @default.
- W4245472950 hasRelatedWork W4232468313 @default.
- W4245472950 hasRelatedWork W4240106746 @default.
- W4245472950 hasRelatedWork W4313323870 @default.
- W4245472950 hasVolume "287" @default.
- W4245472950 isParatext "false" @default.
- W4245472950 isRetracted "false" @default.
- W4245472950 workType "article" @default.