Matches in SemOpenAlex for { <https://semopenalex.org/work/W1837134602> ?p ?o ?g. }
Showing items 1 to 76 of 76, with 100 items per page.
- W1837134602 endingPage "487" @default.
- W1837134602 startingPage "485" @default.
- W1837134602 abstract "Preclinical studies are, by definition, performed with the purpose of improving human health. Yet promising findings from preclinical studies most often fail to translate to the clinic. The high drug attrition rate is startling; in cancer research, 95% of anticancer drugs fail at Phase I clinical trials,1 and attrition rates in stroke drug discovery are over 99%.2 Lack of reproducibility of preclinical studies may be an important driver of this failed translation. In oncology and cardiovascular research, industry scientists reported that in almost two-thirds of the projects (43 of 67) they were unable to replicate the major findings of published research.3 In a separate study of ‘landmark’ publications in cancer research 89% (47 of 53) of preclinical findings could not be reproduced.4 Some have suggested that the scientific reward system does not place adequate emphasis on investigators doing rigorous studies and reporting reproducible results.5 These problems have led to increased focus on the importance of rigor in the design, conduct and reporting of studies in preclinical research6, 7 and the reproducibility of preclinical research. A complex array of factors may contribute to a lack of reproducibility, including: poor reporting of methods; poor experimental design, such as a lack of methods to minimize bias (e.g., blinding and randomization); insufficient sample sizes; and inappropriate statistical analysis of results. Lack of prior publication of study protocols (including statistical analysis plans) may allow less scrupulous investigators to adopt a flexible approach to data analysis and exclusions, collecting several outcomes and conducting numerous statistical tests on the same data, and reporting only those which reach 5% significance and which allow a persuasive interpretation of their data consistent with their proposed hypothesis. 
Indeed, without the availability of a study protocol, it is impossible to know if the hypotheses being tested had even been articulated prior to data analysis, or whether in fact there has been over-interpretation of the results of studies that were designed to be hypothesis-generating. Across a range of neurological conditions (Alzheimer's disease, multiple sclerosis, Parkinson's disease, intracerebral hemorrhage and focal ischemia), systematic reviews of the preclinical literature show that the reporting of measures to reduce the risk of bias is consistently low. Few studies report blinded assessment of outcome, randomization to group, allocation concealment or power calculations to determine sample size.8 The impact of failure to report such measures has also been investigated: non-blinded and non-randomized studies generally report greater drug efficacy than blinded or randomized studies, respectively.9, 10 We also know that underpowered experiments are unlikely to yield robust results and may lead to overstatement of efficacy,11 and this lack of statistical rigor will undoubtedly contribute to a failure to reproduce results from another laboratory. Publication bias, where research that reaches publication is not representative of all research that is done, is also prevalent in the preclinical literature, where neutral findings are likely to remain unpublished. Publication bias is exacerbated by the incentives to publish novel results. Estimates of the extent of this problem in preclinical stroke research suggest that it leads to a 30% overestimate of efficacy.12 Early work using systematic review and meta-analysis to assess the methodological quality of research and the impact of measures to reduce the risk of bias was conducted largely in the preclinical stroke research field.13 Perhaps understandably, there was some resistance to the idea that these issues might be prevalent and important in other research fields.
However, the application of these same tools to animal models of pain,14 Alzheimer's disease,15 spinal cord injury,16 glioma17 and multiple sclerosis18 has consistently found that the reporting of measures to reduce the risk of bias is low. Against this background, the study by Ting et al. provides important evidence that these issues are prevalent in the field of experimental rheumatology. They searched two rheumatology journals, Annals of the Rheumatic Diseases and Arthritis and Rheumatism, seeking reports of in vivo studies testing interventions published between January and December 2012, and identified 41 studies that met their inclusion criteria. Data extraction was carried out by two independent reviewers, with a third reviewer resolving discrepancies. This methodology is appropriate to provide a snapshot of the experimental rheumatology literature, and identifies papers published since the ARRIVE guidelines for reporting were published in 2010.6 The authors found low reporting of measures to reduce the risk of bias, including randomization (17.1% of studies), blinding (29.3% of studies) and the details of any sample size calculation. This is consistent with findings from other research areas.9-11, 19-21 Additionally, by assessing the quality of studies against the ARRIVE guidelines, the authors have identified further potential reasons for failures in reproducibility, for example, low reporting of experimental details such as animal strain and species, housing and husbandry. Based on this evidence of poor reporting, the authors support the implementation of the ARRIVE guidelines in rheumatology research. We congratulate the authors for addressing this important and difficult issue, and hope that, as with other disease areas, their work will provide further impetus for the field to advance with improved rigor. What measures might be taken to secure this advance?
There are many ways to tackle these issues, but to be successful they require a concerted effort by all stakeholders: scientists, journals, funding agencies and consumers of research. This effort will require changes to the current process to improve the rigor of the design, conduct and reporting of research. We need strategies which are effective but which are not unnecessarily burdensome to the scientific community.22 Rigor and reproducibility are cornerstones of scientific research, and the scientific reward system should reflect this. Mandatory availability of time-stamped study protocols, available to peer reviewers and to readers, would give confidence that scientists had set out with a predefined statistical analysis plan, and would allow any discrepancies between the original research plan and the submitted manuscript to be identified and explained. This would also tackle the problems of P-hacking (fishing for significant results) and hypothesizing after results are known (HARKing), and would allow for better data sharing. Publication of complete datasets would allow efficient data re-use and replication of data analysis. We should endeavor to improve the methodological quality of research to minimize biases and ensure that experiments are adequately powered. Recently, Research Councils UK announced changes to their guidelines for animal experiments, whereby funding applicants must provide details of the methods they will use to minimize bias and include a sample size calculation, where appropriate, to ensure that their experiments are adequately powered. This type of initiative, generalized across funding agencies, would help improve the quality of preclinical research. A published article in a scientific journal is the primary way in which research is communicated to the scientific community.
Improved transparency of reporting of methods, for example by adhering to the ARRIVE guidelines, may help improve the methodological quality of research indirectly by encouraging scientists to consider these issues and to report what has not been done. Increased transparency may also improve the reproducibility of results. To address the issue of publication bias, we must acknowledge that neutral findings, or the results of studies where investigators accept the null hypothesis, are just as informative as positive data; it is only by reporting these findings that they can contribute to knowledge. This will take an effort from scientists, journal editors and funding agencies. There is a prevalent opinion that journals will not publish neutral results, and so there is no point in scientists investing time preparing and submitting these results for publication. However, a recent analysis of clinical studies tracked from inception through regulatory submission, abstract presentation and manuscript submission to journals found no empirical evidence that journals preferentially publish manuscripts rejecting the null hypothesis rather than neutral findings or studies where investigators accept the null hypothesis.23 This suggests that at least part of the blame may lie with the investigators themselves. Biomedical research has delivered huge benefits to human health. Not all of this has been ‘low-hanging fruit’, but it is likely that the science which will drive future benefit will need to detect effects which are more subtle and more nuanced. We believe that increased scientific rigor is a necessary precondition of meeting this challenge, in rheumatological disease research as elsewhere. The work of Ting et al. is an important step in this process.
We acknowledge support from the National Centre for the Replacement, Refinement & Reduction of Animal Use in Research (NC3Rs) infrastructure award: ivSyRMAF – the CAMARADES – NC3Rs in vivo systematic review and meta-analysis facility." @default.
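The multiple-testing risk described in the abstract — collecting several outcomes, testing each at the 5% level, and reporting only the "significant" ones — can be illustrated with a short calculation. This is a sketch of the standard family-wise error rate formula for independent tests, not anything from the editorial itself; the function name is ours.

```python
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    """Probability of at least one false positive when k independent
    outcomes are each tested at significance level alpha, with no
    true effect present (1 minus the chance all k tests stay null)."""
    return 1.0 - (1.0 - alpha) ** k

# With 20 outcome measures, a spurious "significant" result is more
# likely than not, even though each individual test uses alpha = 0.05.
for k in (1, 5, 10, 20):
    print(f"{k:2d} tests -> {familywise_error_rate(k):.1%} chance of a false positive")
```

Under these (idealized, independence) assumptions, twenty tests yield roughly a 64% chance of at least one false positive — which is why prespecified analysis plans and corrections for multiple comparisons matter.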
- W1837134602 created "2016-06-24" @default.
- W1837134602 creator A5027438451 @default.
- W1837134602 creator A5041140051 @default.
- W1837134602 date "2015-06-01" @default.
- W1837134602 modified "2023-09-27" @default.
- W1837134602 title "Increasing value and reducing waste in animal models of rheumatological disease" @default.
- W1837134602 cites W1895993106 @default.
- W1837134602 cites W1975130349 @default.
- W1837134602 cites W1975322262 @default.
- W1837134602 cites W1987777080 @default.
- W1837134602 cites W2019429714 @default.
- W1837134602 cites W2067833766 @default.
- W1837134602 cites W2087730640 @default.
- W1837134602 cites W2098993139 @default.
- W1837134602 cites W2103604682 @default.
- W1837134602 cites W2103994998 @default.
- W1837134602 cites W2105381170 @default.
- W1837134602 cites W2122908200 @default.
- W1837134602 cites W2123685024 @default.
- W1837134602 cites W2137516955 @default.
- W1837134602 cites W2140030055 @default.
- W1837134602 cites W2144025192 @default.
- W1837134602 cites W2155284704 @default.
- W1837134602 cites W2157368934 @default.
- W1837134602 cites W2158641962 @default.
- W1837134602 cites W2164759539 @default.
- W1837134602 cites W2168672683 @default.
- W1837134602 cites W4296816736 @default.
- W1837134602 doi "https://doi.org/10.1111/1756-185x.12703" @default.
- W1837134602 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/26082347" @default.
- W1837134602 hasPublicationYear "2015" @default.
- W1837134602 type Work @default.
- W1837134602 sameAs 1837134602 @default.
- W1837134602 citedByCount "0" @default.
- W1837134602 crossrefType "journal-article" @default.
- W1837134602 hasAuthorship W1837134602A5027438451 @default.
- W1837134602 hasAuthorship W1837134602A5041140051 @default.
- W1837134602 hasBestOaLocation W18371346021 @default.
- W1837134602 hasConcept C105795698 @default.
- W1837134602 hasConcept C127413603 @default.
- W1837134602 hasConcept C177713679 @default.
- W1837134602 hasConcept C2776291640 @default.
- W1837134602 hasConcept C33923547 @default.
- W1837134602 hasConcept C548081761 @default.
- W1837134602 hasConcept C71924100 @default.
- W1837134602 hasConceptScore W1837134602C105795698 @default.
- W1837134602 hasConceptScore W1837134602C127413603 @default.
- W1837134602 hasConceptScore W1837134602C177713679 @default.
- W1837134602 hasConceptScore W1837134602C2776291640 @default.
- W1837134602 hasConceptScore W1837134602C33923547 @default.
- W1837134602 hasConceptScore W1837134602C548081761 @default.
- W1837134602 hasConceptScore W1837134602C71924100 @default.
- W1837134602 hasFunder F4320320333 @default.
- W1837134602 hasIssue "5" @default.
- W1837134602 hasLocation W18371346021 @default.
- W1837134602 hasLocation W18371346022 @default.
- W1837134602 hasOpenAccess W1837134602 @default.
- W1837134602 hasPrimaryLocation W18371346021 @default.
- W1837134602 hasRelatedWork W1506200166 @default.
- W1837134602 hasRelatedWork W1995515455 @default.
- W1837134602 hasRelatedWork W2039318446 @default.
- W1837134602 hasRelatedWork W2080531066 @default.
- W1837134602 hasRelatedWork W2748952813 @default.
- W1837134602 hasRelatedWork W2899084033 @default.
- W1837134602 hasRelatedWork W3031052312 @default.
- W1837134602 hasRelatedWork W3032375762 @default.
- W1837134602 hasRelatedWork W3108674512 @default.
- W1837134602 hasRelatedWork W4252371801 @default.
- W1837134602 hasVolume "18" @default.
- W1837134602 isParatext "false" @default.
- W1837134602 isRetracted "false" @default.
- W1837134602 magId "1837134602" @default.
- W1837134602 workType "article" @default.