Matches in SemOpenAlex for { <https://semopenalex.org/work/W4386483132> ?p ?o ?g. }
Showing items 1 to 56 of 56, with 100 items per page.
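The pattern above (`{ <https://semopenalex.org/work/W4386483132> ?p ?o ?g. }`) asks for every predicate, object, and named graph attached to this work. A minimal Python sketch of how such a query could be built and turned into a GET request URL is shown below; the endpoint URL is an assumption (check the SemOpenAlex documentation for the actual SPARQL endpoint), and no network request is made here.

```python
import urllib.parse

# Assumed SemOpenAlex SPARQL endpoint -- verify against the official docs.
SPARQL_ENDPOINT = "https://semopenalex.org/sparql"

def triples_query(work_iri: str) -> str:
    """Build a SPARQL query matching the quad pattern shown in the listing:
    every predicate ?p, object ?o, and named graph ?g for the given work."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{work_iri}> ?p ?o . }} "
        "}"
    )

def request_url(work_iri: str) -> str:
    """Encode the query as a GET URL per the SPARQL 1.1 Protocol
    (URL construction only; this sketch does not perform the request)."""
    params = urllib.parse.urlencode(
        {"query": triples_query(work_iri), "format": "json"}
    )
    return f"{SPARQL_ENDPOINT}?{params}"

url = request_url("https://semopenalex.org/work/W4386483132")
```

Sending `url` via any HTTP client would return the 56 property/object pairs listed below, one row per triple.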
- W4386483132 endingPage "1698" @default.
- W4386483132 startingPage "1697" @default.
- W4386483132 abstract "A great deal of time and effort has been invested in the pursuit and advancement of the evidence-based health care (EBHC) movement over recent decades. Aligned with this growth, multiple and diverse organizations, including JBI, have sprouted to carry forth the banner of EBHC. Systematic reviews have been a fundamental element of both the supporting narrative for EBHC and also, arguably, the principal focus of the continued investment internationally in research activity in this field since its inception. The efforts of many individuals, including those who are affiliated with these organizations, have contributed a large part of this investment through pursuing methodological research toward best practice in the conduct of evidence syntheses. Countless more researchers and authors have invested in applied research, employing systematic review methodology and methods to answer their questions, with the intent to provide the most trustworthy evidence to inform policy and practice. A timely manuscript in this month’s issue of JBI Evidence Synthesis1 offers an opportunity for reviewers and readers to take stock of and reflect on where the field of evidence synthesis, and organizational directives applicable to the field, including those from JBI, have arrived to date. Kolaski et al.1 highlight current deficiencies and confusion across terminology, methods of synthesis, and the application of the available methods by review authors—all of which rightfully cast doubt on the trustworthiness of many systematic reviews and question their authoritative claim to most appropriately guide decision-making.1 Noteworthy among these issues, Kolaski et al.1 identify problems associated with the classification of primary study designs and the varying taxonomies and algorithms available to assist with classification. 
This issue is not exclusive to primary research; it is also apparent at the secondary research level, where the waters are further muddied by evolving methodologies of synthesis that continue to emerge. While work has commenced to attempt to tackle this problem and develop a universal taxonomy,2 it is critical that reviewers are provided with guidance to ensure that they are following the most appropriate approach to answer their health care question. In the interim, online tools such as Right Review (https://rightreview.knowledgetranslation.net/) may provide a helpful starting point for reviewers. An additional concern the authors identify is that of redundant reviews, that is, reviews that overlap and may be deemed as wasteful and unnecessary.1 This topic has been discussed repeatedly in the literature, most recently by Puljak and Lund.3 It is a critical issue that we will explore further and discuss in a future editorial. Coupled with all of this is the astute realization that undertaking a systematic review and applying these standards is an onerous undertaking and a pathway that can be fraught with difficulty. This is due to the potential not only for misapplication of methods but also for application of methods that, while available, may not be the most appropriate for the job at hand.1 In light of these observations, Kolaski et al.1 respond by bringing together the latest in methodological advances and best practices to point readers toward solutions and the way forward in the conduct of systematic reviews and their reporting. While some readers may feel it is unnecessary or too simplistic to emphasize the differences between reporting guidance vs guidance for conduct, as Kolaski et al.1 note, we fully support this conversation. 
As editors of a journal that specializes in evidence syntheses, and as educators who also engage in teaching synthesis methodologies, we find it remarkably clear that reviewers continue to struggle with distinguishing the nature and utility of guidance for the conduct of a systematic review vs reporting standards.4 While both are important to ensuring trustworthiness, they are discrete and should not be considered interchangeable—a well-reported review does not equate to the best standards of conduct. Ongoing discourse such as this is needed to ensure both elements are considered when undertaking any type of evidence synthesis. This independent assessment of the field1 offers timely insight for JBI and our program of evidence synthesis.5 The solutions provided by the authors reinforce the importance of ongoing investment in our training and education programs for reviewers.5 Furthermore, necessary points for advancement of our own methods (eg, continued development of the JBI critical appraisal tools for the majority of our quantitative study designs) are identified.1 JBI’s program of methodological development, under the auspices of the JBI Scientific Committee, acknowledged similar issues in 2021, which spawned a program for the revision of these tools.6 The results of this undertaking are now bearing fruit, with revised appraisal tools available for reviewers who wish to continue to use the popular JBI appraisal tools to facilitate the conduct of their review.7 These reviewers may rest assured that transparent processes underpin the development of the tools they are using to assess methodological quality and risk of bias.8 Ongoing investment and integration of best practices into software tools (eg, JBI SUMARI) to facilitate the conduct of systematic reviews will inevitably help meet the demands for methodological adherence in systematic review conduct.5 Reflecting their rise in popularity, the majority of completed reviews presented in this issue, as with
others across recent volumes of JBI Evidence Synthesis, are scoping reviews. At some point in the near future, a similar reflection of deficiencies in the application and conduct of scoping reviews, and concomitant solutions entwined with alignment to best practice standards of conduct and reporting, will be both informative and necessary.9 Systematic reviews remain fundamental pillars of EBHC to guide decision-making in health care. Standards for their conduct have been developed, yet there is ongoing confusion among reviewers. Such confusion can cause errors in the conduct of these demanding and often complex research undertakings to the extent that many published reviews are flawed.1 Considering this sobering realization, we encourage knowledge users to be vigilant when reading and interpreting the results of systematic reviews and to apply similar mechanisms of critique as they would to the results of any research. Despite all of this, when completed according to the best practices in the field, the systematic review can live up to its lofty expectation to inform the way forward to improved outcomes in health. Completion and publication of a systematic review involves multiple stakeholders, including editors and peer reviewers, whose participation alongside the authors in dissemination of scientific research and knowledge also carries a responsibility toward high standards of quality. On reflection and acknowledging that a great deal of work, guidance, education, and facilitation remains to be done across methods and with diverse stakeholders, JBI and JBI Evidence Synthesis are proud to be able to contribute to the advancement of the science of synthesis." @default.
- W4386483132 created "2023-09-07" @default.
- W4386483132 creator A5051888301 @default.
- W4386483132 creator A5080051483 @default.
- W4386483132 date "2023-09-01" @default.
- W4386483132 modified "2023-09-27" @default.
- W4386483132 title "A timely review for systematic reviews" @default.
- W4386483132 cites W3094722022 @default.
- W4386483132 cites W3155461783 @default.
- W4386483132 cites W4223918414 @default.
- W4386483132 cites W4289530029 @default.
- W4386483132 cites W4296330923 @default.
- W4386483132 cites W4310964086 @default.
- W4386483132 cites W4323533974 @default.
- W4386483132 cites W4362556293 @default.
- W4386483132 cites W4379598782 @default.
- W4386483132 doi "https://doi.org/10.11124/jbies-23-00356" @default.
- W4386483132 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/37678157" @default.
- W4386483132 hasPublicationYear "2023" @default.
- W4386483132 type Work @default.
- W4386483132 citedByCount "0" @default.
- W4386483132 crossrefType "journal-article" @default.
- W4386483132 hasAuthorship W4386483132A5051888301 @default.
- W4386483132 hasAuthorship W4386483132A5080051483 @default.
- W4386483132 hasBestOaLocation W43864831321 @default.
- W4386483132 hasConcept C17744445 @default.
- W4386483132 hasConcept C189708586 @default.
- W4386483132 hasConcept C199539241 @default.
- W4386483132 hasConcept C2779473830 @default.
- W4386483132 hasConcept C41008148 @default.
- W4386483132 hasConceptScore W4386483132C17744445 @default.
- W4386483132 hasConceptScore W4386483132C189708586 @default.
- W4386483132 hasConceptScore W4386483132C199539241 @default.
- W4386483132 hasConceptScore W4386483132C2779473830 @default.
- W4386483132 hasConceptScore W4386483132C41008148 @default.
- W4386483132 hasIssue "9" @default.
- W4386483132 hasLocation W43864831321 @default.
- W4386483132 hasLocation W43864831322 @default.
- W4386483132 hasOpenAccess W4386483132 @default.
- W4386483132 hasPrimaryLocation W43864831321 @default.
- W4386483132 hasRelatedWork W1596801655 @default.
- W4386483132 hasRelatedWork W2130043461 @default.
- W4386483132 hasRelatedWork W2350741829 @default.
- W4386483132 hasRelatedWork W2358668433 @default.
- W4386483132 hasRelatedWork W2376932109 @default.
- W4386483132 hasRelatedWork W2382290278 @default.
- W4386483132 hasRelatedWork W2390279801 @default.
- W4386483132 hasRelatedWork W2748952813 @default.
- W4386483132 hasRelatedWork W2899084033 @default.
- W4386483132 hasRelatedWork W2530322880 @default.
- W4386483132 hasVolume "21" @default.
- W4386483132 isParatext "false" @default.
- W4386483132 isRetracted "false" @default.
- W4386483132 workType "article" @default.