Matches in SemOpenAlex for { <https://semopenalex.org/work/W2802756822> ?p ?o ?g. }
Showing items 1 to 72 of 72 (100 items per page).
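The pattern in the header is a simple lookup of every predicate/object (and graph) bound to this work. Below is a minimal sketch of how the same listing could be fetched programmatically; the endpoint URL is an assumption about the public SemOpenAlex SPARQL service, the graph variable ?g from the original pattern is dropped for simplicity, and JSON results are requested via the standard SPARQL protocol.

```python
# Sketch: re-run the lookup shown in the header against a SPARQL endpoint.
# The endpoint URL below is an assumption about the public SemOpenAlex
# service; verify it before relying on this.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

QUERY = """
SELECT ?p ?o
WHERE { <https://semopenalex.org/work/W2802756822> ?p ?o . }
LIMIT 100
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Standard SPARQL 1.1 JSON results: one binding per matched triple.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

Each returned binding corresponds to one of the rows listed below.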
- W2802756822 endingPage "426" @default.
- W2802756822 startingPage "424" @default.
- W2802756822 abstract "Over the past decade, ultrasound simulators have been used increasingly as an adjunct to clinical training in the field of obstetrics and gynecology. Studies have shown that simulation-based ultrasound training leads to sustained improvement in clinical performance1, including improved diagnostic accuracy2, reduced need for supervised practice and decreased patient discomfort3. In addition to being a valuable tool for training, ultrasound simulators can also be used for the assessment of trainee competence. There are several potential advantages associated with the use of simulation-based assessment of competence, but also some limitations and challenges. To that end, one of the main focus areas of the Basic Training Task Force, which was established by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) in 2015, is to clarify the role of ultrasound simulators in the assessment of trainee competence. The purpose of this editorial is to provide a summary of the recommendations proposed by the ISUOG Basic Training Task Force on how to use simulation-based assessments of ultrasound competence, and to address the associated advantages and challenges. There are many good reasons to consider the use of ultrasound simulators for the assessment of trainee competence as an adjunct or alternative to in-training assessment in the clinical setting. Firstly, in-training assessment of competence in the clinical setting requires the investment of a considerable amount of time by the faculty, which is not always possible owing to competing clinical obligations. Moreover, the use of real patients for intimate examinations such as the assessment of transvaginal ultrasound skills is neither feasible nor ethical when conducting large-scale licensing exams4. In these cases, simulation-based assessment of competence may be used instead, without compromising significantly the validity of assessments5. For example, at the 2016 annual obstetrics and gynecology ultrasound licensing examination that took place in Paris, more than 100 trainees were evaluated on a transvaginal ultrasound simulator in order to determine their diagnostic accuracy across five cases with or without pathology. Secondly, in-training assessment of competence in the clinical setting is challenged by a lack of standardized assessment methods and criteria, which often vary between different examiners, institutions and countries6. On the other hand, simulation-based assessment has the advantage of providing standardized and valid measures of competence that can be compared between institutions and countries7. Thirdly, volume-based measures, i.e. the number of examinations that have been performed by a trainee, may be poor indicators of competence compared with performance-based measures, such as those achieved through direct observation of scans in the clinical setting or through assessments in the simulation setting. Instead of being directly evaluated on their practical skills, trainees are often allowed to practice independently after having completed a certain number of supervised scans. For example, ISUOG's basic training recommendations advise that trainees should have completed at least 100 supervised scans before independent practice is commenced8. However, there is little empirical evidence to support the idea that performing a certain number of scans ensures trainee competence. 
On the contrary, studies have found that the time trainees need to attain expert levels of competence in performing ultrasound varies significantly7, 9, 10. Given that the ultimate goal is that trainees practice safely and independently, setting a fixed number of scans as an indicator of competence may result in allowing some trainees who have not yet achieved the required competence to practice independently, while delaying independent practice for those who have attained competence through fewer than the designated number of scans1. Recently, we examined the relationship between diagnostic accuracy and number of completed ultrasound examinations in a large group of sonologists. Those who had completed more than 100 scans had an average diagnostic accuracy of 70%, which improved only slightly (to 72%) for sonologists who had completed more than 300 scans (Tolsgaard & Chalouhi, unpubl. data). Ultrasound simulators may be used for the assessment of competence in two ways. First, several virtual-reality simulators have built-in measurement functions that enable automatic assessment of trainee performance during training. These built-in measurement functions, also called simulator metrics, are often developed by software engineers in collaboration with clinicians. Another way of assessing competence in the simulation setting is through direct observation of trainees using standardized assessment instruments11. Such instruments may be designed for the assessment of one specific procedure or examination using a checklist, or they may include the use of generic rating scales that assess general aspects of ultrasound competence. One example of a commonly used generic rating scale is the Objective Assessment of Ultrasound Skills scale, which has been examined in several validation studies and is being used for the assessment of both transvaginal and transabdominal ultrasound skills11, 12. It is just as important to ascertain whether performance scores on ultrasound simulators translate into differences in competence as it is to examine the validity of the clinical tests themselves. The validity of simulation-based assessments of competence has been examined in several recent studies5, 7, 9, 10. These studies have examined simulators designed for transabdominal5, 10 as well as transvaginal7, 9 ultrasound skills, predominantly using virtual-reality simulation over low-cost alternatives, such as physical mannequins. Although there is strong evidence to support the validity and reliability of simulation-based assessment of competence, several of these studies found that only about one-third of the built-in simulator metrics were able to discriminate between novices and experts7, 10. This is the ‘Achilles heel’ of simulation-based assessments of ultrasound competence because it demonstrates clearly that we cannot rely on the built-in performance measures of simulators without examining their validity first. For these reasons, we cannot recommend competency testing on specific commercially available simulators, given that these systems undergo constant change and that various systems are used across the globe. Instead, there is a need to evaluate carefully the conditions that should be met when assessing trainees' ultrasound competence. To this end, in Table 1 we recommend four key practices that should be taken into account when planning simulation-based assessments of trainees' ultrasound competence.
If the purpose of assessment is to determine who should be licensed for independent practice, then assessment methods should be examined carefully for evidence of validity and reliability. If the purpose of assessment is to provide feedback (formative assessment) to trainees, then there is often more value in providing written or oral structured feedback, whereas scores on an assessment instrument are of limited value. If we rely on tests without evidence of validity, what are we really measuring? In several studies, the majority of built-in tests on commercially available simulators were not able to discriminate between complete novices and experts. Such metrics provide no more information than does flipping a coin. With respect to reliability, it is important to include a sufficient number of cases to achieve a reliability coefficient of > 0.80 (< 20% noise). In studies of transabdominal ultrasound, this level was achieved when one assessor assessed at least five cases12. We need to acknowledge that the implementation of assessment methods based on ultrasound simulators differs little from the implementation of clinical tests. If we do not examine carefully the purpose of the assessment, its reliability and validity as well as its consequences, we could waste valuable time and resources on testing. This could also result in missed diagnoses and suboptimal scans being performed by unqualified sonographers and trainees, thereby jeopardizing the health and wellbeing of patients. It is our duty to safeguard patients from trainees and sonographers who are not yet ready to practice independently. Respecting and adhering to the practices we have identified as critical will ensure that ultrasound simulators are valuable instruments, and not trendy toys, when it comes to assessing trainees' competence. We would like to thank all members of the ISUOG Basic Training Task Force and members of the Simulation Working Group for their valuable comments, feedback and assistance with revisions of this editorial." @default.
- W2802756822 created "2018-05-17" @default.
- W2802756822 creator A5058767060 @default.
- W2802756822 creator A5077738540 @default.
- W2802756822 date "2018-10-01" @default.
- W2802756822 modified "2023-10-16" @default.
- W2802756822 title "Use of ultrasound simulators for assessment of trainee competence: trendy toys or valuable instruments?" @default.
- W2802756822 cites W1607559973 @default.
- W2802756822 cites W1984221589 @default.
- W2802756822 cites W2018118305 @default.
- W2802756822 cites W2032812776 @default.
- W2802756822 cites W2081362612 @default.
- W2802756822 cites W2143084719 @default.
- W2802756822 cites W2193799630 @default.
- W2802756822 cites W2281368551 @default.
- W2802756822 cites W2332694654 @default.
- W2802756822 cites W2565697599 @default.
- W2802756822 cites W2568672378 @default.
- W2802756822 cites W4233269345 @default.
- W2802756822 doi "https://doi.org/10.1002/uog.19071" @default.
- W2802756822 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/29667240" @default.
- W2802756822 hasPublicationYear "2018" @default.
- W2802756822 type Work @default.
- W2802756822 sameAs 2802756822 @default.
- W2802756822 citedByCount "14" @default.
- W2802756822 countsByYear W28027568222018 @default.
- W2802756822 countsByYear W28027568222019 @default.
- W2802756822 countsByYear W28027568222020 @default.
- W2802756822 countsByYear W28027568222021 @default.
- W2802756822 countsByYear W28027568222022 @default.
- W2802756822 crossrefType "journal-article" @default.
- W2802756822 hasAuthorship W2802756822A5058767060 @default.
- W2802756822 hasAuthorship W2802756822A5077738540 @default.
- W2802756822 hasBestOaLocation W28027568221 @default.
- W2802756822 hasConcept C100521375 @default.
- W2802756822 hasConcept C126838900 @default.
- W2802756822 hasConcept C143753070 @default.
- W2802756822 hasConcept C162324750 @default.
- W2802756822 hasConcept C187736073 @default.
- W2802756822 hasConcept C19527891 @default.
- W2802756822 hasConcept C509550671 @default.
- W2802756822 hasConcept C71924100 @default.
- W2802756822 hasConceptScore W2802756822C100521375 @default.
- W2802756822 hasConceptScore W2802756822C126838900 @default.
- W2802756822 hasConceptScore W2802756822C143753070 @default.
- W2802756822 hasConceptScore W2802756822C162324750 @default.
- W2802756822 hasConceptScore W2802756822C187736073 @default.
- W2802756822 hasConceptScore W2802756822C19527891 @default.
- W2802756822 hasConceptScore W2802756822C509550671 @default.
- W2802756822 hasConceptScore W2802756822C71924100 @default.
- W2802756822 hasIssue "4" @default.
- W2802756822 hasLocation W28027568221 @default.
- W2802756822 hasLocation W28027568222 @default.
- W2802756822 hasOpenAccess W2802756822 @default.
- W2802756822 hasPrimaryLocation W28027568221 @default.
- W2802756822 hasRelatedWork W1989235247 @default.
- W2802756822 hasRelatedWork W2013133633 @default.
- W2802756822 hasRelatedWork W2373416058 @default.
- W2802756822 hasRelatedWork W2510301982 @default.
- W2802756822 hasRelatedWork W2524843116 @default.
- W2802756822 hasRelatedWork W2899084033 @default.
- W2802756822 hasRelatedWork W2899241859 @default.
- W2802756822 hasRelatedWork W2948634361 @default.
- W2802756822 hasRelatedWork W4304117808 @default.
- W2802756822 hasRelatedWork W4386015189 @default.
- W2802756822 hasVolume "52" @default.
- W2802756822 isParatext "false" @default.
- W2802756822 isRetracted "false" @default.
- W2802756822 magId "2802756822" @default.
- W2802756822 workType "article" @default.
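The abstract above notes that only about one-third of built-in simulator metrics discriminated between novices and experts7, 10. As a hedged illustration of what such a check involves (not the method used in the cited studies), the sketch below compares invented metric scores for two groups with a Mann-Whitney U test from SciPy; all values are hypothetical placeholders.

```python
# Hypothetical illustration: does a single built-in simulator metric separate
# novices from experts? Scores below are invented, not data from the editorial.
from scipy.stats import mannwhitneyu

novice_scores = [41, 38, 52, 47, 35, 44, 49, 40]   # hypothetical metric values
expert_scores = [68, 72, 61, 75, 70, 66, 73, 69]   # hypothetical metric values

result = mannwhitneyu(novice_scores, expert_scores, alternative="two-sided")
print(f"U = {result.statistic:.1f}, p = {result.pvalue:.4f}")

# A metric whose score distributions overlap heavily across the two groups
# (large p-value) carries little evidence of validity, which is the point the
# editorial makes about relying on unexamined built-in metrics.
```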
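The abstract also states that a reliability coefficient of > 0.80 (< 20% noise) was reached when one assessor rated at least five cases12. The sketch below shows why adding cases raises reliability, using the standard Spearman-Brown prophecy formula with an assumed single-case reliability of 0.45; that value is illustrative, not a figure from the cited study.

```python
# Spearman-Brown prophecy formula: predicted reliability when a score is
# averaged over n_cases cases, given the reliability of a single case.
def spearman_brown(single_case_reliability: float, n_cases: int) -> float:
    r = single_case_reliability
    return n_cases * r / (1 + (n_cases - 1) * r)

# Assumed single-case reliability of 0.45 (illustrative value only).
for n in range(1, 9):
    print(f"{n} case(s): predicted reliability = {spearman_brown(0.45, n):.2f}")

# Under this assumption, five cases push the predicted coefficient just above
# the 0.80 threshold mentioned in the abstract, i.e. less than 20% of score
# variance attributable to error.
```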