Matches in SemOpenAlex for { <https://semopenalex.org/work/W2717891286> ?p ?o ?g. }
- W2717891286 endingPage "22" @default.
- W2717891286 startingPage "1" @default.
- W2717891286 abstract "Virtual-reality training of surgical skills is a rapidly expanding area of research. The implementation of virtual-reality training of surgeons has shown to have implications for patient safety, but many questions remain unanswered. This Ph.D. project began with a systematic review of the existing knowledge in the field. Thereby, knowledge gaps were identified, and the following studies were planned accordingly. This thesis covers research conducted in three related domains: (1) how best to assess surgical trainees' competence; (2) how to optimize skill acquisition using simulation-based training modalities, including virtual-reality simulation; and (3) which factors influence skill transfer between different environments and different procedures. In the systematic review, we aim to identify the evidence behind the use of simulation-based models in ophthalmology. Due to heterogeneity of the included trials, a qualitative analysis is conducted. We conclude that limited data are available to support the use of simulation-based models for assessment purposes. Even though numerous studies have investigated the use of the automated assessment metrics provided by the EyeSi virtual-reality simulator, validity evidence has not been well established. Efficacy trials show a tendency towards improved surgical performance, including procedural time, and a decrease in complication rates after implementation of virtual-reality training. However, data are limited, and the results are inconsistent so no final conclusions can be made. The second study investigates how best to assess surgical trainees' competence using virtual-reality simulation and an evidence-based performance test of cataract surgical skills on the EyeSi simulator was developed. A total of 42 participants were included, and modules showing discriminative ability between novices and experienced cataract surgeons were included in the final performance test. A benchmark criterion was determined and may be used for future implementation of simulation-based training for novices in cataract surgery. An additional validity study investigates the correlation between virtual-reality performance and motion-tracking metrics from real-life cataract surgeries. Eleven cataract surgeons with different experience levels were included in a national, multicentre study. In this study, we demonstrate that performance on the EyeSi simulator is highly correlated with real-life surgical performance, and may supplement clinical assessments of cataract surgical skills. However, motion-tracking metrics are associated with high levels of interindividual variance, and multiple data sources are still recommended when evaluating surgical skills. In the fourth study, we examine the impact of proficiency-based training, during which the trainee continues training until passing a predefined proficiency criterion. This approach differs from the traditional time-based or repetition-based training programmes. The study was conducted as a national, multicentre study, involving 22 surgeons with different levels of experience who performed three video-recorded cataract surgeries before and after completing a proficiency-based training programme on the EyeSi simulator. The real-life performance evaluations from three masked raters demonstrate a significant effect of virtual-reality training for novice surgeons, as well as intermediate level surgeons, who had performed up to 75 independent operations before completing the simulation-based training. 
Thus, improvements in surgical skills resulting from proficiency-based training in cataract surgery seem to be transferable from a simulated setting to the operating room. Lastly, we wanted to investigate the specificity of skill acquisition in intraocular surgery – specifically, the potential for skill transfer from cataract to vitreoretinal surgery. Twelve residents in ophthalmology were included in a randomized controlled trial: six residents were assigned to intensive, proficiency-based, cataract surgical training on the EyeSi simulator (cataract trainees), and six residents were assigned to no training (novices). Participants in both study arms repeated training on a vitreoretinal training programme on the EyeSi simulator until reaching their maximum performance level. Our results show that the group of cataract trainees did not perform significantly better than the novices when comparing initial score, time to reach maximum performance level or maximum score. In conclusion, no significant transfer of surgical skills is evident between cataract surgery and vitreoretinal surgery in a virtual-reality environment. The conclusions of this thesis are: (1) there is a need for evidence-based application of simulation-based training and assessment in ophthalmology; (2) the automated assessment metrics from a virtual-reality simulator can distinguish between surgeons with different surgical experience (number of procedures previously performed and training in different surgical techniques); (3) both novices and surgeons with an intermediate level of experience benefit from proficiency-based training on a virtual-reality simulator; and (4) the acquisition and transfer of skills in intraocular surgery seem to be domain-specific, and, as such, we should be more aware of functional alignment (e.g. appropriate movement patterns) than of structural alignment (e.g. setting) when planning future training programmes. The aim of this thesis is to investigate assessment methods and transfer of technical skills in intraocular surgery – a subset of microsurgery – using virtual-reality simulation. The thesis begins with a brief introduction to this area of research and its implications for patient safety. After that follows a background section, including a description of intraocular surgery characteristics; the theoretical principles underlying the acquisition and assessment of technical skills; outcome measures used in simulation-based research; and virtual-reality simulators in ophthalmology. This section provides a foundation for the hypotheses and research questions of the included studies. Study findings are then summarized, followed by a discussion of these findings, including strengths and limitations, and finally a review of how these findings inform our current understanding of skill acquisition and assessment in microsurgery. The thesis is based on the following original papers: Surgical/technical skills – these expressions are used interchangeably and refer to technical skills within surgery, that is, skills associated with surgical technique, including instrument utilization. The expressions cover both procedural and basic skills. Procedural skills – technical skills associated with specific procedures (e.g. the performance of capsulorrhexis in cataract surgery). Basic skills – general surgical skills used for various procedures (e.g. suturing skills).
Procedural tasks/modules – tasks on a virtual-reality simulator replicating specific surgical procedures, either as subtasks or whole procedures. Abstract tasks/modules – tasks that are not intended to replicate the real environment but aim to support the acquisition of basic skills. Assessment metrics – quantitative measures used to track performance. Validity – the degree to which a measurement or test is well founded and accurately measures what it sets out to measure. Skill transfer – the extent to which skills acquired in one environment (or on one task) affect performance in another environment (or on another task). Every day in the healthcare system, adverse events occur in the context of surgical interventions and may result in suboptimal and sometimes devastating outcomes for patients (Makary & Daniel 2016). Risk factors include a variety of patient factors such as age, comorbidities and smoking status, as well as a surgeon factor: more experienced surgeons generally have superior surgical skills as compared to less experienced surgeons, and consequently, their surgical outcomes are better and associated with lower complication rates (Johnston et al. 2010; Ti et al. 2014; Day et al. 2015; Mahmud et al. 2015). Nevertheless, we need new surgeons. Cataract is the world's leading cause of blindness and impaired vision, and presently, the only effective treatment is surgical extraction. Today, cataract surgery is one of the most commonly performed surgical procedures in Western countries. The need for ophthalmic surgeons is expected to rise in the future due to an increasing geriatric population, as well as the inevitable ageing of our existing surgeons (Etzioni et al. 2003; Behndig et al. 2011; Kessel 2011; Gollogly et al. 2013). While some risk factors associated with adverse events and suboptimal patient-related outcomes are nonmodifiable or inevitable, evidence suggests that simulation-based training of surgeons has the potential to improve surgical outcomes (Stefanidis et al. 2014). Simulation can be described as ‘something that is made to look, feel, or behave like something else especially so that it can be studied or used to train people’ (Merriam-Webster definition). Simulation-based training – comprising a wide range of simulation models from pig eyes to highly sophisticated virtual-reality models – enables a safe training environment without any associated patient risk. Virtual-reality simulation, also called technology-enhanced simulation, is the use of interactive computer software and hardware to replicate a real environment. Of specific interest for this thesis, virtual-reality simulation has been associated with moderate to large effect sizes with respect to surgical skill acquisition and patient-related outcomes for a variety of procedures. Previous studies indicate that surgical skills often improve with virtual-reality training, and the acquired skills may lead to optimized performance in the operating room and, in turn, contribute to improved patient-related outcomes (Cook et al. 2013c). Thus, virtual-reality training has the potential to improve patient safety by improving the surgeon factor in surgical interventions. Another major benefit of using virtual-reality simulation is that it enables automated – and thereby objective – assessment of performance, providing the trainee and other key persons with independent feedback.
For decades, the assessment and selection of future surgeons have been dependent exclusively on the subjective opinions of senior colleagues (Darzi et al. 1999; Muttuvelu & Andersen 2016). The introduction of virtual-reality simulators provides an opportunity for advancement in the assessment of surgical skills. In summary, virtual-reality simulation consists of two features: (1) a training environment; and (2) a skills assessment component. In ophthalmology, the use of simulation-based training and assessment has increased significantly during the last decade (McCannel 2015). Similarly, the body of research evidence has been growing, but the scientific evidence for using simulation-based methods in ophthalmology is still relatively limited. In laparoscopic surgery, there is a larger quantity of evidence supporting different aspects of simulation-based training (Zendejas et al. 2013). Yet, substantial differences in surgical technique make it difficult to transfer the results directly to ophthalmic surgery. Intraocular surgery incorporates all procedures performed within the eye, an organ of approximately 24 mm in diameter. It requires an operating microscope, which is controlled by a foot pedal and enables stereoscopic vision. Some of the instruments are also connected to an operating machine (e.g. one that provides vacuum and/or ultrasonic energy), which is navigated via a separate foot pedal (instrument pedal). Thus, all four extremities are often used simultaneously during parts of an intraocular procedure. The fact that the procedures involve microsurgery is indeed one of the challenges in ophthalmic surgical training. Visuospatial awareness and/or stereoscopic vision play an important role (Nibourg et al. 2015); small incorrect movements can cause injury; highly specialized instruments are used; and hand–foot co-ordination, in addition to eye–hand co-ordination, is essential. Additionally, supervision is difficult as only one surgeon can perform the procedure at a time, and if a change of surgeons is needed, the inserted instruments must be removed from the eye and the surgeons must exchange seats (followed by adjustment of the microscope and chair). These factors make the traditional apprenticeship method less suitable. Another important feature of intraocular surgery is the traditionally long learning curve for novice surgeons. For cataract surgery, surgical competency improves significantly after the initial 75–80 cases (Tarbet et al. 1995; Randleman et al. 2007; Taravella et al. 2011). Even for surgeons performing more than 500 surgeries annually, the risk of adverse events is generally higher than for surgeons performing more than 1000 surgeries annually (Bell et al. 2007). Cataract surgery is most commonly performed using phacoemulsification, but other techniques include manual small incision cataract surgery (MSICS) and extracapsular cataract extraction (ECCE). All techniques include removal of the opaque lens, which causes impaired vision and/or visual disturbances of varying degrees. Phacoemulsification consists of six major steps: (1) incisions into the anterior chamber; (2) continuous curvilinear capsulorrhexis (CCC), the creation of a circular hole in the anterior capsule; (3) hydrodissection, dissecting the lens capsule from the lens matter; (4) phacoemulsification, removal of the lens matter from the capsule using vacuum and/or ultrasonic energy; (5) irrigation and aspiration (I/A), aspiration of residual soft lens matter; and finally (6) intraocular lens insertion.
Step 4 (phacoemulsification) is performed using various chopping techniques or ‘divide and conquer’ – the latter being the most common approach, especially for novice surgeons (Alexander et al. 2012; Sorensen et al. 2012). The entire procedure is performed through incision points <4 mm in size, and rotation around the incisional axis is crucial to avoid trauma to the surrounding tissue. Several of the steps are performed bimanually. The steps perceived as most difficult by surgical trainees and with the lowest completion rates are phacoemulsification (here: ‘divide and conquer’) and capsulorrhexis, followed by I/A and lens insertion (Dooley & O'Brien 2006). Serious adverse events include posterior capsule tear and vitreous loss, which are strongly associated with substantial visual loss due to an increased risk of other serious adverse events such as retinal detachment and endophthalmitis (Ti et al. 2014; Day et al. 2015). Randleman et al. (2007) report that vitreous loss occurs in 5.1% of cases for novice surgeons and decreases to 1.9% after the surgeon has completed 80 cases. Vitreoretinal surgery, another type of intraocular surgery, comprises a variety of procedures performed in the posterior part of the eye and includes retinal detachment repair, macular hole surgery and peeling of epiretinal membranes (ERM), among others. As in cataract surgery, fine motor skills are critical, in addition to procedural planning and continuous integration of perceptual information. The surgical instruments, including their lengths, differ from those used in cataract surgery, but the concept of a rotation axis through the incisional point is similar. In most countries, vitreoretinal surgery is typically performed by ophthalmologists subspecializing in this area of surgery, whereas residents often perform minor procedures, such as panretinal photocoagulation and intravitreal injections (Shah et al. 2009). Repeated exposure in the clinical setting has been the traditional way of teaching technical skills in the healthcare system. However, numerous factors, including patient safety concerns, increased efficiency demands and the constant development of new technology and new treatment modalities, coupled with an increase in patient numbers, have led to the development of alternative teaching methods. In the following sections, different theoretical approaches for the teaching of technical skills will be reviewed. This is followed by a discussion of more practical approaches, where evidence-based training strategies will be described. The acquisition of technical skills, also called motor skills, has been described by several theoretical models. One such model is the Fitts–Posner Three-Stage Theory, which is a widely accepted motor learning theory, based on distinct cognitive processes involved at different stages of skill execution (Fitts & Posner 1967). Initially, trainees are in a cognitive stage, during which they are attempting to understand what is to be performed. The next step is the associative stage, where progress slows as the trainee begins to modify movement strategies based on feedback. The last step is the autonomous stage, where motor movements are performed automatically, requiring less attentional capacity. Performance becomes more fluid with increased experience (Reznick & MacRae 2006). Another descriptive step-wise model defining the development of technical skills is the Dreyfus and Dreyfus model, in which one progresses from novice to expert through five steps (Dreyfus 2004).
See Table 1 for an overview of the theoretical models. Often, expertise has been equated with repeated, deliberate practice (Ericsson 2015). However, evidence indicates that this concept is flawed because it does not take into account individual differences (Kulasegaram et al. 2013; Macnamara et al. 2014). The number of completed procedures is not necessarily a comprehensive measure of expertise. In obstetrics, measures of initial skills have been shown to explain more of the differences between individuals than procedural volume (Epstein et al. 2013). The development of expertise is a complex process, and several factors, including working memory capacity, have been proposed to account for these individual differences. Further characterization of the individual differences that impact skill acquisition is still needed. Nevertheless, practice of technical skills remains a central part of achieving surgical proficiency, and an extensive body of evidence exists on efficient instructional strategies. Several instructional strategies have been shown to maximize the effect of training (Cook et al. 2013a; Stefanidis et al. 2014). On numerous occasions, it has been demonstrated – also for complex motor skills – that skill practice is most effective if it is structured as multiple training sessions of short duration, spaced over time (distributed training), and with variable task practice (Nicholls et al. 2016). Feedback to the trainee also impacts the learning outcome significantly (McGaghie et al. 2010). Part-task training is also recommended for some complex tasks/procedures, depending on the level of interactive elements (Spruit & Band 2014). In order to teach complex technical skills, it is essential to take the current abilities of the trainee into consideration (cognitive load awareness; cf. Table 1; Spruit & Band 2014; Nicholls et al. 2016). One widely accepted approach is proficiency-based training, which involves a process of continued training until a well-defined, predetermined skill set has been mastered (McGaghie et al. 2010). This method has been shown to be efficient for learning technical skills and ensures that all trainees reach a minimum level of competency. This approach is gaining popularity over time-based or repetition-based training, in which all trainees train for the same period of time or complete the same number of repetitions. The latter approach may lead to highly variable skill levels. Proficiency-based training requires: (1) explicit characterization of performance goals – how can proficiency be defined for this specific procedure? (2) assessment of performance – how can proficiency be measured? (3) establishment of benchmark criteria – when is proficiency reached? Mastery learning is a rigorous approach to proficiency-based training and includes baseline testing, clearly defined learning objectives, establishment of a minimum passing benchmark, formative testing (continuous feedback), advancement to the next educational unit once the predefined benchmark level is mastered, and continued practice (Cook et al. 2013c; McGaghie et al. 2014). This approach is closely related to the deliberate practice model, as defined by Ericsson (2015). It is necessary to assess performance in order to implement proficiency-based training, but assessment of performance is also critical for evaluating milestones and competencies. The latter is becoming an increasingly large part of surgical practice, especially during residency (Oetting 2009; Accreditation Council for Graduate Medical Education 2014).
Moreover, for research purposes, assessment is necessary to measure the effect of a training intervention. Several rating tools for the assessment of technical skills exist. Overall, they can be grouped into three categories: (1) human rater tools, including global rating scales and procedure-specific checklists; (2) automated assessments, for example virtual-reality simulators and motion analysis; and (3) outcomes data, including surgical complication rates. The assessment tools differ in their level of objectivity and how time-consuming they are to apply. Optimally, an assessment tool has several features: (1) validity and reliability evidence; (2) educational impact; (3) cost-effectiveness; and (4) widespread acceptance (Schuwirth & van der Vleuten 2013). In addition, some would add objectivity to this list (Gensheimer et al. 2013). Automated assessment tools are objective assessment tools, which are not influenced by any pre-existing opinions about the trainee. Anonymized video recordings of trainee performances evaluated by masked raters also represent an objective assessment method. All categories of assessment tools may be applied in both clinical and simulation-based settings. Simulation-based assessments often correlate positively with patient-related outcomes (Brydges et al. 2015). Therefore, in select cases, these assessments can replace or support those collected in the clinical setting (such as provider behaviours and patient outcomes). This possibility confers significant benefits because clinical assessments can be difficult to collect due to associated costs and logistical challenges (e.g. infrequent clinical events and nonstandardized settings). See Table 2 for a comparison between assessments performed in simulated and clinical settings. Validity evidence – scientific evidence that the metric or test measures what it sets out to measure – is a requirement for useful, sound assessments of performance. The same applies to diagnostic tools in a clinical setting – we need to know whether a tool measures what we think it measures. For example, ear thermometry has been shown not to provide exact measurements of core body temperature (Stavem et al. 1997). Similarly, we have to investigate which measurements are useful for the assessment of technical skills. Several different theoretical frameworks outline a systematic approach to evaluating validity evidence for assessment outcomes. Messick's validity framework is a widely respected approach, which has largely replaced the ‘classical’ framework consisting of different types of validity (face, content, criterion, construct and concurrent). In Messick's framework, validity evidence is collected from different sources, focusing on the intended use of the assessment. In most validity studies, measures with well-known characteristics are compared to new measures (in Messick's framework: relations with other variables). For example, the correlation between surgical experience (a traditional measure of surgical skills) and virtual-reality performance metrics (a new measure) is investigated. However, to provide a robust argument for validity, it is often relevant to include multiple sources of validity evidence (Downing 2003); see Table 3 for an overview. Unfortunately, validity evidence is often found to be limited, and in cases where the validity of performance outcomes has been investigated, most studies focus on extreme group comparisons (Cook et al. 2013d; Cook 2015).
A typical example is a comparison between performance scores for a group of medical students and a group of experienced surgeons. A difference in performance score between the two groups is interpreted as evidence of validity, based on the conception that more experience leads to superior technical skills (cf. the notion of deliberate practice). However, the difference between the groups may reflect differences other than the measurement of interest (in this case, technical skills). For example, differences in anatomical knowledge may influence the results (a confounding factor) and, therefore, lead to inaccurate conclusions. A more useful approach is the comparison of experienced surgeons to residents, who are most often similar to experienced surgeons except for their extent of surgical experience. The ability to discriminate between these two groups is a necessary, but not sufficient, component of validity evidence (Cook 2015). Another possible source of bias in validity arguments related to virtual-reality assessments is a familiarization effect. Specifically, it appears that individuals have different rates of familiarization with the virtual-reality interface, and therefore, it is crucial to include a warm-up period before collecting performance assessments. It is also important to remember that assessments are context-specific, and a validity argument may not necessarily be transferable from one environment to another; for example, a human rater assessment tool, which has shown evidence of validity when used for video-recorded surgeries, may not provide meaningful measurements of proficiency when used for the assessment of technical skills in a wet-laboratory model, even if the same type of surgery is evaluated (Schuwirth & van der Vleuten 2011). In this regard, it is important to note that classical test theory makes no distinction between different sources of error; all error variance is treated as undifferentiated random variation. Systematic variation is attributed only to variation in true scores and not to other possible sources of systematic variation, such as measurement or rater bias. Thus, the reliability coefficient increases as the proportion of error variance in test scores decreases. In practice, reliability can be estimated as test–retest reliability (comparing scores from multiple test repetitions), as inter-rater reliability, or using internal consistency methods, where the test is split into separate items or modules. One obvious limitation of classical test theory is this conception of variance as partitioned into only two parts: true-score variance and undifferentiated random variation. Generalizability theory, on the other hand, can subdivide variance into a variety of different factors (called facets), all of which impact scores. A facet is any variable factor in the test that could be meaningfully related to error variance in the defined context; for example, variation across different tasks/cases or across different raters. A fully crossed design means that all trainees complete all tasks and all raters evaluate all trainees. In generalizability theory, variance components are defined for the facets, for the object of measurement (trainees), and for interactions and residual error, and are estimated using analysis of variance (ANOVA). Once variance components have been estimated, they can be employed along with the number of levels of each facet to develop reliability-like coefficients. The levels can be changed to estimate reliability under different testing conditions, called a decision study (D-study).
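To illustrate how such a D-study can be carried out in practice, the following minimal sketch (in Python, for a fully crossed trainees-by-raters design; the variance-component values are assumptions for illustration only and are not taken from the thesis) computes reliability-like coefficients for different numbers of raters:

# Minimal D-study sketch for a fully crossed trainees x raters design.
# The variance components below would normally be estimated with ANOVA
# from real scores; the values here are assumed purely for illustration.
var_trainee = 4.0    # variance due to true differences between trainees
var_rater = 0.5      # variance due to overall rater stringency
var_residual = 2.0   # trainee-by-rater interaction plus residual error

for n_raters in (1, 2, 3, 5):
    rel_error = var_residual / n_raters                # relative error variance
    abs_error = (var_rater + var_residual) / n_raters  # absolute error variance
    g_coef = var_trainee / (var_trainee + rel_error)   # generalizability coefficient
    phi_coef = var_trainee / (var_trainee + abs_error) # dependability coefficient
    print(f"raters={n_raters}: G={g_coef:.2f}, Phi={phi_coef:.2f}")

Increasing the number of raters shrinks the rater-related error variance, which is exactly the trade-off a D-study is used to explore when choosing a feasible testing condition.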
Conceptually, reliability in generalizability theory is the trainee variance divided by the trainee variance plus the absolute or relative error variance. If the variance due to trainee differences is small and the error variances (facets) are large, the estimated generalizability will be low, and the score is less reliable. Once sources of validity and reliability evidence for the assessment metrics (e.g. a performance test) have been evaluated, the next step is to define when proficiency has been reached if the assessment is to be applied in training programmes. Benchmark criteria, or standard setting, are the proficiency criteria for passing a performance test. Implementation of proficiency-based training is dependent on the definition of proficiency criteria, which are related to the specific assessment metrics and the assessment tool. The defined criteria should answer the question: How much practice is enough? Several analytical models exist to calculate an appropriate level for the intended purpose, but currently, there is no gold standard. One proposed method is the contrasting groups' method, where the criterion is based on the intersection of test score distributions for the groups of novices and experienced surgeons (Downing & Haladyna 2009). Another proposed method uses data from experienced surgeons to ensure that trainees reach a level of automaticity; see Table 1 (Stefanidis et al. 2012). This does not mean that proficient trainees are considered to have reached the same level as the experienced surgeons, but rather it is interpreted as acquisition of an adequate level of competence on a particular training model. A third method – though not traditionally used for standard setting – involves analysis of novices' performance curves, such that the plateau on their individual performance curve is identified and used as the proficiency criterion. See Table 4 for an overview of standard setting methods. In most cases, identificatio" @default.
- W2717891286 created "2017-06-30" @default.
- W2717891286 creator A5016928264 @default.
- W2717891286 date "2017-06-01" @default.
- W2717891286 modified "2023-09-26" @default.
- W2717891286 title "Intraocular surgery - assessment and transfer of skills using a virtual-reality simulator" @default.
- W2717891286 cites W1251689921 @default.
- W2717891286 cites W1534678246 @default.
- W2717891286 cites W1610280190 @default.
- W2717891286 cites W1687253533 @default.
- W2717891286 cites W1839856391 @default.
- W2717891286 cites W1840100415 @default.
- W2717891286 cites W1867929759 @default.
- W2717891286 cites W1964392332 @default.
- W2717891286 cites W1965372977 @default.
- W2717891286 cites W1967643317 @default.
- W2717891286 cites W1968188509 @default.
- W2717891286 cites W1968215755 @default.
- W2717891286 cites W1969932196 @default.
- W2717891286 cites W1975201651 @default.
- W2717891286 cites W1977813030 @default.
- W2717891286 cites W1978344898 @default.
- W2717891286 cites W1978831226 @default.
- W2717891286 cites W1979717607 @default.
- W2717891286 cites W1981735799 @default.
- W2717891286 cites W1991635648 @default.
- W2717891286 cites W1992142391 @default.
- W2717891286 cites W1992387380 @default.
- W2717891286 cites W1995357055 @default.
- W2717891286 cites W1995689521 @default.
- W2717891286 cites W1996726870 @default.
- W2717891286 cites W1997163183 @default.
- W2717891286 cites W1999834265 @default.
- W2717891286 cites W2001664834 @default.
- W2717891286 cites W2002440178 @default.
- W2717891286 cites W2002800579 @default.
- W2717891286 cites W2008398398 @default.
- W2717891286 cites W2010259736 @default.
- W2717891286 cites W2023058304 @default.
- W2717891286 cites W2028526180 @default.
- W2717891286 cites W2029431906 @default.
- W2717891286 cites W2031055662 @default.
- W2717891286 cites W2031915915 @default.
- W2717891286 cites W2032623393 @default.
- W2717891286 cites W2036966949 @default.
- W2717891286 cites W2037164827 @default.
- W2717891286 cites W2040995048 @default.
- W2717891286 cites W2043857563 @default.
- W2717891286 cites W2054526187 @default.
- W2717891286 cites W2057104375 @default.
- W2717891286 cites W2064970988 @default.
- W2717891286 cites W2069431695 @default.
- W2717891286 cites W2069907313 @default.
- W2717891286 cites W2071818842 @default.
- W2717891286 cites W2074866763 @default.
- W2717891286 cites W2075522449 @default.
- W2717891286 cites W2076758805 @default.
- W2717891286 cites W2077273459 @default.
- W2717891286 cites W2079312004 @default.
- W2717891286 cites W2085487772 @default.
- W2717891286 cites W2086863187 @default.
- W2717891286 cites W2118048099 @default.
- W2717891286 cites W2123457245 @default.
- W2717891286 cites W2130990519 @default.
- W2717891286 cites W2132739296 @default.
- W2717891286 cites W2139029631 @default.
- W2717891286 cites W2140246832 @default.
- W2717891286 cites W2155252238 @default.
- W2717891286 cites W2155690736 @default.
- W2717891286 cites W2157136692 @default.
- W2717891286 cites W2159075585 @default.
- W2717891286 cites W2161254436 @default.
- W2717891286 cites W2163580545 @default.
- W2717891286 cites W2166265073 @default.
- W2717891286 cites W2169217212 @default.
- W2717891286 cites W2170323958 @default.
- W2717891286 cites W2216538792 @default.
- W2717891286 cites W2220989734 @default.
- W2717891286 cites W2322558727 @default.
- W2717891286 cites W2329267618 @default.
- W2717891286 cites W2330566212 @default.
- W2717891286 cites W2346834216 @default.
- W2717891286 cites W2375309923 @default.
- W2717891286 cites W2403233484 @default.
- W2717891286 cites W2472330306 @default.
- W2717891286 cites W2478186602 @default.
- W2717891286 cites W2528971263 @default.
- W2717891286 cites W2561811553 @default.
- W2717891286 cites W2589269133 @default.
- W2717891286 cites W3124307939 @default.
- W2717891286 cites W400846935 @default.
- W2717891286 cites W4231117224 @default.
- W2717891286 cites W1805237832 @default.
- W2717891286 doi "https://doi.org/10.1111/aos.13505" @default.
- W2717891286 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/28626885" @default.
- W2717891286 hasPublicationYear "2017" @default.
- W2717891286 type Work @default.
- W2717891286 sameAs 2717891286 @default.