Matches in SemOpenAlex for { <https://semopenalex.org/work/W2897594636> ?p ?o ?g. }
Showing items 1 to 75 of 75, with 100 items per page.
- W2897594636 endingPage "493" @default.
- W2897594636 startingPage "491" @default.
- W2897594636 abstract "Integrating robotic surgery into resident training is challenging. The robotic environment requires reconsideration of the apprenticeship model for surgical training and development of new curricula and instructional approaches to ensure skill acquisition. The surgical literature has noted the need to improve resident training in robotic surgery. This article highlights components of the robotic teaching environment that limit the efficacy of current training models. By targeting these components, educators can begin to develop more effective curricula and instructional strategies for surgical residents.
The robotic learning environment is complex. It incorporates a physically distant operative field, separating the trainer and the trainee; it makes the surgeon less dependent on assistance from a resident; and it necessitates acquisition of perceptual expertise without tactile information. At teaching hospitals, residents are exposed to an increasing number of robotic procedures, yet often as observers rather than participants. This has resulted in an emerging training gap. By considering relevant cognitive learning theories, we can guide surgical educators toward new approaches to reduce this gap.
While recent literature has highlighted the feasibility and safety of implementing robotic curricula in residency, few studies have evaluated their efficacy or described curricular components in detail.1 Surgical educators need a deep understanding of the robotic environment to appropriately evaluate the efficacy of resident integration in the operating room.
Robotic technology provides independence for surgeons. Using the robot, 1 surgeon controls 4 robotic arms and manipulates the camera independently, decreasing the need for residents as assistants. While beneficial to hospitals with limited staffing, this aspect of robotic surgery presents challenges in teaching settings. Typically, in open or laparoscopic operations, residents obtain technical skills as surgical assistants, providing the retraction and tissue manipulation essential for creating a functional operative field. This experience allows learners to understand how the surgeon's movements (degree of tension or retraction) affect the operative field. Residents stand across from, or adjacent to, the attending surgeon throughout the procedure, often with arms entangled in an effort to create adequate visualization. During each operative step, residents directly observe the attending physician's physical movements, including minute details of individual digit placement.2
Robotic surgery technology is entirely different. It creates a physical distance between the operating surgeon, the operative field, and any assistants or learners. Residents are positioned at the bedside assisting with instrument exchange, or seated at a console distant from the sterile operative field. They cannot see the attending's physical movements and cannot appreciate when the attending surgeon “clutches,” repositioning the hands to maximize economy of motion. Residents also are unaware when the attending reaches for the foot pedal to swap robotic arms or activate electrocautery. Residents are limited to observing the movements of the robotic arms, either extracorporeally from the bedside or intracorporeally from a console or monitor. To learn to perform the movements as they appear on the screen, the resident must recreate the movements of the surgeon seated at the console.
In contrast, in open and laparoscopic surgery, the operating surgeon's movements are fully visible. In the robotic environment, they cannot be fully appreciated. How will residents learn which physical movements at the console translate into the actions they observe on the screen?
The frequent experiential instruction that occurs in surgical training (described by Zemel and Koschmann2 as the combination of instructional demonstration, creation of referential practices, and embodied procedures) is complicated by a physically separated operative field. Residents typically gain operative experience in robotic cases by watching the intracorporeal images on the screen and listening to the attending surgeon explain what he or she is doing. The image on the screen rarely portrays the entire operative field, limiting what the resident can see and learn. Increased magnification from robotic technology frequently results in 1 or 2 of the robotic arms no longer being visible on the screen. An observing resident therefore may not have access to all the information necessary to understand critical principles of robotic surgical technique.
Today's robotic technology lacks haptic feedback, requiring robotic surgeons to rely entirely on visual processing to interpret what is happening in the operative field. Many expert robotic surgeons report that, despite the lack of tactile feedback, they can still “feel what they see.”3 Nonrobotic surgeons can relate. Consider this scenario: without touching the instruments, the attending surgeon calls out to the resident, “Careful! You're pulling too much.” Right then, the tissue tears, and the resident relaxes retraction to avoid further injury. How could the attending know this? How did the attending feel too much tension? By watching the changes in tissue response as the resident's instrument pulls, expert surgeons can “feel” simply by observing images on the screen. But how is this process conveyed to residents? Once educators have a common language to describe the components of this skill, additional efforts can focus on the best teaching methods to ensure its efficient and effective acquisition.
To address this challenge, we draw from relevant cognitive science theories. Perceptual learning describes experience-induced modifications in the way we extract perceived information. Using a continuous perception-action cycle, learners develop goal-driven behavior known as perception for action.4 Professional vision describes practices that help novices build disciplined ways of seeing events and understanding their implications for practice.5 Using perception for action and professional vision, learners gain perceptual expertise, often seen as the logical endpoint of the normal trajectory of learning (perceptual learning) in a domain-specific environment. Studies support the notion that perceptual expertise is gained with surgical experience and correlates with skill mastery.6,7 Given the lack of haptic feedback and the dependence on visual information to guide operative decisions in robotic surgery, understanding how to develop perception for action is essential for robotic skills mastery.
Perceptual learning is common in surgical training: residents develop technical and cognitive skills reciprocally and in situated context. Board certification, as regulated by the American Board of Surgery, requires completion of a defined number of surgical cases (or situated contexts). Although perceptual learning is widely accepted in surgical education, focused instruction using this framework has been absent. How can robotic surgeons articulate their perceptual expertise?
Researchers such as Koschmann et al15,16 and Cope et al17 have improved our understanding of how surgeons express what they are seeing during operative procedures. To advance their work, we need to probe surgeons' perceptual expertise. Language and gestures are essential for instruction, and they develop within context. We believe that through study of the robotic context, its language, and its associated gestures, components of this skill can be elicited from surgeons. Ensuring development of perceptual expertise will prepare future surgeons for open, laparoscopic, endoscopic, or robotic approaches to surgery. Recommended next steps for the medical education community are shown in the box.
To investigate expert surgeons' verbal and nonverbal language of perceptual expertise in robotics, we plan to use microanalysis, a qualitative approach in education research that identifies patterns and themes within the actions taking place in an environment.18 In a prior microanalysis of intracorporeal robotic video, we revealed features of the robotic environment not previously appreciated.19 By combining microanalysis of robotic experts describing on-screen activities with semiotics (the investigation of how meaning is created and communicated), we anticipate generating a verbal and nonverbal language to describe expert robotic surgeons' specific on-screen perceptions for action. Revealing the foundational components of perceptual expertise in surgical practice will allow for investigation and development of instructional approaches using this framework.
Surgical residents must learn the surgical techniques necessary to perform safe operations using a range of tools and technologies. Revealing how robotic surgery experts use words, gestures, and vocalizations to communicate what they can only see, and how elements of their perceptual expertise guide intraoperative decision making, will allow educators to develop methods to cultivate perceptual expertise. Addressing perceptual expertise in surgical training will help ensure that trainees acquire the fundamental skills to navigate a rapidly evolving surgical environment." @default.
- W2897594636 created "2018-10-26" @default.
- W2897594636 creator A5004600009 @default.
- W2897594636 creator A5068585796 @default.
- W2897594636 creator A5075917730 @default.
- W2897594636 creator A5076794085 @default.
- W2897594636 date "2018-10-01" @default.
- W2897594636 modified "2023-09-27" @default.
- W2897594636 title "Is Robotic Surgery Highlighting Critical Gaps in Resident Training?" @default.
- W2897594636 cites W1843236490 @default.
- W2897594636 cites W1908724414 @default.
- W2897594636 cites W1968517452 @default.
- W2897594636 cites W2001416489 @default.
- W2897594636 cites W2048825155 @default.
- W2897594636 cites W2093452349 @default.
- W2897594636 cites W2138205622 @default.
- W2897594636 cites W2168645652 @default.
- W2897594636 cites W2313112108 @default.
- W2897594636 cites W2379759383 @default.
- W2897594636 cites W2560063151 @default.
- W2897594636 cites W2782188018 @default.
- W2897594636 cites W4252535617 @default.
- W2897594636 doi "https://doi.org/10.4300/jgme-d-17-00802.1" @default.
- W2897594636 hasPubMedCentralId "https://www.ncbi.nlm.nih.gov/pmc/articles/6194899" @default.
- W2897594636 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/30377478" @default.
- W2897594636 hasPublicationYear "2018" @default.
- W2897594636 type Work @default.
- W2897594636 sameAs 2897594636 @default.
- W2897594636 citedByCount "6" @default.
- W2897594636 countsByYear W28975946362019 @default.
- W2897594636 countsByYear W28975946362021 @default.
- W2897594636 countsByYear W28975946362022 @default.
- W2897594636 crossrefType "journal-article" @default.
- W2897594636 hasAuthorship W2897594636A5004600009 @default.
- W2897594636 hasAuthorship W2897594636A5068585796 @default.
- W2897594636 hasAuthorship W2897594636A5075917730 @default.
- W2897594636 hasAuthorship W2897594636A5076794085 @default.
- W2897594636 hasBestOaLocation W28975946361 @default.
- W2897594636 hasConcept C17744445 @default.
- W2897594636 hasConcept C19527891 @default.
- W2897594636 hasConcept C199539241 @default.
- W2897594636 hasConcept C2779473830 @default.
- W2897594636 hasConcept C509550671 @default.
- W2897594636 hasConcept C71924100 @default.
- W2897594636 hasConceptScore W2897594636C17744445 @default.
- W2897594636 hasConceptScore W2897594636C19527891 @default.
- W2897594636 hasConceptScore W2897594636C199539241 @default.
- W2897594636 hasConceptScore W2897594636C2779473830 @default.
- W2897594636 hasConceptScore W2897594636C509550671 @default.
- W2897594636 hasConceptScore W2897594636C71924100 @default.
- W2897594636 hasIssue "5" @default.
- W2897594636 hasLocation W28975946361 @default.
- W2897594636 hasLocation W28975946362 @default.
- W2897594636 hasLocation W28975946363 @default.
- W2897594636 hasLocation W28975946364 @default.
- W2897594636 hasLocation W28975946365 @default.
- W2897594636 hasOpenAccess W2897594636 @default.
- W2897594636 hasPrimaryLocation W28975946361 @default.
- W2897594636 hasRelatedWork W1965802029 @default.
- W2897594636 hasRelatedWork W1999407557 @default.
- W2897594636 hasRelatedWork W2006308171 @default.
- W2897594636 hasRelatedWork W2104151291 @default.
- W2897594636 hasRelatedWork W2319790315 @default.
- W2897594636 hasRelatedWork W2899084033 @default.
- W2897594636 hasRelatedWork W2972513998 @default.
- W2897594636 hasRelatedWork W3113343617 @default.
- W2897594636 hasRelatedWork W4381249388 @default.
- W2897594636 hasRelatedWork W4386157523 @default.
- W2897594636 hasVolume "10" @default.
- W2897594636 isParatext "false" @default.
- W2897594636 isRetracted "false" @default.
- W2897594636 magId "2897594636" @default.
- W2897594636 workType "article" @default.
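
The listing above is the result of the SPARQL basic graph pattern shown in the header, { <https://semopenalex.org/work/W2897594636> ?p ?o ?g. }. Below is a minimal Python sketch of how such a listing could be reproduced programmatically. It assumes SemOpenAlex's public SPARQL endpoint lives at https://semopenalex.org/sparql and uses the standard SPARQL 1.1 Protocol (query passed as a GET parameter, JSON results requested via the Accept header); the graph variable ?g is dropped for simplicity.

    import requests

    # Assumed SemOpenAlex SPARQL endpoint; adjust if the deployment differs.
    ENDPOINT = "https://semopenalex.org/sparql"

    # Ask for every predicate/object pair attached to the work,
    # mirroring the pattern in the header (without the graph term).
    QUERY = """
    SELECT ?p ?o WHERE {
      <https://semopenalex.org/work/W2897594636> ?p ?o .
    }
    """

    response = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=30,
    )
    response.raise_for_status()

    # Print each match in roughly the same "predicate object" form as the listing.
    for binding in response.json()["results"]["bindings"]:
        print(binding["p"]["value"], binding["o"]["value"])

Because the work retains its OpenAlex identifier (W2897594636), the same metadata can also be fetched as JSON from the OpenAlex REST API at https://api.openalex.org/works/W2897594636.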