Matches in SemOpenAlex for { <https://semopenalex.org/work/W2996404792> ?p ?o ?g. }
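As a hedged illustration (not part of the result listing), the same property-object pairs could be retrieved programmatically. The sketch below assumes that SemOpenAlex exposes a public SPARQL endpoint at https://semopenalex.org/sparql and that it speaks the standard SPARQL protocol with JSON results; the named-graph variable ?g from the pattern above is dropped for simplicity.

```python
# Hedged sketch, not from the original record: fetch the property/object
# pairs for W2996404792. The endpoint URL below is an assumption.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2996404792> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Standard SPARQL JSON results: one binding per matched triple.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

Any SPARQL client library could be substituted for requests; only the endpoint URL and the result format are assumed here.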
Showing items 1 to 87 of 87, with 100 items per page.
- W2996404792 endingPage "7" @default.
- W2996404792 startingPage "5" @default.
- W2996404792 abstract "Go is an ancient board game in which two players, by placing “stones” on a square grid, aim to surround more territory than the opponent. It was a pivotal moment in the history of humankind when AlphaGo Master, a computer program developed by DeepMind Technologies from the United Kingdom, defeated professional Go player Ke Jie in three games of Go during the 2017 Future of Go Summit in Wuzhen, China. At the time of the match, Ke Jie was the number one player as ranked by the Chinese Weiqi Association, the Japan Go Association, and the Korea Baduk Association, and was considered to be the best Go player in the world. Perhaps less dramatic, but equally important, are the moments when Boston Dynamics’ bipedal humanoid robot Atlas picked up a box from the ground and then moved it onto a shelf, or when another humanoid robot, Honda’s Asimo (for “Advanced Step in Innovative MObility”), walked to a table, poured juice from a thermos into a cup, and then served the drink. These apparently ordinary tasks, which utilize only low-level sensorimotor skills, actually require more computational resources than do high-level reasoning tasks like playing chess. This is an observation made decades ago by researchers working on artificial intelligence (AI) and robotics, and is termed “Moravec’s paradox” (Moravec, 1990). Robots with artificial intelligence and sensorimotor skills are surpassing humans in the performance of many tasks and doing so at a surprising pace. Many analyses have thus been done to identify the human jobs that are at risk of being replaced by robots (Frey and Osborne, 2013; Chui et al., 2016; Berriman and Hawksworth, 2017; Makridakis, 2017). At least according to one analysis, jobs in the wholesale and retail trades are at the highest risk, while those in education are at the lowest (Berriman and Hawksworth, 2017). It appears that we anatomy educators are safe for now, or are we? Machines that can teach have been around for almost a hundred years, but it was not until the recent advent of deep learning models based on artificial neural networks that these machines could be called intelligent. The teaching machine that Sidney Pressey of The Ohio State University designed in 1924 looked like a typewriter (Fry, 1960). It showed a question through a window to the student, who then answered the question by pressing a key corresponding to the answer that the student chose. Through a mechanical process, the machine recognized whether the answer was correct. If it was, the machine then moved to a new question. The machine could not be considered intelligent since it could only “recognize” one kind of data input (the pressing of a key) and then “respond” mechanically. The new “intelligent” machines are very different. They can take in complex information like human speech, static and dynamic images, or financial data and transform this information into more abstract and composite responses. They are called “deep” learning systems because of the many layers of transformation between input and output, and have diverse applications in speech recognition, computer vision, medical image analysis, autonomous driving, board games, financial fraud detection, etc. Given this level of development, can intelligent machines teach anatomy? Can they replace anatomy educators? Until quite recently, these questions would have seemed nonsensical, since no attempt had been made to produce such machines.
Most intelligent tutoring systems (ITSs) are in disciplines where content materials are digital or can be easily digitized, for example, mathematics (Beal et al., 1998; Craig et al., 2013), computer programming (Mitrovic, 2003), engineering (Zakharov et al., 2005), reading (Heffernan et al., 2006), or music (Miletto et al., 2005). Anatomy, however, deals with the internal structure of organisms. But even that has been digitized to a high degree of accuracy (Ackerman, 1998; Park et al., 2005; Tang et al., 2010). Significant pedagogical innovations have been built upon these digital models, such as virtual reality, augmented reality (Trelease, 2016; Moro et al., 2017), and three-dimensional (3D) printed models (McMenamin et al., 2014; Mogali et al., 2018). But these are just tools. They still need a brain, the anatomy educator, to direct the use of these tools in appropriate and effective ways. However, the recent efforts to train chatbots to teach anatomy (Lam et al., 2018; See et al., 2019) are changing all that. They bring us one step closer to anatomy educators based on AI. Chatbots are also called “conversational agents.” They are computer software programs that can engage a human in a conversation through auditory or textual means. Examples that many people have experience with are Apple’s Siri or Microsoft’s Cortana virtual assistants. Chatbots’ potential in engaging students in educational activities has been recognized (Bii, 2013), and explored in different disciplines: physics (Kumar et al., 2007), computer science (Suleman et al., 2016), psychology (Hayashi, 2016), and English as a foreign language (Fryer et al., 2017). These chatbots, however, have only brains and no bodies, since they are basically computer software programs which interact with students using text that is spoken or visually displayed. They do not have the means to dissect a real human cadaver or point to structures on specimens to teach students, as human anatomy educators do. But these are physical barriers that are easy, though costly at this time, to overcome. There are already robots that can assist in dissections, such as the da Vinci Robotic Assisted Surgical System (Intuitive Surgical Corp., Sunnyvale, CA). There are intelligent machines that can visually recognize various entities, such as medical images (Kahn, 2019), fruits (Shahin et al., 2001), and roads (Zhang and Xu, 2019). A recent review of literature from a variety of diagnostic imaging tests (from radiology, ultrasonography, and pathology) on the diagnostic performance of deep learning systems revealed that outcomes of AI algorithms seem to be equivalent to those of health care professionals (Liu et al., 2019). In response to the explosion of AI in radiology, the Radiological Society of North America (RSNA) launched in 2019 a new journal, Radiology: Artificial Intelligence, to assure that applications of machine learning and AI in the field of imaging are based on solid peer-reviewed research principles (Kahn, 2019). Visual recognition of anatomical structures during the dissection of a human cadaver also seems to be within reach, especially when coupled with three-dimensional medical imaging (Masamune and Hong, 2011). The interactions between chatbots and students can potentially go beyond straightforward question-and-answer sessions, which some anatomy education software has been able to provide for decades. Chatbots can potentially engage students in pedagogically sound conversations that are based on educational principles and frameworks.
Intelligent tutoring systems consist of four modules: the expert knowledge module, the student module, the tutoring module, and the user interface module (Nwana, 1990; Freedman, 2000). Among those four modules, the tutoring module is the one that governs the strategies and actions of an ITS. For example, it determines when and how to give guidance and feedback to students and when to move on to a different topic. In anatomy teaching around cadavers or specimens, or even in clinical teaching encounters, there are structured pedagogical approaches that can potentially provide a framework for programming the tutoring module in ITSs, such as the one-minute preceptor (Chan and Wiseman, 2011; Chan and Sharma, 2014; Chan et al., 2015) and the SNAPPS (Summarize, Narrow down, Analyze, Probe, Plan, Select) model (Wolpaw et al., 2003; Irby and Bowen, 2004; Chacko et al., 2007). Chatbots, at least the ones under development, seem to be able to achieve just a few of these tasks, such as helping students learn by engaging in pedagogically sound conversation, and assessing students and providing feedback. Most other tasks seem to be beyond what intelligent machines can achieve, especially those involving a human touch, including fostering the development of nontraditional, discipline-independent skills, being a role model for professionalism, or mentoring new teachers. However, AI in education is still in its infancy. As time progresses, artificial intelligence systems should become better and more flexible by incorporating changes in the academic culture and learning preferences of new generations of students (Baker, 2016). Robots are developing their human touch too. Robots that can recognize emotional states are currently in development (Azuar et al., 2019; Yu and Tapus, 2019). Only time will tell what robots with deep learning ability will be capable of, and whether they will be artificially intelligent while we anatomists remain naturally stupid. It is too early to say that anatomy educators are safe from losing their jobs to robots. However, this is not all bad news. Chatbots, if well developed, can potentially relieve anatomy educators from the repetitive and mundane parts of their work. Collaboration, instead of competition, between AI and anatomy educators may be a welcome result. Perhaps it does not matter who or what the teachers are. As long as the students we teach and the patients they take care of remain human, humanity is at the center of medical and anatomy education." @default.
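As a purely illustrative aside (not part of the SemOpenAlex record above), the four-module intelligent tutoring system architecture named in the abstract (expert knowledge, student, tutoring, and user interface modules) can be sketched as a minimal text-based tutor. Every class, function, and question below is hypothetical and is not drawn from the cited literature; the sketch only fixes the division of responsibilities the abstract describes, with the tutoring module deciding what feedback to give and when to move on.

```python
# Illustrative sketch only: the four ITS modules named in the abstract,
# wired into a minimal question-and-feedback loop. All names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class ExpertKnowledgeModule:
    """Holds the domain content the tutor can draw on."""
    facts: dict  # question -> expected answer


@dataclass
class StudentModule:
    """Tracks what the learner has answered so far."""
    history: list = field(default_factory=list)  # (question, correct?) pairs

    def record(self, question: str, correct: bool) -> None:
        self.history.append((question, correct))


class TutoringModule:
    """Governs strategy: what feedback to give and when to move on."""

    def feedback(self, correct: bool, expected: str) -> str:
        if correct:
            return "Correct. Let's move on to the next structure."
        return f"Not quite. The expected answer is: {expected}."


class UserInterfaceModule:
    """Text-based front end, in the spirit of a chatbot."""

    def ask(self, question: str) -> str:
        return input(question + " ")

    def show(self, message: str) -> None:
        print(message)


def run_session(expert, student, tutor, ui):
    # One pass over the expert content; the tutor reacts to each answer.
    for question, expected in expert.facts.items():
        reply = ui.ask(question)
        correct = reply.strip().lower() == expected.lower()
        student.record(question, correct)
        ui.show(tutor.feedback(correct, expected))


# Example wiring (hypothetical content):
# run_session(
#     ExpertKnowledgeModule({"Which nerve innervates the diaphragm?": "phrenic nerve"}),
#     StudentModule(), TutoringModule(), UserInterfaceModule(),
# )
```

A real system would replace the exact-match check with natural-language understanding and would feed the student module's history back into the tutoring module's strategy; the sketch only shows where each responsibility lives.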
- W2996404792 created "2019-12-26" @default.
- W2996404792 creator A5019352139 @default.
- W2996404792 creator A5071543991 @default.
- W2996404792 date "2020-01-01" @default.
- W2996404792 modified "2023-10-07" @default.
- W2996404792 title "Artificial Intelligence or Natural Stupidity? Deep Learning or Superficial Teaching?" @default.
- W2996404792 cites W1522156808 @default.
- W2996404792 cites W1836624612 @default.
- W2996404792 cites W2015332284 @default.
- W2996404792 cites W2029746157 @default.
- W2996404792 cites W2074199497 @default.
- W2996404792 cites W2088290987 @default.
- W2996404792 cites W2089146866 @default.
- W2996404792 cites W2092909411 @default.
- W2996404792 cites W2095854295 @default.
- W2996404792 cites W2106649881 @default.
- W2996404792 cites W2133313001 @default.
- W2996404792 cites W2144112108 @default.
- W2996404792 cites W2147422707 @default.
- W2996404792 cites W2149993031 @default.
- W2996404792 cites W2163316355 @default.
- W2996404792 cites W2166657119 @default.
- W2996404792 cites W2339255380 @default.
- W2996404792 cites W2364116629 @default.
- W2996404792 cites W2414246335 @default.
- W2996404792 cites W2460984000 @default.
- W2996404792 cites W2582336686 @default.
- W2996404792 cites W2605897185 @default.
- W2996404792 cites W2614461995 @default.
- W2996404792 cites W2617381106 @default.
- W2996404792 cites W2914783141 @default.
- W2996404792 cites W2976398475 @default.
- W2996404792 cites W2991575677 @default.
- W2996404792 doi "https://doi.org/10.1002/ase.1936" @default.
- W2996404792 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/31837097" @default.
- W2996404792 hasPublicationYear "2020" @default.
- W2996404792 type Work @default.
- W2996404792 sameAs 2996404792 @default.
- W2996404792 citedByCount "1" @default.
- W2996404792 countsByYear W29964047922023 @default.
- W2996404792 crossrefType "journal-article" @default.
- W2996404792 hasAuthorship W2996404792A5019352139 @default.
- W2996404792 hasAuthorship W2996404792A5071543991 @default.
- W2996404792 hasBestOaLocation W29964047921 @default.
- W2996404792 hasConcept C138496976 @default.
- W2996404792 hasConcept C145420912 @default.
- W2996404792 hasConcept C154945302 @default.
- W2996404792 hasConcept C15744967 @default.
- W2996404792 hasConcept C166957645 @default.
- W2996404792 hasConcept C188147891 @default.
- W2996404792 hasConcept C2776608160 @default.
- W2996404792 hasConcept C2776961235 @default.
- W2996404792 hasConcept C41008148 @default.
- W2996404792 hasConcept C95457728 @default.
- W2996404792 hasConceptScore W2996404792C138496976 @default.
- W2996404792 hasConceptScore W2996404792C145420912 @default.
- W2996404792 hasConceptScore W2996404792C154945302 @default.
- W2996404792 hasConceptScore W2996404792C15744967 @default.
- W2996404792 hasConceptScore W2996404792C166957645 @default.
- W2996404792 hasConceptScore W2996404792C188147891 @default.
- W2996404792 hasConceptScore W2996404792C2776608160 @default.
- W2996404792 hasConceptScore W2996404792C2776961235 @default.
- W2996404792 hasConceptScore W2996404792C41008148 @default.
- W2996404792 hasConceptScore W2996404792C95457728 @default.
- W2996404792 hasIssue "1" @default.
- W2996404792 hasLocation W29964047921 @default.
- W2996404792 hasLocation W29964047922 @default.
- W2996404792 hasOpenAccess W2996404792 @default.
- W2996404792 hasPrimaryLocation W29964047921 @default.
- W2996404792 hasRelatedWork W1521181842 @default.
- W2996404792 hasRelatedWork W24234487 @default.
- W2996404792 hasRelatedWork W2748952813 @default.
- W2996404792 hasRelatedWork W2899084033 @default.
- W2996404792 hasRelatedWork W3114974104 @default.
- W2996404792 hasRelatedWork W3201396810 @default.
- W2996404792 hasRelatedWork W4210705208 @default.
- W2996404792 hasRelatedWork W4309491373 @default.
- W2996404792 hasRelatedWork W4367173299 @default.
- W2996404792 hasRelatedWork W4385380264 @default.
- W2996404792 hasVolume "13" @default.
- W2996404792 isParatext "false" @default.
- W2996404792 isRetracted "false" @default.
- W2996404792 magId "2996404792" @default.
- W2996404792 workType "article" @default.