Matches in SemOpenAlex for { <https://semopenalex.org/work/W3023711522> ?p ?o ?g. }
- W3023711522 endingPage "384" @default.
- W3023711522 startingPage "373" @default.
- W3023711522 abstract "As robots become increasingly present in human society, considerable gaps remain between expectations for the social roles these robots might play and their actual abilities. Research examining social cognition when interacting with robots offers a promising avenue for understanding how best to introduce robots to complex social settings, such as in schools, hospitals, and at home. Thanks to methodological advances in human neuroscience, such as mobile neuroimaging, human–robot interaction research is moving out of the laboratory and into the real world. Artificial intelligence advances have led to robots endowed with increasingly sophisticated social abilities. These machines speak to our innate desire to perceive social cues in the environment, as well as the promise of robots enhancing our daily lives. However, a strong mismatch still exists between our expectations and the reality of social robots. We argue that careful delineation of the neurocognitive mechanisms supporting human–robot interaction will enable us to gather insights critical for optimising social encounters between humans and robots. To achieve this, the field must incorporate human neuroscience tools including mobile neuroimaging to explore long-term, embodied human–robot interaction in situ. New analytical neuroimaging approaches will enable characterisation of social cognition representations on a finer scale using sensitive and appropriate categorical comparisons (human, animal, tool, or object). The future of social robotics is undeniably exciting, and insights from human neuroscience research will bring us closer to interacting and collaborating with socially sophisticated robots.
Human–robot interaction (see Glossary) is a young field currently in a phase of unrest. Since the development of KISMET, one of the first social robots, in the MIT Media Lab in the late 1990s, significant progress has been made towards engineering robots capable of engaging humans on a social level. Robots that respond to and trigger human emotions not only enable closer human–machine collaboration, but can also spur human users to develop long-term social bonds with these agents. While the development of increasingly innovative and socially capable robots has advanced considerably over the past decade or so, some have suggested that the field is approaching a social robotics winter. Referencing the period of disillusionment following escalating hype surrounding artificial intelligence [1.Natale S. Ballatore A. Imagining the thinking machine.Convergence Int. J. Res. New Media Technol. 2020; 26: 3-18Crossref Scopus (52) Google Scholar], the still-limited social repertoire of even the most advanced embodied robots calls into question the proclaimed ‘rise of the social robots’ [2.Tulli S.
et al.Great expectations & aborted business initiatives: The paradox of social robot between research and industry.CEUR Workshop Proceedings. 2019; 2491: 1-10Google Scholar,3.Campa R. The rise of social robots: A review of the recent literature.J. Evol. Technol. 2016; 26: 106-113Google Scholar]. With robots failing to deliver on expectations, social interaction has been named one of the ten grand challenges the field of robotics is now facing [4.Yang G.Z. et al.The grand challenges of science robotics.Sci. Robot. 2018; 3Crossref Scopus (555) Google Scholar]. To facilitate progress toward this endeavour, the rich literature of cognitive neuroscience offers vital insights into human social behaviour, not only on a surface level, but also relating to underlying functional and biological mechanisms [5.Hortensius R. Cross E.S. From automata to animate beings: the scope and limits of attributing socialness to artificial agents: Socialness attribution and artificial agents.Ann. N. Y. Acad. Sci. 2018; 1426: 93-110Crossref Scopus (36) Google Scholar, 6.Agnieszka Wykowska et al.Embodied artificial agents for understanding human social cognition.Philos. Trans. R. Soc. B Biol. Sci. 2016; 371: 20150375Crossref PubMed Scopus (98) Google Scholar, 7.Chaminade T. Cheng G. Social cognitive neuroscience and humanoid robotics.J. Physiol. Paris. 2009; 103: 286-295Crossref PubMed Scopus (73) Google Scholar]. Both human–robot interaction researchers and neuroscientists working with robots converge in their interest in facilitating smooth and successful social encounters between robots and humans. This joint effort should ultimately enable society at large to take advantage of the often-heralded potential of robots to provide economical care, company, and coaching. 
In this opinion article, we argue that studying the human brain when we perceive and interact with robots will provide insights for a clearer and deeper understanding of the human side of human–robot interaction, and will thus set the stage for a social robotics spring. Our focus on the human side of these interactions, including consideration of the constraints of social cognition, serves to highlight what recent advances in human neuroscience, in terms of method and theory, can contribute to fluent human–robot encounters. The focus of the majority of past studies has been the passive perception of other agents. While this work provides a first step towards characterising social interactions, a focus on perception alone neglects the rich, complex, and dynamic nature of behaviours that unfold during social exchanges in the real world. How can social neuroscience further our understanding of not only perception but also of dynamic relationships with robots? These insights should help explain how people view and treat these artificial agents in relation to humans, pets and other animals, and tools and objects. Moreover, answers to these questions will help us to understand and support resulting societal changes in the domains of care, education, ethics, and law. In reflecting on the neurocognitive machinery that supports human–robot interactions, we suggest that focusing on representations of social cognition and how these change during actual and sustained interactions with physically present robots will be important. Further, we argue that minimally invasive mobile neuroimaging techniques offer exceptional promise for deepening our understanding of the human side of human–robot interaction. These methods will accelerate human–robot interaction research by incorporating social dimensions into our exchanges with these machines, thus generating crucial insights helpful in meeting the grand challenge of creating truly social robots.
After all, roboticists, neuroscientists, and robots will all benefit from an improved understanding of human social cognition in an age of robots [5.Hortensius R. Cross E.S. From automata to animate beings: the scope and limits of attributing socialness to artificial agents: Socialness attribution and artificial agents.Ann. N. Y. Acad. Sci. 2018; 1426: 93-110Crossref Scopus (36) Google Scholar,7.Chaminade T. Cheng G. Social cognitive neuroscience and humanoid robotics.J. Physiol. Paris. 2009; 103: 286-295Crossref PubMed Scopus (73) Google Scholar,8.Wiese E. et al.Robots as intentional agents: Using neuroscientific methods to make robots appear more social.Front. Psychol. 2017; 8Crossref Scopus (130) Google Scholar]. Human fascination with creating a mechanical self dates back to antiquity, with writers in ancient Greece and ancient China conjuring humanlike automata to serve as workers and servants [9.Broadbent E. Interactions with robots: The truths we reveal about ourselves.Annu. Rev. Psychol. 2017; 68: 627-652Crossref PubMed Scopus (223) Google Scholar]. In the past century, the type of automaton that has most captured the human imagination (and research and development investment) is the robot, with some contemporary models edging closer to the fictionalised ideals that first appeared centuries ago. Concurrent with advances in robotics technology has been the advent and rapid development of human brain imaging technology. This technology has been vital in developing our understanding of the neurocognitive mechanisms that support social behaviour among humans. More recently, the fields of human–robot interaction and neuroscience have begun to intersect, providing new vistas on social cognition during interactions with social robots, with seminal studies investigating motor resonance, action observation, joint attention, and empathy felt towards robots.
These studies showcase the diversity of brain imaging modalities involved and the technical advances evident from early human–robot interaction research, and provide a starting point for neurocognitive perspectives on these interactions. One initial study in this domain [10.Gazzola V. et al.The anthropomorphic brain: The mirror neuron system responds to human and robotic actions.NeuroImage. 2007; 35: 1674-1684Crossref PubMed Scopus (498) Google Scholar] probed the flexibility of the action observation network (AON) and reported that the parts of the parietal, premotor, and middle temporal cortices ascribed to this network respond both to watching humans grasp and manipulate objects and to watching an industrial robot arm perform these same actions. These findings were corroborated by an electroencephalography (EEG) study showing mu-suppression over sensorimotor or AON regions for both robotic and human agents [11.Oberman L.M. et al.EEG evidence for mirror neuron activity during the observation of human and robot actions: Toward an analysis of the human qualities of interactive robots.Neurocomputing. 2007; 70: 2194-2203Crossref Scopus (170) Google Scholar]. Findings on motor resonance for robotic actions were further replicated and extended when researchers [12.Cross E.S. et al.Robotic movement preferentially engages the action observation network.Hum. Brain Mapp. 2012; 33: 2238-2254Crossref PubMed Scopus (113) Google Scholar] reported two functional magnetic resonance imaging (fMRI) experiments that found the AON to be, in fact, more strongly engaged during observation of (unfamiliar) robotlike motion, regardless of whether a human or robotic agent performed the movement. These and other initially surprising findings (reviewed in [13.Press C. Action observation and robotic agents: Learning and anthropomorphism.Neurosci. Biobehav. Rev.
2011; 35: 1410-1418Crossref PubMed Scopus (78) Google Scholar]) have been attributed to greater modulation of the AON following greater prediction errors due to the unfamiliarity of robotic motion. While observing robotic movements engages action-related brain areas, questions remain regarding the extent to which human observers also ascribe emotions and intentions to lifeless machines. Past brain imaging studies reveal that humans do indeed show engagement of the person perception network (PPN) when observing emotional expressions displayed by robots [14.Hortensius R. et al.The perception of emotion in artificial agents.IEEE Trans. Cogn. Dev. Syst. 2018; (Published online April 19, 2018. https://doi.org/10.1109/TCDS.2018.2826921)Crossref PubMed Scopus (67) Google Scholar] and interactions between robots and other humans [15.Wang Y. Quadflieg S. In our own image? Emotional and neural processing differences when observing human–human vs human–robot interactions.Soc. Cogn. Affect. Neurosci. 2015; 10: 1515-1524Crossref PubMed Scopus (38) Google Scholar]. The circumstances under which similar brain responses linked to empathy might emerge when observing humans and robots in simulated pain [16.Suzuki Y. et al.Measuring empathy for human and robot hand pain using electroencephalography.Sci. Rep. 2015; 5: 1-9Crossref Scopus (83) Google Scholar,17.Rosenthal-von der Pütten A.M. et al.Investigations on empathy towards humans and robots using fMRI.Comput. Hum. Behav. 2014; 33: 201-212Crossref Scopus (91) Google Scholar], or when attempting to decipher the intentions of robots [5.Hortensius R. Cross E.S. From automata to animate beings: the scope and limits of attributing socialness to artificial agents: Socialness attribution and artificial agents.Ann. N. Y. Acad. Sci. 2018; 1426: 93-110Crossref Scopus (36) Google Scholar], remain an active area of inquiry. 
An fMRI experiment using the gaze cueing paradigm showed behavioural and brain responses linked to mentalising, such as enhanced activation of bilateral anterior temporoparietal junction, only when people believed that another person controlled the robot [18.Özdem C. et al.Believing androids – fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents.Soc. Neurosci. 2017; 12: 582-593Crossref PubMed Scopus (36) Google Scholar]. Major strides have been made in applying advances in human neuroimaging technology to studying human–robot interactions in contexts that approximate more naturalistic social interactions. These studies further illuminate not only the flexibility and limits of human social cognition when perceiving and interacting with robots, but also some of the challenges and opportunities that roboticists face (and will continue to face) as they develop increasingly social robots. Work in this domain highlights the importance of not only stimulus cues to socialness (i.e., does the agent look and move like a human or a machine?), but also, and arguably even more importantly, how perceivers’ prior beliefs or expectations shape brain responses and behaviour [19.Klapper A. et al.The control of automatic imitation based on bottom–up and top–down cues to animacy: Insights from brain and behavior.J. Cogn. Neurosci. 2014; 26: 2503-2513Crossref PubMed Scopus (49) Google Scholar, 20.Cross Emily S. et al.The shaping of social perception by stimulus and knowledge cues to human animacy.Philos. Trans. R. Soc. B Biol. Sci. 2016; 371: 20150075Crossref PubMed Scopus (42) Google Scholar, 21.Gowen E. et al.Believe it or not: Moving non-biological stimuli believed to have human origin can be represented as human movement.Cognition. 2016; 146: 431-438Crossref PubMed Scopus (21) Google Scholar]. 
Neuroscientists are now also taking advantage of increasingly sophisticated and multivariate analytical approaches to more sensitively probe how the human brain represents robots compared to people (Box 1). Recent work has applied representational similarity analyses to fMRI data collected when participants viewed three agents (a human, an android, and a mechanical-looking robot) performing different actions [22.Urgen B.A. et al.Distinct representations in occipito-temporal, parietal, and premotor cortex during action perception revealed by fMRI and computational modeling.Neuropsychologia. 2019; 127: 35-47Crossref PubMed Scopus (15) Google Scholar]. Results revealed that different nodes of the AON represent distinct aspects of these actions, and these representations appear to be hierarchically arranged. Specifically, occipitotemporal regions coded for low-level action features (such as form and motion integration), while parietal regions coded more abstract and semantic content, such as the action category and intention. These findings corroborate related work that examined effective connectivity between these two nodes when participants viewed actions of varying familiarity [23.Gardner T. et al.Dynamic modulation of the action observation network by movement familiarity.J. Neurosci. 2015; 35: 1561-1572Crossref PubMed Scopus (63) Google Scholar]. Box 1. Delineating the Neural Mechanisms of Human–Robot Interaction. How can we examine the functional and temporal changes in neural representations of social cognition during human–robot interaction? Neuroimaging techniques such as EEG and fMRI provide detailed temporal and spatial information on these changes. Traditionally, researchers have looked at relative differences in measures of neural activity during the perception of human and robotic agents. Most research has used univariate analyses, thereby focussing on distinct networks in the brain, such as the AON, PPN, and theory-of-mind network. 
This approach allows researchers to answer questions such as whether brain activation when observing a ‘happy’ robot is higher or lower compared with observing a happy human. In recent years, however, the development and employment of increasingly detailed analyses, ranging from repetition suppression, to representational similarity analysis, to multivoxel pattern analysis, provide new ways to address questions regarding the overlap of neural architectures for social engagement with humans compared with robots. Repetition suppression enables mapping of potential overlap between similar or dissimilar categories, as repeated stimuli lead to reduced activation in regions responsive to these stimuli. For example, does a ‘happy’ robot followed by a happy human (or vice versa) lead to reduced neural activity in a particular region of interest? The presence of repetition suppression would argue for shared neural resources underlying the processing of perceived robotic and human happiness. The critical next step in capturing the changes in the representation of social cognition during perception of, and interaction with, social robots is the use of multivariate analyses. Representational similarity analyses can establish the similarity in neural activation during the observation of a happy or angry human and a happy- or angry-appearing robot (Figure IA). This approach can test whether the neural activation represents a particular stimulus dimension. For example, does activity reflect a representation at the level of agent (activity for robots is dissimilar to that for humans, regardless of expression) or emotion (activity is dissimilar between happy and angry expressions, but similar across humans and robots)? Lastly, a promising way to probe the extent to which perceiving and interacting with humans and robots truly share representations at the neural level is to use multivoxel pattern analyses (Figure IB). 
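The representational similarity logic just described can be illustrated with a small synthetic example. Everything below is simulated: the 'voxel' patterns, the assumption that responses are driven by emotion rather than by agent, and the condition set are illustrative stand-ins for real fMRI data, not a claim about actual results.

```python
import random

random.seed(1)
N_VOXELS = 40
CONDS = [("human", "happy"), ("human", "angry"),
         ("robot", "happy"), ("robot", "angry")]

# Hypothetical generative assumption: activity patterns are driven by the
# displayed emotion, not by the agent (human vs robot) displaying it.
emotion_code = {e: [random.gauss(0, 1) for _ in range(N_VOXELS)]
                for e in ("happy", "angry")}
patterns = {(agent, emo): [s + random.gauss(0, 0.3) for s in emotion_code[emo]]
            for agent, emo in CONDS}

def correlation(p, q):
    """Pearson correlation between two vectors."""
    n = len(p)
    mp, mq = sum(p) / n, sum(q) / n
    cov = sum((x - mp) * (y - mq) for x, y in zip(p, q))
    sp = sum((x - mp) ** 2 for x in p) ** 0.5
    sq = sum((y - mq) ** 2 for y in q) ** 0.5
    return cov / (sp * sq)

# Upper triangle of the neural RDM (1 - correlation), plus two candidate
# model RDMs: "same agent -> similar" versus "same emotion -> similar".
pairs = [(i, j) for i in range(len(CONDS)) for j in range(i + 1, len(CONDS))]
neural = [1 - correlation(patterns[CONDS[i]], patterns[CONDS[j]])
          for i, j in pairs]
agent_model = [0.0 if CONDS[i][0] == CONDS[j][0] else 1.0 for i, j in pairs]
emotion_model = [0.0 if CONDS[i][1] == CONDS[j][1] else 1.0 for i, j in pairs]

# Which model RDM better matches the neural RDM?
agent_fit = correlation(agent_model, neural)
emotion_fit = correlation(emotion_model, neural)
print(f"agent model fit: {agent_fit:.2f}, emotion model fit: {emotion_fit:.2f}")
```

Under these simulated assumptions the emotion model fits far better than the agent model; with real data, this model comparison is what adjudicates between agent-level and emotion-level representations. Full analyses would typically also use rank correlations and estimate a noise ceiling.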
Instead of measuring magnitude changes, multivoxel pattern analysis assesses patterns of neural activity that are predictive of specific task conditions, that is, the representation of different emotions. One way to test possible shared representations is to train a classifier to distinguish the observation of a robot displaying happiness from a robot displaying anger, and then to test whether this classifier can distinguish a human experiencing happiness from a human experiencing anger. If the human brain represents perceived human and robot emotions similarly, then the decision criteria the classifier learned from robot expressions should also distinguish the two human expressions. Together, these analytical tools provide new vistas on human social cognition during real and long-term interactions with social robots and the representation thereof. Additional work highlights important aspects of how the human brain computes and evaluates anthropomorphism [24.Wiese E. et al.Seeing minds in others: Mind perception modulates low-level social-cognitive performance and relates to ventromedial prefrontal structures.Cogn. Affect. Behav. Neurosci. 2018; 18: 837-856Crossref PubMed Scopus (20) Google Scholar, 25.Pütten A.M.R. der et al.Neural mechanisms for accepting and rejecting artificial social partners in the uncanny valley.J. Neurosci. 2019; 39: 6555-6570Crossref PubMed Scopus (28) Google Scholar, 26.Waytz A. et al.Anthropomorphizing without social cues requires the basolateral amygdala.J. Cogn. Neurosci. 2018; 31: 482-496Crossref PubMed Scopus (5) Google Scholar]. One study has attempted to evaluate the uncanny valley hypothesis using an elegant combination of modelling behavioural ratings and functional connectivity data [25.Pütten A.M.R. der et al.Neural mechanisms for accepting and rejecting artificial social partners in the uncanny valley.J. Neurosci. 2019; 39: 6555-6570Crossref PubMed Scopus (28) Google Scholar]. The authors reported a response profile within the ventromedial prefrontal cortex that closely reflected the hypothesised, nonlinear, uncanny valley shape when viewing images of robots and humans rated more or less unsettling. Further modelling demonstrated that a distinct signal originating in the amygdala predicted when participants would reject artificial agents. This finding ties in with another recent study [26.Waytz A. et al.Anthropomorphizing without social cues requires the basolateral amygdala.J. Cogn. Neurosci.
2018; 31: 482-496Crossref PubMed Scopus (5) Google Scholar] that examined anthropomorphising behaviour among a small group of individuals with rare basolateral amygdala lesions. These individuals were able to anthropomorphise animate and living entities similarly to neurologically intact individuals, but anthropomorphised inanimate stimuli (such as a robot) less than controls. The authors suggest that the limbic system plays a key role in processing signals originating from artificial agents in a social versus non-social manner. However, mere observation of robots in one-off laboratory studies can tell us only so much about human–robot interactions. Two recent fMRI studies highlight further innovations in bringing together neuroscience, robots, and real-world interactions to advance the fields of social cognition and social robotics collectively. The first study paves the way for future social neuroscience research to incorporate unrestricted social interactions with autonomous agents while simultaneously measuring brain responses [27.Birgit Rauchbauer et al.Brain activity during reciprocal social interaction investigated using conversational robots as control condition.Philos. Trans. R. Soc. B Biol. Sci. 2019; 374: 20180033Crossref PubMed Scopus (36) Google Scholar]. The authors describe a framework that allows participants to interact with a conversational agent (a Furhat robot) or a human partner while a multimodal dataset is collected including behaviour (e.g., speech, eye gaze) and physiology (e.g., respiration, neural activity). Initial results show less engagement of specific brain regions playing a role in everyday social cognition, such as the temporoparietal junction and medial prefrontal cortex, during live human–robot interaction compared with human–human interaction [27.Birgit Rauchbauer et al.Brain activity during reciprocal social interaction investigated using conversational robots as control condition.Philos. Trans. R. Soc. B Biol. Sci. 
2019; 374: 20180033Crossref PubMed Scopus (36) Google Scholar]. Another study examined the extent to which a prolonged period of time spent socialising with Cozmo, a palm-sized, playful robot, shapes empathic responses to seeing that same robot ‘in pain’ [28.Cross Emily S. et al.A neurocognitive investigation of the impact of socializing with a robot on empathy for pain.Philos. Trans. R. Soc. B Biol. Sci. 2019; 374: 20180034Crossref PubMed Scopus (20) Google Scholar]. These authors employed pre- and post-socialisation intervention fMRI sessions and measured repetition suppression within the pain matrix to determine whether a week of daily interactions with Cozmo would shift participants’ empathy toward the robot to look more like empathy for another person, based on neural activity as well as behavioural responses. While this study did not find compelling evidence that a week of socialising with this particular robot discernibly shifted empathic responses to look more humanlike [28.Cross Emily S. et al.A neurocognitive investigation of the impact of socializing with a robot on empathy for pain.Philos. Trans. R. Soc. B Biol. Sci. 2019; 374: 20180034Crossref PubMed Scopus (20) Google Scholar], this work nonetheless sets the stage for studying the impact of longer-term interactions with robots on social neurocognitive processes. This area of work is crucial if robots are indeed to take on sustained social roles in close proximity to humans in our daily lives, and should inform robotics developers on ways to maximise social engagement not just for an hour or during an initial encounter, but over the long term. Together, the findings currently emerging from neuroscientific investigations into human–robot interactions highlight how robots are useful tools for probing core features (actions, emotions, intentions) as well as the flexibility of social cognitive processing in the human brain. 
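The cross-decoding test described in Box 1 (train a classifier on robot expressions, then test it on human expressions) can be sketched in miniature. Everything here is synthetic: the shared emotion signal and the nearest-centroid classifier are illustrative stand-ins for real fMRI patterns and MVPA decoders.

```python
import random

random.seed(0)
N_VOXELS = 50

# Assumed generative model: an emotion signal that is identical across
# agents (i.e., a shared representation), plus trial-by-trial noise.
emotion_signal = {e: [random.gauss(0, 1) for _ in range(N_VOXELS)]
                  for e in ("happy", "angry")}

def simulate_trial(emotion):
    """One noisy trial pattern; agent-invariant by construction."""
    return [s + random.gauss(0, 0.5) for s in emotion_signal[emotion]]

# Train on 'robot' trials only; test on held-out 'human' trials only.
train = [(simulate_trial(e), e) for e in ("happy", "angry") for _ in range(20)]
test = [(simulate_trial(e), e) for e in ("happy", "angry") for _ in range(20)]

# Minimal nearest-centroid classifier standing in for an MVPA decoder.
def centroid(trials):
    return [sum(v) / len(v) for v in zip(*trials)]

centroids = {e: centroid([p for p, label in train if label == e])
             for e in ("happy", "angry")}

def classify(pattern):
    sq_dist = lambda c: sum((a - b) ** 2 for a, b in zip(pattern, c))
    return min(centroids, key=lambda e: sq_dist(centroids[e]))

accuracy = sum(classify(p) == label for p, label in test) / len(test)
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

If emotion representations were agent-specific rather than shared, the simulated emotion signal would differ between training and test sets and transfer accuracy would fall to chance (0.5); above-chance cross-decoding is the signature of a shared code.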
While significant progress has been made, efforts to capture and characterise brain responses during live, ongoing interactions with robots remain in the very early stages. As discussed later, this is likely to be one of the most fruitful areas for further exploration and development. However, before moving forward with real social interactions, clarification is required regarding the engagement of social cognitive brain regions. Neural responses measured using fMRI and EEG while people perceive or interact with robots differ markedly across brain networks. Generally, activity within the PPN is not reduced when people observe social robots and other artificial agents compared with people, while activity within the theory-of-mind network is reduced [5.Hortensius R. Cross E.S. From automata to animate beings: the scope and limits of attributing socialness to artificial agents: Socialness attribution and artificial agents.Ann. N. Y. Acad. Sci. 2018; 1426: 93-110Crossref Scopus (36) Google Scholar,14.Hortensius R. et al.The perception of emotion in artificial agents.IEEE Trans. Cogn. Dev. Syst. 2018; (Published online April 19, 2018. https://doi.org/10.1109/TCDS.2018.2826921)Crossref PubMed Scopus (67) Google Scholar]. Going beyond differences in neural activation magnitude, future research in this area will be propelled by" @default.
- W3023711522 created "2020-05-13" @default.
- W3023711522 creator A5017782429 @default.
- W3023711522 creator A5047679573 @default.
- W3023711522 creator A5068193920 @default.
- W3023711522 date "2020-06-01" @default.
- W3023711522 modified "2023-10-12" @default.
- W3023711522 title "Social Cognition in the Age of Human–Robot Interaction" @default.
- W3023711522 cites W1974750895 @default.
- W3023711522 cites W1984499509 @default.
- W3023711522 cites W1988876580 @default.
- W3023711522 cites W2012511508 @default.
- W3023711522 cites W2016976020 @default.
- W3023711522 cites W2053239079 @default.
- W3023711522 cites W2061230281 @default.
- W3023711522 cites W2085876742 @default.
- W3023711522 cites W2094610151 @default.
- W3023711522 cites W2096578021 @default.
- W3023711522 cites W2102453924 @default.
- W3023711522 cites W2105824687 @default.
- W3023711522 cites W2119967897 @default.
- W3023711522 cites W2124799425 @default.
- W3023711522 cites W2134165085 @default.
- W3023711522 cites W2155624710 @default.
- W3023711522 cites W2160475489 @default.
- W3023711522 cites W2168299686 @default.
- W3023711522 cites W2190764285 @default.
- W3023711522 cites W2282380228 @default.
- W3023711522 cites W2323939159 @default.
- W3023711522 cites W2514907158 @default.
- W3023711522 cites W2521535695 @default.
- W3023711522 cites W2538217298 @default.
- W3023711522 cites W2570760970 @default.
- W3023711522 cites W2571093196 @default.
- W3023711522 cites W2606356234 @default.
- W3023711522 cites W2606772235 @default.
- W3023711522 cites W2610586781 @default.
- W3023711522 cites W2623152824 @default.
- W3023711522 cites W2741941708 @default.
- W3023711522 cites W2760992500 @default.
- W3023711522 cites W2763083925 @default.
- W3023711522 cites W2764880109 @default.
- W3023711522 cites W2787225861 @default.
- W3023711522 cites W2787690100 @default.
- W3023711522 cites W2790175389 @default.
- W3023711522 cites W2793707687 @default.
- W3023711522 cites W2797392161 @default.
- W3023711522 cites W2799638175 @default.
- W3023711522 cites W2801888051 @default.
- W3023711522 cites W2814914006 @default.
- W3023711522 cites W2816116354 @default.
- W3023711522 cites W2888056933 @default.
- W3023711522 cites W2903917562 @default.
- W3023711522 cites W2909619163 @default.
- W3023711522 cites W2912347795 @default.
- W3023711522 cites W2919701584 @default.
- W3023711522 cites W2920980830 @default.
- W3023711522 cites W2922070237 @default.
- W3023711522 cites W2922134123 @default.
- W3023711522 cites W2926117844 @default.
- W3023711522 cites W2944379815 @default.
- W3023711522 cites W2945099358 @default.
- W3023711522 cites W2965157612 @default.
- W3023711522 cites W2967614692 @default.
- W3023711522 cites W2982404315 @default.
- W3023711522 cites W3153161395 @default.
- W3023711522 cites W4211251458 @default.
- W3023711522 doi "https://doi.org/10.1016/j.tins.2020.03.013" @default.
- W3023711522 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/32362399" @default.
- W3023711522 hasPublicationYear "2020" @default.
- W3023711522 type Work @default.
- W3023711522 sameAs 3023711522 @default.
- W3023711522 citedByCount "65" @default.
- W3023711522 countsByYear W30237115222020 @default.
- W3023711522 countsByYear W30237115222021 @default.
- W3023711522 countsByYear W30237115222022 @default.
- W3023711522 countsByYear W30237115222023 @default.
- W3023711522 crossrefType "journal-article" @default.
- W3023711522 hasAuthorship W3023711522A5017782429 @default.
- W3023711522 hasAuthorship W3023711522A5047679573 @default.
- W3023711522 hasAuthorship W3023711522A5068193920 @default.
- W3023711522 hasBestOaLocation W30237115221 @default.
- W3023711522 hasConcept C138496976 @default.
- W3023711522 hasConcept C15744967 @default.
- W3023711522 hasConcept C169760540 @default.
- W3023711522 hasConcept C169900460 @default.
- W3023711522 hasConcept C180747234 @default.
- W3023711522 hasConcept C188147891 @default.
- W3023711522 hasConcept C86658582 @default.
- W3023711522 hasConceptScore W3023711522C138496976 @default.
- W3023711522 hasConceptScore W3023711522C15744967 @default.
- W3023711522 hasConceptScore W3023711522C169760540 @default.
- W3023711522 hasConceptScore W3023711522C169900460 @default.
- W3023711522 hasConceptScore W3023711522C180747234 @default.
- W3023711522 hasConceptScore W3023711522C188147891 @default.
- W3023711522 hasConceptScore W3023711522C86658582 @default.
- W3023711522 hasFunder F4320319993 @default.
- W3023711522 hasFunder F4320334678 @default.