Matches in SemOpenAlex for { <https://semopenalex.org/work/W2938286792> ?p ?o ?g. }
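The graph pattern above is the lookup behind this listing. As a minimal sketch, it can be wrapped in a full SPARQL SELECT query and sent to the public SemOpenAlex endpoint; the endpoint URL and the GRAPH form used here are assumptions, so check the service documentation before relying on them.

```sparql
# Sketch of the lookup behind this listing (assumed endpoint: https://semopenalex.org/sparql).
# The ?g term in the original pattern is read here as the named graph of each statement.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W2938286792> ?p ?o .
  }
}
LIMIT 100
```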
Showing items 1 to 83 of 83, with 100 items per page.
- W2938286792 endingPage "498" @default.
- W2938286792 startingPage "495" @default.
- W2938286792 abstract "Artificial intelligence (AI) has enormous potential to improve the safety of healthcare, from increasing diagnostic accuracy,1 to optimising treatment planning,2 to forecasting outcomes of care.3 However, integrating AI technologies into the delivery of healthcare is likely to introduce a range of new risks and amplify existing ones. For instance, failures in widely used software have the potential to quickly affect large numbers of patients4; hidden assumptions in underlying data and models can lead to AI systems delivering dangerous recommendations that are insensitive to local care processes,5 6 and opaque AI techniques such as deep learning can make explaining and learning from failure extremely difficult.7 8 To maximise the benefits of AI in healthcare and to build trust among patients and practitioners, it will therefore be essential to robustly govern the risks that AI poses to patient safety. In a recent review in this journal, Challen and colleagues present an important and timely analysis of some of the key technological risks associated with the application of machine learning in clinical settings.9 Machine learning is a subfield of AI that focuses on the development of algorithms that are automatically derived and optimised through exposure to large quantities of exemplar ‘training’ data.10 The outputs of machine learning algorithms are essentially classifications of patterns that provide some sort of prediction—for instance, predicting whether an image shows a malignant melanoma or a benign mole.11 Some of the basic techniques of machine learning have existed for half a century or more, but progress in the field has accelerated rapidly due to advances in the development of ‘deep’ artificial neural networks12 combined with huge increases in computational power and the availability of enormous quantities of data. These techniques have underpinned recent public demonstrations of AI systems …" @default.
- W2938286792 created "2019-04-25" @default.
- W2938286792 creator A5033494708 @default.
- W2938286792 date "2019-04-12" @default.
- W2938286792 modified "2023-09-30" @default.
- W2938286792 title "Governing the safety of artificial intelligence in healthcare" @default.
- W2938286792 cites W1977474759 @default.
- W2938286792 cites W2049611483 @default.
- W2938286792 cites W2204197063 @default.
- W2938286792 cites W2531099434 @default.
- W2938286792 cites W2581082771 @default.
- W2938286792 cites W2607757716 @default.
- W2938286792 cites W2612890464 @default.
- W2938286792 cites W2738975713 @default.
- W2938286792 cites W2767142522 @default.
- W2938286792 cites W2802629520 @default.
- W2938286792 cites W2802878560 @default.
- W2938286792 cites W2895083984 @default.
- W2938286792 cites W2897178686 @default.
- W2938286792 cites W2897439270 @default.
- W2938286792 cites W2899560923 @default.
- W2938286792 cites W2905810301 @default.
- W2938286792 cites W2908201961 @default.
- W2938286792 cites W2910707576 @default.
- W2938286792 cites W2921861522 @default.
- W2938286792 cites W3122548859 @default.
- W2938286792 cites W4247188100 @default.
- W2938286792 doi "https://doi.org/10.1136/bmjqs-2019-009484" @default.
- W2938286792 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/30979783" @default.
- W2938286792 hasPublicationYear "2019" @default.
- W2938286792 type Work @default.
- W2938286792 sameAs 2938286792 @default.
- W2938286792 citedByCount "63" @default.
- W2938286792 countsByYear W29382867922019 @default.
- W2938286792 countsByYear W29382867922020 @default.
- W2938286792 countsByYear W29382867922021 @default.
- W2938286792 countsByYear W29382867922022 @default.
- W2938286792 countsByYear W29382867922023 @default.
- W2938286792 crossrefType "journal-article" @default.
- W2938286792 hasAuthorship W2938286792A5033494708 @default.
- W2938286792 hasConcept C159110408 @default.
- W2938286792 hasConcept C160735492 @default.
- W2938286792 hasConcept C162324750 @default.
- W2938286792 hasConcept C2522767166 @default.
- W2938286792 hasConcept C2779328685 @default.
- W2938286792 hasConcept C41008148 @default.
- W2938286792 hasConcept C50522688 @default.
- W2938286792 hasConcept C545542383 @default.
- W2938286792 hasConcept C56739046 @default.
- W2938286792 hasConcept C71924100 @default.
- W2938286792 hasConceptScore W2938286792C159110408 @default.
- W2938286792 hasConceptScore W2938286792C160735492 @default.
- W2938286792 hasConceptScore W2938286792C162324750 @default.
- W2938286792 hasConceptScore W2938286792C2522767166 @default.
- W2938286792 hasConceptScore W2938286792C2779328685 @default.
- W2938286792 hasConceptScore W2938286792C41008148 @default.
- W2938286792 hasConceptScore W2938286792C50522688 @default.
- W2938286792 hasConceptScore W2938286792C545542383 @default.
- W2938286792 hasConceptScore W2938286792C56739046 @default.
- W2938286792 hasConceptScore W2938286792C71924100 @default.
- W2938286792 hasFunder F4320307874 @default.
- W2938286792 hasIssue "6" @default.
- W2938286792 hasLocation W29382867921 @default.
- W2938286792 hasLocation W29382867922 @default.
- W2938286792 hasOpenAccess W2938286792 @default.
- W2938286792 hasPrimaryLocation W29382867921 @default.
- W2938286792 hasRelatedWork W1571230209 @default.
- W2938286792 hasRelatedWork W2024782445 @default.
- W2938286792 hasRelatedWork W2109860997 @default.
- W2938286792 hasRelatedWork W2157326779 @default.
- W2938286792 hasRelatedWork W2161870314 @default.
- W2938286792 hasRelatedWork W2178323067 @default.
- W2938286792 hasRelatedWork W2336905968 @default.
- W2938286792 hasRelatedWork W3012178317 @default.
- W2938286792 hasRelatedWork W3203900099 @default.
- W2938286792 hasRelatedWork W3207957489 @default.
- W2938286792 hasVolume "28" @default.
- W2938286792 isParatext "false" @default.
- W2938286792 isRetracted "false" @default.
- W2938286792 magId "2938286792" @default.
- W2938286792 workType "article" @default.
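The listing above covers only statements in which W2938286792 appears as the subject. A complementary lookup, sketched below under the same endpoint and GRAPH assumptions as the query above, retrieves statements in which the work appears as the object (for example, other works that reference it through their own cites statements):

```sparql
# Sketch of the reverse lookup: statements pointing at W2938286792.
# Endpoint URL and GRAPH usage are assumptions, as noted for the query above.
SELECT ?s ?p ?g
WHERE {
  GRAPH ?g {
    ?s ?p <https://semopenalex.org/work/W2938286792> .
  }
}
LIMIT 100
```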