Matches in SemOpenAlex for { <https://semopenalex.org/work/W4367317656> ?p ?o ?g. }
Showing items 1 to 70 of 70, with 100 items per page.
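The same triple listing can be retrieved programmatically. A minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the standard SPARQL JSON results format (both are assumptions, not shown in the listing itself):

```python
# Hypothetical sketch: fetch every predicate/object pair for work
# W4367317656 from the SemOpenAlex SPARQL endpoint (assumed URL).
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W4367317656> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

for binding in resp.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

Each printed pair corresponds to one bullet in the listing below.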
- W4367317656 abstract "Artificial intelligence (AI) aims at simulating, or approximating, human intelligence in machines with the goal of reproducing, replacing or even improving brain tasks of perception, reasoning, and learning. Modern medicine is increasingly digitalised with electronic medical record systems and offers many possibilities to test and use machine learning (ML). Attractive modern applications of these methods include, e.g., AI-driven pathology or imaging interpretation or ML applied to large qualitative interview datasets or electronic medical records fields to identify themes and patterns in text data. Often the goal of utilising ML in a clinical context is to improve the predictive capacity of a model using commonly collected, readily available variables. Boulenger de Hauteclocque et al. [1] tested different ML algorithms to predict upstaging to pathological tumour stage pT3a in patients undergoing surgery for clinical tumour stage cT1/cT2a renal cell carcinoma. The best prediction model achieved an area under the receiver-operating characteristic curve of 0.77. Khene et al. [2] on behalf of the European Association of Urology-Young Academic Urologists (EAU-YAU) Renal Cancer Working Group in their letter to the Editor raise the very relevant issues of: the problem of handling missing data and imputing approaches, adjustable hyperparameters, differentially weighting input values, methods used to evaluate the predictive accuracy of the model, and questioning the clinical relevance of such a model. AI prediction models have made an amazingly rapid introduction and widespread use into clinical management [3] with often insufficient validation, e.g., the Epic Sepsis Model (ESM) widely implemented in United States hospitals and poorly predicting the onset of sepsis [4]. In a recent review of 62 studies that used AI to diagnose COVID-19 from medical scans, Roberts et al. [5] found that none of the models were ready to be deployed clinically for use in diagnosing or predicting the prognosis of COVID-19, because of flaws such as biases in the data, methodology problems, and reproducibility failures. Among the reasons for this poor predictive capacity, they found that publicly available data sets in medicine are scarce, entrenching biases and inequities, and overlap of training and testing data sets leading to often inadvertent duplication of data. From an editorial standpoint, use of AI is becoming an ever increasing challenge and the number of journal submissions on AI has skyrocketed. Kwong et al. [6] point out the current lack of standardised reporting and system explainability when applying ML and refer to the Standardised Reporting of Machine Learning Applications in Urology (STREAM-URO) framework, a concept developed based on a review of the current literature. This initiative to standardise reporting on studies using ML in urology is most welcome; however, depending on the context in which ML is applied, other guidelines and checklists should also be considered (Table 1). For general use and reporting of AI, the European Commission has issued a checklist of relevant principles for AI research, which are Fairness, Universality, Traceability, Usability, Robustness, and Explainability (FUTURE-AI, https://future-ai.eu/). These checklists aim at defining different levels of transparency of the model applied, the training and testing data sets, and how the results are interpreted, factors also relevant for the reviewing process. 
As editors of the BJUI, we receive a fair number of AI/ML manuscripts. Unless authors follow reporting guidelines, our initial enthusiasm for fancy-sounding AI/ML models is quickly dampened by an apparent lack of detailed description of the model-building procedures, presentation of the final model (intercept and regression coefficients), and calibration, i.e., the degree to which the estimated model predictions match those observed on external validation (all of which are Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis [TRIPOD] criteria). Furthermore, if the reporting in a manuscript is very statistically complex, the reader may get lost. A ‘black box’ model outputting implausible accuracy numbers without applicability to clinical practice is a key ‘red flag’. Variable selection, in discussion with the clinical members of the study team, is key. For example, we have seen AI/ML models treating the ordinal variable Gleason Score as a continuous variable, or age as any number between zero and infinity. Further, a critical question is whether AI/ML models tackle an important diagnostic dilemma and compare to, or improve upon, other standard statistical methods, e.g., logistic regression. And do the AI/ML models improve predictions above and beyond the current standard of care and models based on already known predictors of the outcome? It is crucial that ML papers acknowledge any inherent bias in their training set and the subsequent clinical consequences of the model predictions [7]. Data-dependent variable selection in regression models (e.g., stepwise selection) also has several undesirable properties, increasing the risk of overfitting and making many statistics, such as the 95% CI, questionable (see rule 5.2 in the BJUI guidelines for reporting statistics by Assel et al. [8]). Finally, it is worth mentioning that a large systematic review found no advantage of ML over traditional logistic regression in terms of predictive accuracy [9]. To fulfil the potential promise of AI for clinical use, much depends on stringent methodology: reducing the limitations of biomedical datasets, and ensuring transparent study design and model reporting (release of datasets, model coefficients, statistical code, and details of how the models were trained) to allow replication in other datasets and establish the trustworthiness of the AI model for prediction. We sincerely thank Dr Mireia Crispin-Ortuzar, Assistant Professor, Department of Oncology, Cambridge, UK, for providing references to AI reporting guidelines. Sigrid V. Carlsson's work on this manuscript was supported in part by the National Institutes of Health/National Cancer Institute to Memorial Sloan Kettering Cancer Center through the Cancer Center Support Grant (award number P30-CA008748). Sigrid V. Carlsson has received a lecture honorarium and travel reimbursement from Ipsen. All other authors have no relevant conflicts of interest to report." @default.
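One issue raised by Khene et al. in the abstract above, the handling of missing data, lends itself to a concrete illustration: different imputation strategies can yield different predictive performance from the same data. A minimal sketch using scikit-learn on synthetic data (the dataset, the missingness mechanism, and both imputers are illustrative assumptions, not taken from the editorial):

```python
# Illustrative sketch: compare mean imputation with iterative
# (model-based) imputation on synthetic data with values missing
# at random, scoring each pipeline by cross-validated AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer, SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Knock out 20% of entries at random to mimic missing clinical fields.
rng = np.random.default_rng(0)
X_missing = X.copy()
X_missing[rng.random(X.shape) < 0.2] = np.nan

for name, imputer in [
    ("mean imputation", SimpleImputer(strategy="mean")),
    ("iterative imputation", IterativeImputer(random_state=0)),
]:
    model = make_pipeline(imputer, LogisticRegression(max_iter=1000))
    auc = cross_val_score(model, X_missing, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```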
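Two recurring questions in the editorial, whether an ML model actually improves on plain logistic regression and whether it is calibrated in the TRIPOD sense, can likewise be checked in a few lines. A hedged sketch on synthetic data (the model choices and the use of scikit-learn's calibration_curve as a stand-in for a formal calibration analysis are assumptions of this example):

```python
# Illustrative sketch: compare a logistic regression baseline with a
# gradient-boosted model on discrimination (AUC) and calibration,
# i.e., how well predicted probabilities match observed event rates.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=15, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1
)

for name, model in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("gradient boosting", GradientBoostingClassifier(random_state=1)),
]:
    model.fit(X_train, y_train)
    prob = model.predict_proba(X_test)[:, 1]
    auc = roc_auc_score(y_test, prob)
    frac_pos, mean_pred = calibration_curve(y_test, prob, n_bins=10)
    worst_gap = float(abs(frac_pos - mean_pred).max())
    print(f"{name}: AUC = {auc:.3f}, worst calibration bin gap = {worst_gap:.3f}")
```

If the gradient-boosted model does not clearly beat the logistic baseline on both counts, the editorial's point stands: the simpler, transparent model is preferable.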
- W4367317656 created "2023-04-29" @default.
- W4367317656 creator A5023716783 @default.
- W4367317656 creator A5030105260 @default.
- W4367317656 creator A5081744986 @default.
- W4367317656 creator A5087481089 @default.
- W4367317656 date "2023-04-27" @default.
- W4367317656 modified "2023-10-03" @default.
- W4367317656 title "The BJUI Editorial Team's view on artificial intelligence and machine learning" @default.
- W4367317656 cites W2888109941 @default.
- W4367317656 cites W2905423686 @default.
- W4367317656 cites W2906295032 @default.
- W4367317656 cites W2913997948 @default.
- W4367317656 cites W3136933888 @default.
- W4367317656 cites W3174786846 @default.
- W4367317656 cites W3188019074 @default.
- W4367317656 cites W4316814247 @default.
- W4367317656 cites W4362506599 @default.
- W4367317656 doi "https://doi.org/10.1111/bju.16024" @default.
- W4367317656 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/37113110" @default.
- W4367317656 hasPublicationYear "2023" @default.
- W4367317656 type Work @default.
- W4367317656 citedByCount "0" @default.
- W4367317656 crossrefType "journal-article" @default.
- W4367317656 hasAuthorship W4367317656A5023716783 @default.
- W4367317656 hasAuthorship W4367317656A5030105260 @default.
- W4367317656 hasAuthorship W4367317656A5081744986 @default.
- W4367317656 hasAuthorship W4367317656A5087481089 @default.
- W4367317656 hasBestOaLocation W43673176561 @default.
- W4367317656 hasConcept C119857082 @default.
- W4367317656 hasConcept C146357865 @default.
- W4367317656 hasConcept C151730666 @default.
- W4367317656 hasConcept C154945302 @default.
- W4367317656 hasConcept C158154518 @default.
- W4367317656 hasConcept C17744445 @default.
- W4367317656 hasConcept C199539241 @default.
- W4367317656 hasConcept C2779343474 @default.
- W4367317656 hasConcept C41008148 @default.
- W4367317656 hasConcept C58471807 @default.
- W4367317656 hasConcept C71924100 @default.
- W4367317656 hasConcept C86803240 @default.
- W4367317656 hasConceptScore W4367317656C119857082 @default.
- W4367317656 hasConceptScore W4367317656C146357865 @default.
- W4367317656 hasConceptScore W4367317656C151730666 @default.
- W4367317656 hasConceptScore W4367317656C154945302 @default.
- W4367317656 hasConceptScore W4367317656C158154518 @default.
- W4367317656 hasConceptScore W4367317656C17744445 @default.
- W4367317656 hasConceptScore W4367317656C199539241 @default.
- W4367317656 hasConceptScore W4367317656C2779343474 @default.
- W4367317656 hasConceptScore W4367317656C41008148 @default.
- W4367317656 hasConceptScore W4367317656C58471807 @default.
- W4367317656 hasConceptScore W4367317656C71924100 @default.
- W4367317656 hasConceptScore W4367317656C86803240 @default.
- W4367317656 hasLocation W43673176561 @default.
- W4367317656 hasLocation W43673176562 @default.
- W4367317656 hasOpenAccess W4367317656 @default.
- W4367317656 hasPrimaryLocation W43673176561 @default.
- W4367317656 hasRelatedWork W1843462531 @default.
- W4367317656 hasRelatedWork W2356105190 @default.
- W4367317656 hasRelatedWork W2748952813 @default.
- W4367317656 hasRelatedWork W2899084033 @default.
- W4367317656 hasRelatedWork W2961085424 @default.
- W4367317656 hasRelatedWork W2994700791 @default.
- W4367317656 hasRelatedWork W3174196512 @default.
- W4367317656 hasRelatedWork W4214571255 @default.
- W4367317656 hasRelatedWork W4306674287 @default.
- W4367317656 hasRelatedWork W4224009465 @default.
- W4367317656 isParatext "false" @default.
- W4367317656 isRetracted "false" @default.
- W4367317656 workType "article" @default.