Matches in SemOpenAlex for { <https://semopenalex.org/work/W3132130152> ?p ?o ?g. }
- W3132130152 abstract "Abstract The recent release of large-scale healthcare datasets has greatly propelled the research of data-driven deep learning models for healthcare applications. However, due to the nature of such deep black-boxed models, concerns about interpretability, fairness, and biases in healthcare scenarios where human lives are at stake call for a careful and thorough examination of both datasets and models. In this work, we focus on MIMIC-IV (Medical Information Mart for Intensive Care, version IV), the largest publicly available healthcare dataset, and conduct comprehensive analyses of dataset representation bias as well as interpretability and prediction fairness of deep learning models for in-hospital mortality prediction. In terms of interpretability, we observe that (1) the best-performing interpretability method successfully identifies critical features for mortality prediction on various prediction models; (2) demographic features are important for prediction. In terms of fairness, we observe that (1) there exists disparate treatment in prescribing mechanical ventilation among patient groups across ethnicity, gender and age; (2) all of the studied mortality predictors are generally fair while the IMV-LSTM (Interpretable Multi-Variable Long Short-Term Memory) model provides the most accurate and unbiased predictions across all protected groups. We further draw concrete connections between interpretability methods and fairness metrics by showing how feature importance from interpretability methods can be beneficial in quantifying potential disparities in mortality predictors." @default.
- W3132130152 created "2021-03-01" @default.
- W3132130152 creator A5021293751 @default.
- W3132130152 creator A5042142024 @default.
- W3132130152 creator A5042420065 @default.
- W3132130152 creator A5053039371 @default.
- W3132130152 date "2021-04-14" @default.
- W3132130152 modified "2023-10-16" @default.
- W3132130152 title "MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset" @default.
- W3132130152 cites W1836090061 @default.
- W3132130152 cites W1849277567 @default.
- W3132130152 cites W2064675550 @default.
- W3132130152 cites W2073231946 @default.
- W3132130152 cites W2116984840 @default.
- W3132130152 cites W2150480892 @default.
- W3132130152 cites W2155279768 @default.
- W3132130152 cites W2162800060 @default.
- W3132130152 cites W2240067561 @default.
- W3132130152 cites W2522104760 @default.
- W3132130152 cites W2530395818 @default.
- W3132130152 cites W2550925836 @default.
- W3132130152 cites W2563486500 @default.
- W3132130152 cites W2592895748 @default.
- W3132130152 cites W2594475271 @default.
- W3132130152 cites W2594633041 @default.
- W3132130152 cites W2605409611 @default.
- W3132130152 cites W2616901848 @default.
- W3132130152 cites W2626639386 @default.
- W3132130152 cites W2763387459 @default.
- W3132130152 cites W2785011159 @default.
- W3132130152 cites W2785760873 @default.
- W3132130152 cites W2792764867 @default.
- W3132130152 cites W2805390961 @default.
- W3132130152 cites W2806031239 @default.
- W3132130152 cites W2810290439 @default.
- W3132130152 cites W2883201963 @default.
- W3132130152 cites W2885659818 @default.
- W3132130152 cites W2890966191 @default.
- W3132130152 cites W2895471314 @default.
- W3132130152 cites W2895739182 @default.
- W3132130152 cites W2914514892 @default.
- W3132130152 cites W2934842096 @default.
- W3132130152 cites W2952160759 @default.
- W3132130152 cites W2952613481 @default.
- W3132130152 cites W2962059918 @default.
- W3132130152 cites W2962790223 @default.
- W3132130152 cites W2962851944 @default.
- W3132130152 cites W2962862931 @default.
- W3132130152 cites W2963382180 @default.
- W3132130152 cites W2963403868 @default.
- W3132130152 cites W2963424533 @default.
- W3132130152 cites W2963483561 @default.
- W3132130152 cites W2963926704 @default.
- W3132130152 cites W2964089344 @default.
- W3132130152 cites W2964240233 @default.
- W3132130152 cites W2964256806 @default.
- W3132130152 cites W2970030610 @default.
- W3132130152 cites W2970447476 @default.
- W3132130152 cites W2984656972 @default.
- W3132130152 cites W3005535506 @default.
- W3132130152 cites W3011762034 @default.
- W3132130152 cites W3011865636 @default.
- W3132130152 cites W3012970654 @default.
- W3132130152 cites W3021606843 @default.
- W3132130152 cites W3030406438 @default.
- W3132130152 cites W3034522874 @default.
- W3132130152 cites W3034879135 @default.
- W3132130152 cites W3035291455 @default.
- W3132130152 cites W3035447285 @default.
- W3132130152 cites W3097248747 @default.
- W3132130152 cites W3098173972 @default.
- W3132130152 cites W3098538463 @default.
- W3132130152 cites W3099540969 @default.
- W3132130152 cites W3099803834 @default.
- W3132130152 cites W3100058991 @default.
- W3132130152 cites W3101704389 @default.
- W3132130152 cites W3101973032 @default.
- W3132130152 cites W3102419640 @default.
- W3132130152 cites W3104997604 @default.
- W3132130152 cites W3107969206 @default.
- W3132130152 cites W3128452405 @default.
- W3132130152 cites W3135750632 @default.
- W3132130152 cites W3172745702 @default.
- W3132130152 cites W3181414820 @default.
- W3132130152 cites W3211941956 @default.
- W3132130152 doi "https://doi.org/10.21203/rs.3.rs-402058/v1" @default.
- W3132130152 hasPublicationYear "2021" @default.
- W3132130152 type Work @default.
- W3132130152 sameAs 3132130152 @default.
- W3132130152 citedByCount "11" @default.
- W3132130152 countsByYear W31321301522021 @default.
- W3132130152 countsByYear W31321301522022 @default.
- W3132130152 countsByYear W31321301522023 @default.
- W3132130152 crossrefType "posted-content" @default.
- W3132130152 hasAuthorship W3132130152A5021293751 @default.
- W3132130152 hasAuthorship W3132130152A5042142024 @default.
- W3132130152 hasAuthorship W3132130152A5042420065 @default.
- W3132130152 hasAuthorship W3132130152A5053039371 @default.
- W3132130152 hasBestOaLocation W31321301521 @default.
- W3132130152 hasConcept C108583219 @default.
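The triple pattern at the top of this listing can be reproduced programmatically. The sketch below, using only the Python standard library, builds the equivalent SPARQL query for a work ID and (optionally) submits it to SemOpenAlex's public SPARQL endpoint. The endpoint URL and the `format=json` request parameter are assumptions about the service, not confirmed by this listing.

```python
# Hedged sketch: query all predicate/object pairs for a SemOpenAlex work.
# The endpoint URL and JSON response shape are assumptions.
import json
import urllib.parse
import urllib.request

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public SPARQL endpoint


def build_query(work_id: str) -> str:
    """Return a SPARQL query matching { <work> ?p ?o } for the given work ID."""
    return (
        "SELECT ?p ?o WHERE { "
        f"<https://semopenalex.org/work/{work_id}> ?p ?o . "
        "}"
    )


def fetch_triples(work_id: str):
    """Submit the query and return the result bindings (requires network access)."""
    params = urllib.parse.urlencode(
        {"query": build_query(work_id), "format": "json"}
    )
    with urllib.request.urlopen(f"{ENDPOINT}?{params}") as resp:
        return json.load(resp)["results"]["bindings"]


if __name__ == "__main__":
    # Prints rows like: https://semopenalex.org/property/citedByCount 11
    for row in fetch_triples("W3132130152"):
        print(row["p"]["value"], row["o"]["value"])
```

The network call is kept behind the `__main__` guard so the query-building logic can be reused or tested without hitting the endpoint.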