Matches in SemOpenAlex for { <https://semopenalex.org/work/W4384071489> ?p ?o ?g. }
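The pattern above is a SPARQL basic graph pattern over the SemOpenAlex knowledge graph. As a minimal sketch, the same triples could be fetched programmatically; this assumes the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the SPARQLWrapper package, and it drops the named-graph variable ?g for simplicity:

```python
# Minimal sketch: fetch all (predicate, object) pairs for the work below.
# The endpoint URL and the use of SPARQLWrapper are assumptions, not part
# of the original listing; adjust if your setup differs.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
WORK = "https://semopenalex.org/work/W4384071489"

query = f"""
SELECT ?p ?o WHERE {{
  <{WORK}> ?p ?o .
}}
"""

client = SPARQLWrapper(ENDPOINT)
client.setQuery(query)
client.setReturnFormat(JSON)
results = client.query().convert()

for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```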
- W4384071489 abstract "BACKGROUND: Ovarian cancer is the most lethal gynecological malignancy, with a low five-year survival rate of 46% [1]. Owing to the lack of effective screening, 70% of ovarian cancer patients already have advanced disease at detection, with a five-year survival rate of only 28%; if ovarian cancer is found at an early stage, the survival rate can be improved to 90% [2, 3]. Treatment strategies and prognoses for benign and malignant ovarian tumors differ greatly. Ultrasound (US) is a safe, cost-effective method for identifying ovarian tumors and the preferred choice for evaluating adnexal masses [9], but its accuracy can be affected by the complexity of ovarian tissue and by operator subjectivity, leading to misdiagnosis [10, 11]. According to existing research, effectively diagnosing ovarian cancer remains an open problem [2]. Artificial intelligence (AI) has shown great potential in medicine, assisting in the diagnosis of various cancers, including ovarian cancer [12]. Deep learning (DL), a branch of AI, can automatically learn mid- and high-level abstract features from raw data such as US, mammograms, and MRI, replacing traditional time-consuming manual feature extraction [13]. Although prior work has made significant progress, it has been limited to single-modality US images. In clinical practice, the diagnosis of ovarian cancer should combine imaging, serum tumor markers, and patient age. For example, the FDA recommends using CA125 to assess treatment response and to monitor residual disease or recurrence risk after first-line treatment, and the link between CA125, clinical stage, and survival has been reported by several studies [18-20]. HE4 is a promising ovarian cancer biomarker with higher specificity for diagnosing ovarian cancer than CA125 [21-23]; these markers can therefore be combined to screen for ovarian cancer [3]. To improve the US diagnostic accuracy for ovarian masses, our work aimed to build a clinicoradiological model that improves AI-based discrimination of benign and malignant ovarian tumors. OBJECTIVE: To evaluate the diagnostic performance of a clinicoradiological deep learning (DL) model for the differential diagnosis of malignant and benign ovarian masses. METHODS: This retrospective study included 1054 patients with ultrasound-detected ovarian tumors at Shenzhen People's Hospital from January 2015 to March 2022: 699 benign and 355 malignant. All patients were randomly divided into training (n=675), validation (n=169), and testing (n=210) sets. The model was developed using ResNet-50. Three deep learning-based models were proposed for the benign-malignant classification task: a single-modality model using only US images, a dual-modality model using US images and menopausal status, and a clinicoradiological model integrating US images, menopausal status, and serum indicators (carbohydrate antigen 125 (CA125) and human epididymis protein 4 (HE4)). After 5-fold cross-validation, testing was performed on 210 lesions. The area under the curve (AUC), accuracy, sensitivity, and specificity were used as the primary metrics to evaluate the performance of the three models. RESULTS: In the test set, the diagnostic accuracy and AUC of the single-modality model were 90.95% and 0.957. After adding menopausal status, the accuracy and AUC of the dual-modality model reached 92.38% and 0.968. The diagnostic performance of the clinicoradiological model was significantly improved, with 93.80% accuracy and 0.983 AUC, achieving the best performance. CONCLUSIONS: The clinicoradiological DL model has excellent performance in distinguishing benign from malignant ovarian tumors, outperforming the single- and dual-modality models." @default.
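The abstract names a ResNet-50 backbone fused with menopausal status and serum markers (CA125, HE4), but does not spell out the fusion mechanism. The sketch below is one plausible reading, assuming simple concatenation of image features with clinical features before a classification head; the class name, layer sizes, and fusion design are illustrative, not the authors' implementation:

```python
# Illustrative clinicoradiological fusion model: a ResNet-50 image branch
# whose 2048-d features are concatenated with clinical values (menopausal
# status, CA125, HE4). Fusion-by-concatenation is an assumption; the
# preprint only names the inputs and the ResNet-50 backbone.
import torch
import torch.nn as nn
from torchvision import models

class ClinicoradiologicalNet(nn.Module):
    def __init__(self, n_clinical: int = 3):
        super().__init__()
        backbone = models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # expose 2048-d image features
        self.backbone = backbone
        self.head = nn.Sequential(           # hypothetical fusion head
            nn.Linear(2048 + n_clinical, 256),
            nn.ReLU(),
            nn.Linear(256, 2),               # benign vs. malignant logits
        )

    def forward(self, image: torch.Tensor, clinical: torch.Tensor):
        feats = self.backbone(image)                 # (B, 2048)
        fused = torch.cat([feats, clinical], dim=1)  # (B, 2048 + n_clinical)
        return self.head(fused)

# Shape check with dummy data: two US images plus 3 clinical values each.
model = ClinicoradiologicalNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3))
assert logits.shape == (2, 2)
```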
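The four reported metrics (AUC, accuracy, sensitivity, specificity) follow directly from binary labels and predicted malignancy scores. A small sketch using scikit-learn, with toy arrays standing in for model outputs rather than the study's data:

```python
# Computing the evaluation metrics named in the abstract from binary
# labels and predicted malignancy scores. The toy arrays are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([0, 0, 1, 1, 0, 1])             # 1 = malignant
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.3, 0.6])
y_pred = (y_score >= 0.5).astype(int)             # assumed 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
auc = roc_auc_score(y_true, y_score)
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                      # true-positive rate
specificity = tn / (tn + fp)                      # true-negative rate

print(f"AUC={auc:.3f} acc={accuracy:.3f} "
      f"sens={sensitivity:.3f} spec={specificity:.3f}")
```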
- W4384071489 created "2023-07-13" @default.
- W4384071489 creator A5004138101 @default.
- W4384071489 creator A5005495118 @default.
- W4384071489 creator A5017431053 @default.
- W4384071489 creator A5020583701 @default.
- W4384071489 creator A5022631295 @default.
- W4384071489 creator A5048379858 @default.
- W4384071489 creator A5049692788 @default.
- W4384071489 creator A5054984969 @default.
- W4384071489 creator A5055989083 @default.
- W4384071489 creator A5068284208 @default.
- W4384071489 creator A5087568262 @default.
- W4384071489 creator A5088465882 @default.
- W4384071489 creator A5089556373 @default.
- W4384071489 creator A5090654394 @default.
- W4384071489 date "2023-07-06" @default.
- W4384071489 modified "2023-10-18" @default.
- W4384071489 title "Clincoradiological deep learning model reaches high prediction accuracy in the diagnosis of ovarian cancer (Preprint)" @default.
- W4384071489 cites W1973038660 @default.
- W4384071489 cites W2101886357 @default.
- W4384071489 cites W2167632090 @default.
- W4384071489 cites W2328176404 @default.
- W4384071489 cites W2337524813 @default.
- W4384071489 cites W2528072415 @default.
- W4384071489 cites W2737046398 @default.
- W4384071489 cites W2752174804 @default.
- W4384071489 cites W2888098417 @default.
- W4384071489 cites W2900795546 @default.
- W4384071489 cites W2913847982 @default.
- W4384071489 cites W2919115771 @default.
- W4384071489 cites W2942092087 @default.
- W4384071489 cites W2945357020 @default.
- W4384071489 cites W2962858109 @default.
- W4384071489 cites W3006377099 @default.
- W4384071489 cites W3013056581 @default.
- W4384071489 cites W3081018594 @default.
- W4384071489 cites W3092126195 @default.
- W4384071489 cites W3096898300 @default.
- W4384071489 cites W3121118670 @default.
- W4384071489 cites W3201437718 @default.
- W4384071489 cites W4200533030 @default.
- W4384071489 cites W4205744506 @default.
- W4384071489 cites W4213031487 @default.
- W4384071489 cites W4214928132 @default.
- W4384071489 cites W4223544627 @default.
- W4384071489 cites W4253734173 @default.
- W4384071489 doi "https://doi.org/10.2196/preprints.50499" @default.
- W4384071489 hasPublicationYear "2023" @default.
- W4384071489 type Work @default.
- W4384071489 citedByCount "0" @default.
- W4384071489 crossrefType "posted-content" @default.
- W4384071489 hasAuthorship W4384071489A5004138101 @default.
- W4384071489 hasAuthorship W4384071489A5005495118 @default.
- W4384071489 hasAuthorship W4384071489A5017431053 @default.
- W4384071489 hasAuthorship W4384071489A5020583701 @default.
- W4384071489 hasAuthorship W4384071489A5022631295 @default.
- W4384071489 hasAuthorship W4384071489A5048379858 @default.
- W4384071489 hasAuthorship W4384071489A5049692788 @default.
- W4384071489 hasAuthorship W4384071489A5054984969 @default.
- W4384071489 hasAuthorship W4384071489A5055989083 @default.
- W4384071489 hasAuthorship W4384071489A5068284208 @default.
- W4384071489 hasAuthorship W4384071489A5087568262 @default.
- W4384071489 hasAuthorship W4384071489A5088465882 @default.
- W4384071489 hasAuthorship W4384071489A5089556373 @default.
- W4384071489 hasAuthorship W4384071489A5090654394 @default.
- W4384071489 hasConcept C121608353 @default.
- W4384071489 hasConcept C126322002 @default.
- W4384071489 hasConcept C126838900 @default.
- W4384071489 hasConcept C143998085 @default.
- W4384071489 hasConcept C146357865 @default.
- W4384071489 hasConcept C151730666 @default.
- W4384071489 hasConcept C2779134260 @default.
- W4384071489 hasConcept C2780427987 @default.
- W4384071489 hasConcept C71924100 @default.
- W4384071489 hasConcept C86803240 @default.
- W4384071489 hasConceptScore W4384071489C121608353 @default.
- W4384071489 hasConceptScore W4384071489C126322002 @default.
- W4384071489 hasConceptScore W4384071489C126838900 @default.
- W4384071489 hasConceptScore W4384071489C143998085 @default.
- W4384071489 hasConceptScore W4384071489C146357865 @default.
- W4384071489 hasConceptScore W4384071489C151730666 @default.
- W4384071489 hasConceptScore W4384071489C2779134260 @default.
- W4384071489 hasConceptScore W4384071489C2780427987 @default.
- W4384071489 hasConceptScore W4384071489C71924100 @default.
- W4384071489 hasConceptScore W4384071489C86803240 @default.
- W4384071489 hasLocation W43840714891 @default.
- W4384071489 hasOpenAccess W4384071489 @default.
- W4384071489 hasPrimaryLocation W43840714891 @default.
- W4384071489 hasRelatedWork W2049214470 @default.
- W4384071489 hasRelatedWork W2064014472 @default.
- W4384071489 hasRelatedWork W2352919539 @default.
- W4384071489 hasRelatedWork W2356105190 @default.
- W4384071489 hasRelatedWork W2418206157 @default.
- W4384071489 hasRelatedWork W2902148150 @default.
- W4384071489 hasRelatedWork W4240376378 @default.
- W4384071489 hasRelatedWork W4243329694 @default.
- W4384071489 hasRelatedWork W2120500774 @default.
- W4384071489 hasRelatedWork W4287415335 @default.
- W4384071489 isParatext "false" @default.