Matches in SemOpenAlex for { <https://semopenalex.org/work/W3207918457> ?p ?o ?g. }
- W3207918457 endingPage "4410" @default.
- W3207918457 startingPage "4393" @default.
- W3207918457 abstract "COVID-19 continues to proliferate rapidly across the world. It has significantly affected public health, the world economy, and people’s lives. Hence, there is a need to speed up diagnosis and precautions for dealing with COVID-19 patients. With the explosion of this pandemic, automated diagnosis tools based on medical images are needed to help specialists. This paper presents a hybrid Convolutional Neural Network (CNN)-based classification and segmentation approach for COVID-19 detection from Computed Tomography (CT) images. The proposed approach is employed to classify and segment COVID-19, pneumonia, and normal CT images. The classification stage is applied first to detect and classify the input medical CT images. Then, the segmentation stage is performed to distinguish between pneumonia and COVID-19 CT images. The classification stage is implemented with a simple and efficient CNN deep learning model. This model comprises four Rectified Linear Units (ReLUs), four batch normalization layers, and four convolutional (Conv) layers. The Conv layers rely on 64, 32, 16, and 8 filters, respectively. A 2 × 2 window and a stride of 2 are employed in the four max-pooling layers. A soft-max activation function and a Fully-Connected (FC) layer are utilized in the classification stage to perform the detection process. For the segmentation process, the Simplified Pulse Coupled Neural Network (SPCNN) is utilized in the proposed hybrid approach. The proposed segmentation approach is based on salient object detection to accurately localize the COVID-19 or pneumonia region. To summarize the contributions of the paper, the classification process with a CNN model can serve as the first stage of a highly-effective automated diagnosis system. Once the images are accepted by the system, further processing through a segmentation process can isolate the regions of interest in the images. 
The region of interest can be assessed both automatically and by experts. This strategy saves much of the time and effort of specialists during the explosion of the COVID-19 pandemic in the world. The proposed classification approach is applied for different scenarios of 80%, 70%, or 60% of the data for training and 20%, 30%, or 40% of the data for testing, respectively. In these scenarios, the proposed approach achieves classification accuracies of 100%, 99.45%, and 98.55%, respectively. Thus, the obtained results demonstrate the efficacy of the proposed approach for assisting specialists in automated medical diagnosis services." @default.
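As a side note on the architecture described in the abstract (four Conv layers with 64, 32, 16, and 8 filters, each followed by a 2 × 2 max-pooling layer with stride 2), the spatial down-sampling through the stack can be sketched in plain Python. This is an illustrative sketch, not code from the paper; the 256 × 256 input size is a hypothetical assumption, and it assumes 'same'-padded convolutions so that only the pooling layers shrink the feature maps.

```python
# Sketch (not from the record): feature-map shapes through four Conv + 2x2
# max-pool (stride 2) stages, as described in the abstract. Assumes a
# hypothetical 256x256 input CT slice and 'same'-padded convolutions, so
# each pooling layer halves the spatial side while the Conv layer sets the
# channel count (64, 32, 16, 8 filters, per the abstract).
def feature_map_shapes(side=256, filters=(64, 32, 16, 8)):
    shapes = []
    for n_filters in filters:
        side //= 2  # 2x2 max-pool with stride 2 halves each spatial dimension
        shapes.append((side, side, n_filters))
    return shapes

print(feature_map_shapes())
# [(128, 128, 64), (64, 64, 32), (32, 32, 16), (16, 16, 8)]
```

The final 16 × 16 × 8 volume would then be flattened into the Fully-Connected layer with soft-max activation that the abstract says performs the three-class detection.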
- W3207918457 created "2021-10-25" @default.
- W3207918457 creator A5009172759 @default.
- W3207918457 creator A5028090735 @default.
- W3207918457 creator A5046897313 @default.
- W3207918457 creator A5049586476 @default.
- W3207918457 creator A5080762590 @default.
- W3207918457 date "2022-01-01" @default.
- W3207918457 modified "2023-09-26" @default.
- W3207918457 title "An Efficient CNN-Based Hybrid Classification and Segmentation Approach for COVID-19 Detection" @default.
- W3207918457 cites W1909740415 @default.
- W3207918457 cites W2294182682 @default.
- W3207918457 cites W2791485417 @default.
- W3207918457 cites W2922015385 @default.
- W3207918457 cites W2958150439 @default.
- W3207918457 cites W3004906315 @default.
- W3207918457 cites W3006110666 @default.
- W3207918457 cites W3007497549 @default.
- W3207918457 cites W3017309755 @default.
- W3207918457 cites W3024630848 @default.
- W3207918457 cites W3025576489 @default.
- W3207918457 cites W3038837241 @default.
- W3207918457 cites W3092266641 @default.
- W3207918457 cites W3094573915 @default.
- W3207918457 cites W3095044166 @default.
- W3207918457 cites W3096854301 @default.
- W3207918457 cites W3102469298 @default.
- W3207918457 cites W3104810384 @default.
- W3207918457 cites W3118159012 @default.
- W3207918457 cites W3120191671 @default.
- W3207918457 cites W3124512534 @default.
- W3207918457 cites W3132749456 @default.
- W3207918457 cites W3134286464 @default.
- W3207918457 doi "https://doi.org/10.32604/cmc.2022.020265" @default.
- W3207918457 hasPublicationYear "2022" @default.
- W3207918457 type Work @default.
- W3207918457 sameAs 3207918457 @default.
- W3207918457 citedByCount "4" @default.
- W3207918457 countsByYear W32079184572022 @default.
- W3207918457 countsByYear W32079184572023 @default.
- W3207918457 crossrefType "journal-article" @default.
- W3207918457 hasAuthorship W3207918457A5009172759 @default.
- W3207918457 hasAuthorship W3207918457A5028090735 @default.
- W3207918457 hasAuthorship W3207918457A5046897313 @default.
- W3207918457 hasAuthorship W3207918457A5049586476 @default.
- W3207918457 hasAuthorship W3207918457A5080762590 @default.
- W3207918457 hasBestOaLocation W32079184571 @default.
- W3207918457 hasConcept C108583219 @default.
- W3207918457 hasConcept C124504099 @default.
- W3207918457 hasConcept C136886441 @default.
- W3207918457 hasConcept C142724271 @default.
- W3207918457 hasConcept C144024400 @default.
- W3207918457 hasConcept C153180895 @default.
- W3207918457 hasConcept C154945302 @default.
- W3207918457 hasConcept C19165224 @default.
- W3207918457 hasConcept C2779134260 @default.
- W3207918457 hasConcept C3008058167 @default.
- W3207918457 hasConcept C31972630 @default.
- W3207918457 hasConcept C41008148 @default.
- W3207918457 hasConcept C524204448 @default.
- W3207918457 hasConcept C70437156 @default.
- W3207918457 hasConcept C71924100 @default.
- W3207918457 hasConcept C81363708 @default.
- W3207918457 hasConcept C89600930 @default.
- W3207918457 hasConceptScore W3207918457C108583219 @default.
- W3207918457 hasConceptScore W3207918457C124504099 @default.
- W3207918457 hasConceptScore W3207918457C136886441 @default.
- W3207918457 hasConceptScore W3207918457C142724271 @default.
- W3207918457 hasConceptScore W3207918457C144024400 @default.
- W3207918457 hasConceptScore W3207918457C153180895 @default.
- W3207918457 hasConceptScore W3207918457C154945302 @default.
- W3207918457 hasConceptScore W3207918457C19165224 @default.
- W3207918457 hasConceptScore W3207918457C2779134260 @default.
- W3207918457 hasConceptScore W3207918457C3008058167 @default.
- W3207918457 hasConceptScore W3207918457C31972630 @default.
- W3207918457 hasConceptScore W3207918457C41008148 @default.
- W3207918457 hasConceptScore W3207918457C524204448 @default.
- W3207918457 hasConceptScore W3207918457C70437156 @default.
- W3207918457 hasConceptScore W3207918457C71924100 @default.
- W3207918457 hasConceptScore W3207918457C81363708 @default.
- W3207918457 hasConceptScore W3207918457C89600930 @default.
- W3207918457 hasIssue "3" @default.
- W3207918457 hasLocation W32079184571 @default.
- W3207918457 hasOpenAccess W3207918457 @default.
- W3207918457 hasPrimaryLocation W32079184571 @default.
- W3207918457 hasRelatedWork W2517027266 @default.
- W3207918457 hasRelatedWork W2731899572 @default.
- W3207918457 hasRelatedWork W2921836287 @default.
- W3207918457 hasRelatedWork W2960184797 @default.
- W3207918457 hasRelatedWork W3116150086 @default.
- W3207918457 hasRelatedWork W3133861977 @default.
- W3207918457 hasRelatedWork W4200173597 @default.
- W3207918457 hasRelatedWork W4285827401 @default.
- W3207918457 hasRelatedWork W4312417841 @default.
- W3207918457 hasRelatedWork W4321369474 @default.
- W3207918457 hasVolume "70" @default.
- W3207918457 isParatext "false" @default.
- W3207918457 isRetracted "false" @default.