Matches in SemOpenAlex for { <https://semopenalex.org/work/W4309892741> ?p ?o ?g. }
- W4309892741 endingPage "16" @default.
- W4309892741 startingPage "1" @default.
- W4309892741 abstract "COVID-19 brought the whole world to a standstill. Current detection methods are time-consuming and costly. Chest X-rays (CXRs) offer a solution to this problem; however, manual examination of CXRs is cumbersome and requires domain expertise. Most existing methods for this application rely on pretrained models such as VGG19, ResNet, DenseNet, Xception, and EfficientNet, which were trained on RGB image datasets. X-rays are fundamentally single-channel images, so using an RGB-trained model is not appropriate: it increases the number of operations by involving three channels instead of one. One way to use a pretrained model on grayscale images is to replicate the single-channel data across three channels, which introduces redundancy; another is to alter the input layer of the pretrained model to accept single-channel data, which compromises the weights in the subsequent layers that were trained on three-channel images and thus weakens the value of pretrained weights in a transfer learning approach. This paper proposes a novel approach for identifying COVID-19 from CXRs that uses Contrast Limited Adaptive Histogram Equalization (CLAHE) together with a Homomorphic Transformation Filter to process the pixel data and extract features from the CXRs. The processed images are then fed to a VGG-inspired deep Convolutional Neural Network (CNN) that takes single-channel (grayscale) images as input and categorizes CXRs into three class labels, namely No-Findings, COVID-19, and Pneumonia. The proposed model is evaluated on two publicly available datasets: one providing COVID-19 and No-Findings images and the other providing Pneumonia CXRs. The combined dataset comprises 6750 images in total, 2250 per class. Results show that the model achieves 96.56% accuracy for multi-class classification and 98.06% accuracy for binary classification using 5-fold stratified cross-validation (CV). These results are competitive with the performance of existing approaches for COVID-19 classification." @default.
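  The abstract describes a preprocessing pipeline of CLAHE followed by a homomorphic filter applied to single-channel CXRs before they are fed to a grayscale-input CNN. The record does not include the authors' code, so the following is only a minimal illustrative sketch of that kind of pipeline using OpenCV and NumPy; the filter cutoff, gain, and image-size parameters are assumptions, not values taken from the paper.

  ```python
  # Sketch (not the authors' implementation): CLAHE + homomorphic filtering
  # on a single-channel chest X-ray, keeping one channel for a grayscale CNN.
  import cv2
  import numpy as np

  def preprocess_cxr(path, d0=30.0, gamma_l=0.5, gamma_h=2.0, c=1.0, size=(224, 224)):
      img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # read as one channel
      img = cv2.resize(img, size)

      # Contrast Limited Adaptive Histogram Equalization (parameters are illustrative)
      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
      img = clahe.apply(img)

      # Homomorphic filtering: log -> FFT -> high-frequency emphasis -> inverse FFT -> exp
      f = np.log1p(img.astype(np.float64) / 255.0)
      F = np.fft.fftshift(np.fft.fft2(f))
      rows, cols = img.shape
      u, v = np.meshgrid(np.arange(cols) - cols // 2, np.arange(rows) - rows // 2)
      d2 = u ** 2 + v ** 2
      H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * d2 / (d0 ** 2))) + gamma_l
      g = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
      g = np.expm1(g)
      g = cv2.normalize(g, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
      return g[..., np.newaxis]   # single-channel array for the grayscale CNN input
  ```

  The output keeps a single channel, matching the abstract's point that the downstream VGG-inspired CNN accepts one-channel input rather than replicated RGB data.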
- W4309892741 created "2022-11-29" @default.
- W4309892741 creator A5044931988 @default.
- W4309892741 creator A5058370392 @default.
- W4309892741 creator A5060339658 @default.
- W4309892741 creator A5066513950 @default.
- W4309892741 date "2023-01-01" @default.
- W4309892741 modified "2023-10-18" @default.
- W4309892741 title "COVID-19 detection on chest X-ray images using Homomorphic Transformation and VGG inspired deep convolutional neural network" @default.
- W4309892741 cites W1984020445 @default.
- W4309892741 cites W2099168648 @default.
- W4309892741 cites W2731899572 @default.
- W4309892741 cites W2744139632 @default.
- W4309892741 cites W2914298543 @default.
- W4309892741 cites W3009800457 @default.
- W4309892741 cites W3010702679 @default.
- W4309892741 cites W3013277995 @default.
- W4309892741 cites W3013601031 @default.
- W4309892741 cites W3013727805 @default.
- W4309892741 cites W3017730472 @default.
- W4309892741 cites W3017855299 @default.
- W4309892741 cites W3021160418 @default.
- W4309892741 cites W3023564401 @default.
- W4309892741 cites W3034066992 @default.
- W4309892741 cites W3038197756 @default.
- W4309892741 cites W3039137888 @default.
- W4309892741 cites W3041463877 @default.
- W4309892741 cites W3045608202 @default.
- W4309892741 cites W3048670851 @default.
- W4309892741 cites W3082245570 @default.
- W4309892741 cites W3086039674 @default.
- W4309892741 cites W3089346069 @default.
- W4309892741 cites W3093354398 @default.
- W4309892741 cites W3105081694 @default.
- W4309892741 cites W3119875393 @default.
- W4309892741 cites W3120327591 @default.
- W4309892741 cites W3122272388 @default.
- W4309892741 cites W3124512534 @default.
- W4309892741 cites W3135057764 @default.
- W4309892741 cites W3137494889 @default.
- W4309892741 cites W3138985726 @default.
- W4309892741 cites W3158225068 @default.
- W4309892741 cites W3161572961 @default.
- W4309892741 cites W3162351260 @default.
- W4309892741 cites W3178226228 @default.
- W4309892741 cites W3191257087 @default.
- W4309892741 cites W3193830270 @default.
- W4309892741 cites W3202799525 @default.
- W4309892741 cites W3205089560 @default.
- W4309892741 cites W3208413984 @default.
- W4309892741 cites W3209893517 @default.
- W4309892741 cites W3210695447 @default.
- W4309892741 cites W3211206889 @default.
- W4309892741 cites W3211983116 @default.
- W4309892741 cites W3214767560 @default.
- W4309892741 cites W4200113603 @default.
- W4309892741 cites W4206608885 @default.
- W4309892741 cites W4206946787 @default.
- W4309892741 cites W4211257194 @default.
- W4309892741 cites W4221130444 @default.
- W4309892741 cites W4225137323 @default.
- W4309892741 cites W4225306981 @default.
- W4309892741 cites W4281661630 @default.
- W4309892741 cites W4283754341 @default.
- W4309892741 cites W4284974471 @default.
- W4309892741 cites W4286634485 @default.
- W4309892741 cites W4289856205 @default.
- W4309892741 cites W4289886863 @default.
- W4309892741 doi "https://doi.org/10.1016/j.bbe.2022.11.003" @default.
- W4309892741 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/36447948" @default.
- W4309892741 hasPublicationYear "2023" @default.
- W4309892741 type Work @default.
- W4309892741 citedByCount "10" @default.
- W4309892741 countsByYear W43098927412023 @default.
- W4309892741 crossrefType "journal-article" @default.
- W4309892741 hasAuthorship W4309892741A5044931988 @default.
- W4309892741 hasAuthorship W4309892741A5058370392 @default.
- W4309892741 hasAuthorship W4309892741A5060339658 @default.
- W4309892741 hasAuthorship W4309892741A5066513950 @default.
- W4309892741 hasBestOaLocation W43098927411 @default.
- W4309892741 hasConcept C108583219 @default.
- W4309892741 hasConcept C115961682 @default.
- W4309892741 hasConcept C127162648 @default.
- W4309892741 hasConcept C136943445 @default.
- W4309892741 hasConcept C153180895 @default.
- W4309892741 hasConcept C154945302 @default.
- W4309892741 hasConcept C160633673 @default.
- W4309892741 hasConcept C191178318 @default.
- W4309892741 hasConcept C30387639 @default.
- W4309892741 hasConcept C31258907 @default.
- W4309892741 hasConcept C31972630 @default.
- W4309892741 hasConcept C41008148 @default.
- W4309892741 hasConcept C53533937 @default.
- W4309892741 hasConcept C78201319 @default.
- W4309892741 hasConcept C81363708 @default.
- W4309892741 hasConcept C82990744 @default.
- W4309892741 hasConceptScore W4309892741C108583219 @default.
- W4309892741 hasConceptScore W4309892741C115961682 @default.