Matches in SemOpenAlex for { <https://semopenalex.org/work/W4220973827> ?p ?o ?g. }
- W4220973827 endingPage "102275" @default.
- W4220973827 startingPage "102275" @default.
- W4220973827 abstract "This paper confronts two approaches to classifying bladder lesions shown in white-light cystoscopy images when only small datasets are available: the classical one, in which handcrafted features feed pattern recognition systems, and the modern deep learning (DL) approach. In between lie alternative DL models that have not received wide attention from the scientific community, even though they can be better suited to small datasets, such as the human-brain-motivated capsule neural networks (CapsNets). CapsNets, however, have not yet matured and hence tend to present lower performance than the most classic DL models. Classic DL models, in turn, require higher computational resources, demand more computational skills from the physician, and are more prone to overfitting, which can make them prohibitive in routine clinical practice. This paper shows that carefully handcrafted features combined with more robust models can reach performance similar to that of conventional DL-based models and deep CapsNets, making them more useful for clinical applications. Concerning feature extraction, a new feature fusion approach for Ta and T1 bladder tumor detection is proposed, based on decision fusion from multiple classifiers in a scheme known as stacking of classifiers. Three neural networks perform classification on three different feature sets: the Covariance of Color Histogram of Oriented Gradients, proposed in the scope of this paper; Local Binary Patterns; and wavelet coefficients taken from lower scales. Data diversity is ensured by a fourth neural network, used for decision fusion, which combines the outputs of the ensemble elements to produce the final classification. Both feed-forward neural networks and radial basis function networks are used in the experiments. In contrast, DL-based models automatically extract the best features at the cost of requiring huge amounts of training data, a requirement that can be alleviated by the transfer learning (TL) strategy. In this paper, VGG16 and ResNet-34 pretrained on ImageNet were used for TL, slightly outperforming the proposed ensemble. CapsNets may overcome CNNs given their ability to handle rotational invariance and spatial relationships between objects, and can therefore be trained from scratch on small amounts of data, which proved beneficial in the current case, improving accuracy from 94.6% to 96.9%." @default.
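The abstract describes a stacking scheme: three base neural networks, each trained on a different feature set, with a fourth network fusing their outputs. A minimal sketch of that scheme, assuming scikit-learn MLPs as stand-ins for the paper's networks and random placeholder arrays in place of the actual CCHOG, LBP, and wavelet features:

```python
# Hedged sketch of stacking-of-classifiers decision fusion.
# The three feature sets below are random placeholders, NOT the
# paper's actual CCHOG / LBP / wavelet descriptors, and the MLPs
# stand in for the feed-forward / RBF networks used in the paper.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
# Three placeholder feature sets, one per base classifier.
feats = [rng.normal(size=(n, 16)) for _ in range(3)]
y = rng.integers(0, 2, size=n)  # binary lesion / no-lesion labels

tr, te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Level 0: one network per feature set.
base = [
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                  random_state=0).fit(f[tr], y[tr])
    for f in feats
]

# Level 1: a fourth network fuses the base classifiers'
# soft outputs (class probabilities) into one decision.
meta_tr = np.hstack([m.predict_proba(f[tr]) for m, f in zip(base, feats)])
meta_te = np.hstack([m.predict_proba(f[te]) for m, f in zip(base, feats)])
fusion = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                       random_state=0).fit(meta_tr, y[tr])
pred = fusion.predict(meta_te)  # one fused prediction per test sample
```

A more faithful reproduction would train the level-1 network on out-of-fold predictions to avoid leakage; the sketch only illustrates the fusion topology.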
- W4220973827 created "2022-04-03" @default.
- W4220973827 creator A5004292719 @default.
- W4220973827 creator A5009004204 @default.
- W4220973827 creator A5021130768 @default.
- W4220973827 creator A5032780378 @default.
- W4220973827 creator A5044702137 @default.
- W4220973827 creator A5059271730 @default.
- W4220973827 creator A5067247552 @default.
- W4220973827 creator A5071265374 @default.
- W4220973827 creator A5081810508 @default.
- W4220973827 creator A5091552522 @default.
- W4220973827 date "2022-04-01" @default.
- W4220973827 modified "2023-10-16" @default.
- W4220973827 title "Detection of bladder cancer with feature fusion, transfer learning and CapsNets" @default.
- W4220973827 cites W1963581687 @default.
- W4220973827 cites W1969842117 @default.
- W4220973827 cites W1973149952 @default.
- W4220973827 cites W1980201291 @default.
- W4220973827 cites W1988852552 @default.
- W4220973827 cites W2023294425 @default.
- W4220973827 cites W2039051707 @default.
- W4220973827 cites W2094056275 @default.
- W4220973827 cites W2098914003 @default.
- W4220973827 cites W2113156769 @default.
- W4220973827 cites W2125811324 @default.
- W4220973827 cites W2155212834 @default.
- W4220973827 cites W2161360637 @default.
- W4220973827 cites W2162130153 @default.
- W4220973827 cites W2165698076 @default.
- W4220973827 cites W2166982406 @default.
- W4220973827 cites W2346062110 @default.
- W4220973827 cites W2586940338 @default.
- W4220973827 cites W2592027433 @default.
- W4220973827 cites W2759647190 @default.
- W4220973827 cites W2779276742 @default.
- W4220973827 cites W2889646458 @default.
- W4220973827 cites W2919115771 @default.
- W4220973827 cites W2969983122 @default.
- W4220973827 cites W4232890450 @default.
- W4220973827 cites W4234159101 @default.
- W4220973827 doi "https://doi.org/10.1016/j.artmed.2022.102275" @default.
- W4220973827 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/35346444" @default.
- W4220973827 hasPublicationYear "2022" @default.
- W4220973827 type Work @default.
- W4220973827 citedByCount "4" @default.
- W4220973827 countsByYear W42209738272022 @default.
- W4220973827 countsByYear W42209738272023 @default.
- W4220973827 crossrefType "journal-article" @default.
- W4220973827 hasAuthorship W4220973827A5004292719 @default.
- W4220973827 hasAuthorship W4220973827A5009004204 @default.
- W4220973827 hasAuthorship W4220973827A5021130768 @default.
- W4220973827 hasAuthorship W4220973827A5032780378 @default.
- W4220973827 hasAuthorship W4220973827A5044702137 @default.
- W4220973827 hasAuthorship W4220973827A5059271730 @default.
- W4220973827 hasAuthorship W4220973827A5067247552 @default.
- W4220973827 hasAuthorship W4220973827A5071265374 @default.
- W4220973827 hasAuthorship W4220973827A5081810508 @default.
- W4220973827 hasAuthorship W4220973827A5091552522 @default.
- W4220973827 hasConcept C108583219 @default.
- W4220973827 hasConcept C119857082 @default.
- W4220973827 hasConcept C138885662 @default.
- W4220973827 hasConcept C153180895 @default.
- W4220973827 hasConcept C154945302 @default.
- W4220973827 hasConcept C22019652 @default.
- W4220973827 hasConcept C2776401178 @default.
- W4220973827 hasConcept C41008148 @default.
- W4220973827 hasConcept C41895202 @default.
- W4220973827 hasConcept C50644808 @default.
- W4220973827 hasConcept C52622490 @default.
- W4220973827 hasConceptScore W4220973827C108583219 @default.
- W4220973827 hasConceptScore W4220973827C119857082 @default.
- W4220973827 hasConceptScore W4220973827C138885662 @default.
- W4220973827 hasConceptScore W4220973827C153180895 @default.
- W4220973827 hasConceptScore W4220973827C154945302 @default.
- W4220973827 hasConceptScore W4220973827C22019652 @default.
- W4220973827 hasConceptScore W4220973827C2776401178 @default.
- W4220973827 hasConceptScore W4220973827C41008148 @default.
- W4220973827 hasConceptScore W4220973827C41895202 @default.
- W4220973827 hasConceptScore W4220973827C50644808 @default.
- W4220973827 hasConceptScore W4220973827C52622490 @default.
- W4220973827 hasFunder F4320334779 @default.
- W4220973827 hasLocation W42209738271 @default.
- W4220973827 hasLocation W42209738272 @default.
- W4220973827 hasOpenAccess W4220973827 @default.
- W4220973827 hasPrimaryLocation W42209738271 @default.
- W4220973827 hasRelatedWork W1492295194 @default.
- W4220973827 hasRelatedWork W1574414179 @default.
- W4220973827 hasRelatedWork W2490526372 @default.
- W4220973827 hasRelatedWork W3099765033 @default.
- W4220973827 hasRelatedWork W4221142204 @default.
- W4220973827 hasRelatedWork W4281702477 @default.
- W4220973827 hasRelatedWork W4297676672 @default.
- W4220973827 hasRelatedWork W4362597605 @default.
- W4220973827 hasRelatedWork W4376166922 @default.
- W4220973827 hasRelatedWork W4378510483 @default.
- W4220973827 hasVolume "126" @default.
- W4220973827 isParatext "false" @default.