Matches in SemOpenAlex for { <https://semopenalex.org/work/W4296233811> ?p ?o ?g. }
Showing items 1 to 71 of 71, with 100 items per page.
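The listing below can also be retrieved programmatically. The following is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql (an assumption; check the service documentation) and the SPARQLWrapper Python library:

```python
# Minimal sketch: fetch all (predicate, object) pairs for this work.
# Endpoint URL is an assumption; verify against the SemOpenAlex docs.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint
WORK = "https://semopenalex.org/work/W4296233811"

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(f"""
    SELECT ?p ?o
    WHERE {{ <{WORK}> ?p ?o . }}
    LIMIT 100
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```

The quad pattern in the header (`?g`) indicates a named-graph-aware store; wrapping the triple pattern in a `GRAPH ?g { ... }` clause would additionally return the graph each triple belongs to.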
- W4296233811 abstract "Deep neural networks, especially convolutional deep neural networks, are state-of-the-art methods to classify, segment or even generate images, movies, or sounds. However, these methods lack of a good semantic understanding of what happens internally. The question, why a COVID-19 detector has classified a stack of lung-ct images as positive, is sometimes more interesting than the overall specificity and sensitivity. Especially when human domain expert knowledge disagrees with the given output. This way, human domain experts could also be advised to reconsider their choice, regarding the information pointed out by the system. In addition, the deep learning model can be controlled, and a present dataset bias can be found. Currently, most explainable AI methods in the computer vision domain are purely used on image classification, where the images are ordinary images in the visible spectrum. As a result, there is no comparison on how the methods behave with multimodal image data, as well as most methods have not been investigated on how they behave when used for object detection. This work tries to close the gaps. Firstly, investigating three saliency map generator methods on how their maps differ across the different spectra. This is achieved via accurate and systematic training. Secondly, we examine how they behave when used for object detection. As a practical problem, we chose object detection in the infrared and visual spectrum for autonomous driving. The dataset used in this work is the Multispectral Object Detection Dataset, where each scene is available in the FIR, MIR and NIR as well as visual spectrum. The results show that there are differences between the infrared and visual activation maps. Further, an advanced training with both, the infrared and visual data not only improves the network's output, it also leads to more focused spots in the saliency maps." @default.
- W4296233811 created "2022-09-18" @default.
- W4296233811 creator A5000041406 @default.
- W4296233811 creator A5030034070 @default.
- W4296233811 creator A5085416338 @default.
- W4296233811 date "2021-08-26" @default.
- W4296233811 modified "2023-09-24" @default.
- W4296233811 title "A Comparison of Deep Saliency Map Generators on Multispectral Data in Object Detection" @default.
- W4296233811 doi "https://doi.org/10.48550/arxiv.2108.11767" @default.
- W4296233811 hasPublicationYear "2021" @default.
- W4296233811 type Work @default.
- W4296233811 citedByCount "0" @default.
- W4296233811 crossrefType "posted-content" @default.
- W4296233811 hasAuthorship W4296233811A5000041406 @default.
- W4296233811 hasAuthorship W4296233811A5030034070 @default.
- W4296233811 hasAuthorship W4296233811A5085416338 @default.
- W4296233811 hasBestOaLocation W42962338111 @default.
- W4296233811 hasConcept C108583219 @default.
- W4296233811 hasConcept C115961682 @default.
- W4296233811 hasConcept C121332964 @default.
- W4296233811 hasConcept C134306372 @default.
- W4296233811 hasConcept C153180895 @default.
- W4296233811 hasConcept C154945302 @default.
- W4296233811 hasConcept C163258240 @default.
- W4296233811 hasConcept C173163844 @default.
- W4296233811 hasConcept C2776151529 @default.
- W4296233811 hasConcept C2780992000 @default.
- W4296233811 hasConcept C2781238097 @default.
- W4296233811 hasConcept C31972630 @default.
- W4296233811 hasConcept C33923547 @default.
- W4296233811 hasConcept C36503486 @default.
- W4296233811 hasConcept C41008148 @default.
- W4296233811 hasConcept C62520636 @default.
- W4296233811 hasConcept C76155785 @default.
- W4296233811 hasConcept C81363708 @default.
- W4296233811 hasConcept C94915269 @default.
- W4296233811 hasConceptScore W4296233811C108583219 @default.
- W4296233811 hasConceptScore W4296233811C115961682 @default.
- W4296233811 hasConceptScore W4296233811C121332964 @default.
- W4296233811 hasConceptScore W4296233811C134306372 @default.
- W4296233811 hasConceptScore W4296233811C153180895 @default.
- W4296233811 hasConceptScore W4296233811C154945302 @default.
- W4296233811 hasConceptScore W4296233811C163258240 @default.
- W4296233811 hasConceptScore W4296233811C173163844 @default.
- W4296233811 hasConceptScore W4296233811C2776151529 @default.
- W4296233811 hasConceptScore W4296233811C2780992000 @default.
- W4296233811 hasConceptScore W4296233811C2781238097 @default.
- W4296233811 hasConceptScore W4296233811C31972630 @default.
- W4296233811 hasConceptScore W4296233811C33923547 @default.
- W4296233811 hasConceptScore W4296233811C36503486 @default.
- W4296233811 hasConceptScore W4296233811C41008148 @default.
- W4296233811 hasConceptScore W4296233811C62520636 @default.
- W4296233811 hasConceptScore W4296233811C76155785 @default.
- W4296233811 hasConceptScore W4296233811C81363708 @default.
- W4296233811 hasConceptScore W4296233811C94915269 @default.
- W4296233811 hasLocation W42962338111 @default.
- W4296233811 hasOpenAccess W4296233811 @default.
- W4296233811 hasPrimaryLocation W42962338111 @default.
- W4296233811 hasRelatedWork W1971759388 @default.
- W4296233811 hasRelatedWork W2025800131 @default.
- W4296233811 hasRelatedWork W2095705906 @default.
- W4296233811 hasRelatedWork W2738221750 @default.
- W4296233811 hasRelatedWork W2801801420 @default.
- W4296233811 hasRelatedWork W2922421953 @default.
- W4296233811 hasRelatedWork W2970686063 @default.
- W4296233811 hasRelatedWork W2975200075 @default.
- W4296233811 hasRelatedWork W3214521593 @default.
- W4296233811 hasRelatedWork W4311401716 @default.
- W4296233811 isParatext "false" @default.
- W4296233811 isRetracted "false" @default.
- W4296233811 workType "article" @default.
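As context for the abstract quoted above: the "saliency map generator methods" it compares belong to a family whose simplest member is the vanilla gradient (input-gradient) saliency map. The sketch below illustrates only that baseline idea; it is not the paper's exact pipeline, and the model, weights, and input are placeholders:

```python
# Illustrative sketch of a vanilla-gradient saliency map, the simplest
# member of the saliency-map-generator family the abstract compares.
# Model and input are stand-ins, not the paper's setup.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input image; requires_grad so gradients flow back to pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

scores = model(image)
top_class = scores.argmax(dim=1).item()
# Backpropagate the top class score to the input pixels.
scores[0, top_class].backward()

# Saliency = maximum absolute gradient across the channel axis.
saliency = image.grad.abs().max(dim=1)[0].squeeze()  # shape (224, 224)
```

For object detection, as studied in the paper, the scalar that is backpropagated would be a detection's confidence score rather than a classification logit, and the same gradient map could be computed per spectrum (FIR, MIR, NIR, visual) for comparison.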