Matches in SemOpenAlex for { <https://semopenalex.org/work/W2954548939> ?p ?o ?g. }
Showing items 1 to 83 of 83, with 100 items per page.
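The triple pattern above can be run against SemOpenAlex's public SPARQL endpoint to retrieve the same listing programmatically. Below is a minimal sketch using Python's SPARQLWrapper; the endpoint URL is an assumption based on the site's conventions, and the graph variable `?g` from the pattern above is dropped for simplicity:

```python
# Sketch: fetch all predicate/object pairs for this work from the
# SemOpenAlex SPARQL endpoint (endpoint URL is an assumption).
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://semopenalex.org/sparql")  # assumed endpoint
sparql.setQuery("""
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2954548939> ?p ?o .
}
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

# Each binding corresponds to one row of the listing below.
for binding in results["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```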
- W2954548939 endingPage "219" @default.
- W2954548939 startingPage "208" @default.
- W2954548939 abstract "Multimodal emotion understanding enables AI systems to interpret human emotions. With the accelerated surge of video content, emotion understanding remains challenging due to the inherent ambiguity of data and the diversity of video content. Although deep learning has made considerable progress in big-data feature learning, deep networks are deterministic models used in a black-box manner and lack the capability to represent the inherent ambiguities in data. Since the possibility theory of fuzzy logic focuses on knowledge representation and reasoning under uncertainty, we incorporate the concepts of fuzzy logic into the deep learning framework. This paper presents a novel convolutional neuro-fuzzy network, an integration of convolutional neural networks in the fuzzy logic domain, to extract high-level emotion features from text, audio, and visual modalities. The feature sets extracted by the fuzzy convolutional layers are compared with those of conventional convolutional neural networks at the same level using t-distributed Stochastic Neighbor Embedding. The paper demonstrates a multimodal emotion understanding framework with an adaptive neural fuzzy inference system that can generate new rules to classify emotions. For emotion understanding of movie clips, we concatenate the audio, visual, and text features extracted by the proposed convolutional neuro-fuzzy network to train the adaptive neural fuzzy inference system. We go one step further to explain how the deep network arrives at a conclusion, a step toward interpretable AI. To identify which visual, text, and audio aspects are important for emotion understanding, we use a direct linear non-Gaussian additive model to explain relevance in terms of causal relationships between features of the deep hidden layers. The extracted critical features are input to the proposed multimodal framework to achieve higher accuracy." @default.
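Two of the techniques named in the abstract are easy to illustrate. First, the t-SNE comparison of feature sets: the sketch below is not the authors' code, just a minimal illustration with random placeholder arrays standing in for the fuzzy-convolutional and plain convolutional features.

```python
# Sketch: comparing two feature sets with t-SNE, as the abstract
# describes. The feature matrices are hypothetical placeholders.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
fuzzy_feats = rng.normal(size=(500, 128))  # placeholder fuzzy-conv features
plain_feats = rng.normal(size=(500, 128))  # placeholder plain conv features

X = np.vstack([fuzzy_feats, plain_feats])
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
# Rows 0-499 embed the fuzzy features, the rest the plain ones;
# plotting the two halves side by side visualizes how they separate.
```

Second, the "direct linear non-Gaussian additive model" refers to DirectLiNGAM. A minimal sketch using the `lingam` package (`pip install lingam`); the hidden-layer feature matrix here is again a random placeholder, and real inputs should be non-Gaussian for the method's assumptions to hold:

```python
# Sketch: estimating causal relationships between hidden-layer
# features with DirectLiNGAM, roughly as the abstract describes.
import numpy as np
import lingam

rng = np.random.default_rng(0)
hidden_feats = rng.normal(size=(500, 8))  # placeholder hidden-layer features

model = lingam.DirectLiNGAM()
model.fit(hidden_feats)
print(model.adjacency_matrix_)  # estimated causal weights between features
```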
- W2954548939 created "2019-07-12" @default.
- W2954548939 creator A5016475473 @default.
- W2954548939 creator A5024496965 @default.
- W2954548939 creator A5079241801 @default.
- W2954548939 date "2019-10-01" @default.
- W2954548939 modified "2023-10-18" @default.
- W2954548939 title "A multimodal convolutional neuro-fuzzy network for emotion understanding of movie clips" @default.
- W2954548939 cites W1588043677 @default.
- W2954548939 cites W1900913856 @default.
- W2954548939 cites W1973270182 @default.
- W2954548939 cites W1987971958 @default.
- W2954548939 cites W2016279377 @default.
- W2954548939 cites W2018543616 @default.
- W2954548939 cites W2039487266 @default.
- W2954548939 cites W2061116763 @default.
- W2954548939 cites W2078763044 @default.
- W2954548939 cites W2168465881 @default.
- W2954548939 cites W2414501075 @default.
- W2954548939 cites W2417420127 @default.
- W2954548939 cites W2512304460 @default.
- W2954548939 cites W2742409927 @default.
- W2954548939 cites W2964184470 @default.
- W2954548939 cites W3098357269 @default.
- W2954548939 cites W3103942587 @default.
- W2954548939 cites W4205947740 @default.
- W2954548939 doi "https://doi.org/10.1016/j.neunet.2019.06.010" @default.
- W2954548939 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/31299625" @default.
- W2954548939 hasPublicationYear "2019" @default.
- W2954548939 type Work @default.
- W2954548939 sameAs 2954548939 @default.
- W2954548939 citedByCount "46" @default.
- W2954548939 countsByYear W29545489392020 @default.
- W2954548939 countsByYear W29545489392021 @default.
- W2954548939 countsByYear W29545489392022 @default.
- W2954548939 countsByYear W29545489392023 @default.
- W2954548939 crossrefType "journal-article" @default.
- W2954548939 hasAuthorship W2954548939A5016475473 @default.
- W2954548939 hasAuthorship W2954548939A5024496965 @default.
- W2954548939 hasAuthorship W2954548939A5079241801 @default.
- W2954548939 hasConcept C108583219 @default.
- W2954548939 hasConcept C119857082 @default.
- W2954548939 hasConcept C138885662 @default.
- W2954548939 hasConcept C154945302 @default.
- W2954548939 hasConcept C2776401178 @default.
- W2954548939 hasConcept C41008148 @default.
- W2954548939 hasConcept C41895202 @default.
- W2954548939 hasConcept C58166 @default.
- W2954548939 hasConcept C59404180 @default.
- W2954548939 hasConcept C81363708 @default.
- W2954548939 hasConceptScore W2954548939C108583219 @default.
- W2954548939 hasConceptScore W2954548939C119857082 @default.
- W2954548939 hasConceptScore W2954548939C138885662 @default.
- W2954548939 hasConceptScore W2954548939C154945302 @default.
- W2954548939 hasConceptScore W2954548939C2776401178 @default.
- W2954548939 hasConceptScore W2954548939C41008148 @default.
- W2954548939 hasConceptScore W2954548939C41895202 @default.
- W2954548939 hasConceptScore W2954548939C58166 @default.
- W2954548939 hasConceptScore W2954548939C59404180 @default.
- W2954548939 hasConceptScore W2954548939C81363708 @default.
- W2954548939 hasFunder F4320322120 @default.
- W2954548939 hasFunder F4320335489 @default.
- W2954548939 hasLocation W29545489391 @default.
- W2954548939 hasLocation W29545489392 @default.
- W2954548939 hasOpenAccess W2954548939 @default.
- W2954548939 hasPrimaryLocation W29545489391 @default.
- W2954548939 hasRelatedWork W2731899572 @default.
- W2954548939 hasRelatedWork W2999805992 @default.
- W2954548939 hasRelatedWork W3116150086 @default.
- W2954548939 hasRelatedWork W3133861977 @default.
- W2954548939 hasRelatedWork W4200173597 @default.
- W2954548939 hasRelatedWork W4223943233 @default.
- W2954548939 hasRelatedWork W4291897433 @default.
- W2954548939 hasRelatedWork W4312417841 @default.
- W2954548939 hasRelatedWork W4321369474 @default.
- W2954548939 hasRelatedWork W4380075502 @default.
- W2954548939 hasVolume "118" @default.
- W2954548939 isParatext "false" @default.
- W2954548939 isRetracted "false" @default.
- W2954548939 magId "2954548939" @default.
- W2954548939 workType "article" @default.