Matches in SemOpenAlex for { <https://semopenalex.org/work/W3095024162> ?p ?o ?g. }
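The header above is the quad pattern that produced this listing: every predicate `?p`, object `?o`, and graph `?g` attached to the work `<https://semopenalex.org/work/W3095024162>`. As a hedged illustration, the sketch below runs the equivalent SELECT query over HTTP using the standard SPARQL protocol. The endpoint URL `https://semopenalex.org/sparql` is an assumption not stated in this listing, and the graph variable `?g` is dropped for simplicity.

```python
# Minimal sketch: fetch the triples shown in this listing via SPARQL.
# ASSUMPTION: the public SemOpenAlex endpoint lives at the URL below.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint address

# Same subject as the match header; ?g (named graph) omitted for brevity.
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W3095024162> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Standard SPARQL 1.1 JSON results layout: results -> bindings -> var -> value.
for binding in resp.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```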
- W3095024162 endingPage "4026" @default.
- W3095024162 startingPage "4014" @default.
- W3095024162 abstract "Compared with single-modal content, multimodal data can express users’ feelings and sentiments more vividly and interestingly. Therefore, multimodal sentiment analysis has become a popular research topic. However, most existing methods either learn modal sentiment features independently, without considering their correlations, or they simply integrate multimodal features. In addition, most publicly available multimodal datasets are labeled with sentiment polarities, while the emotions expressed by users are more specific. Based on this observation, in this paper, we build a large-scale image-text emotion dataset (i.e., labeled by different emotions), called TumEmo, with more than 190,000 instances from Tumblr. We further propose a novel multimodal emotion analysis model based on the Multi-view Attentional Network (MVAN), which utilizes a continually updated memory network to obtain deep semantic features of image-text pairs. The model includes three stages: feature mapping, interactive learning, and feature fusion. In the feature mapping stage, we leverage image features from an object viewpoint and a scene viewpoint to capture effective information for multimodal emotion analysis. Then, an interactive learning mechanism based on the memory network extracts single-modal emotion features and interactively models the cross-view dependencies between image and text. In the feature fusion stage, multiple features are deeply fused using a multilayer perceptron and a stacking-pooling module. Experimental results on the MVSA-Single, MVSA-Multiple, and TumEmo datasets show that the proposed MVAN outperforms strong baseline models by large margins." @default. (a hedged sketch of this three-stage pipeline follows the triple list)
- W3095024162 created "2020-11-09" @default.
- W3095024162 creator A5016199518 @default.
- W3095024162 creator A5035378456 @default.
- W3095024162 creator A5043569952 @default.
- W3095024162 creator A5057958949 @default.
- W3095024162 date "2021-01-01" @default.
- W3095024162 modified "2023-10-14" @default.
- W3095024162 title "Image-Text Multimodal Emotion Classification via Multi-View Attentional Network" @default.
- W3095024162 cites W1832693441 @default.
- W3095024162 cites W2075456404 @default.
- W3095024162 cites W2084046180 @default.
- W3095024162 cites W2110700950 @default.
- W3095024162 cites W2166706824 @default.
- W3095024162 cites W2170414372 @default.
- W3095024162 cites W2183341477 @default.
- W3095024162 cites W2250539671 @default.
- W3095024162 cites W2250966211 @default.
- W3095024162 cites W2251394420 @default.
- W3095024162 cites W2265228180 @default.
- W3095024162 cites W2293236424 @default.
- W3095024162 cites W2346975490 @default.
- W3095024162 cites W2517194566 @default.
- W3095024162 cites W2527200148 @default.
- W3095024162 cites W2584561145 @default.
- W3095024162 cites W2732026016 @default.
- W3095024162 cites W2740550900 @default.
- W3095024162 cites W2744979708 @default.
- W3095024162 cites W2753840835 @default.
- W3095024162 cites W2767484504 @default.
- W3095024162 cites W2798802604 @default.
- W3095024162 cites W2805121932 @default.
- W3095024162 cites W2810665353 @default.
- W3095024162 cites W2810884800 @default.
- W3095024162 cites W2888975113 @default.
- W3095024162 cites W2895918973 @default.
- W3095024162 cites W2908347420 @default.
- W3095024162 cites W2910191085 @default.
- W3095024162 cites W2910861656 @default.
- W3095024162 cites W2913428326 @default.
- W3095024162 cites W2923528470 @default.
- W3095024162 cites W2944443295 @default.
- W3095024162 cites W2950883513 @default.
- W3095024162 cites W2962697713 @default.
- W3095024162 cites W2963066927 @default.
- W3095024162 cites W2963702064 @default.
- W3095024162 cites W2964010806 @default.
- W3095024162 cites W2964236337 @default.
- W3095024162 cites W3104739527 @default.
- W3095024162 doi "https://doi.org/10.1109/tmm.2020.3035277" @default.
- W3095024162 hasPublicationYear "2021" @default.
- W3095024162 type Work @default.
- W3095024162 sameAs 3095024162 @default.
- W3095024162 citedByCount "42" @default.
- W3095024162 countsByYear W30950241622021 @default.
- W3095024162 countsByYear W30950241622022 @default.
- W3095024162 countsByYear W30950241622023 @default.
- W3095024162 crossrefType "journal-article" @default.
- W3095024162 hasAuthorship W3095024162A5016199518 @default.
- W3095024162 hasAuthorship W3095024162A5035378456 @default.
- W3095024162 hasAuthorship W3095024162A5043569952 @default.
- W3095024162 hasAuthorship W3095024162A5057958949 @default.
- W3095024162 hasConcept C119857082 @default.
- W3095024162 hasConcept C138885662 @default.
- W3095024162 hasConcept C153083717 @default.
- W3095024162 hasConcept C153180895 @default.
- W3095024162 hasConcept C154945302 @default.
- W3095024162 hasConcept C204321447 @default.
- W3095024162 hasConcept C2776401178 @default.
- W3095024162 hasConcept C41008148 @default.
- W3095024162 hasConcept C41895202 @default.
- W3095024162 hasConcept C66402592 @default.
- W3095024162 hasConceptScore W3095024162C119857082 @default.
- W3095024162 hasConceptScore W3095024162C138885662 @default.
- W3095024162 hasConceptScore W3095024162C153083717 @default.
- W3095024162 hasConceptScore W3095024162C153180895 @default.
- W3095024162 hasConceptScore W3095024162C154945302 @default.
- W3095024162 hasConceptScore W3095024162C204321447 @default.
- W3095024162 hasConceptScore W3095024162C2776401178 @default.
- W3095024162 hasConceptScore W3095024162C41008148 @default.
- W3095024162 hasConceptScore W3095024162C41895202 @default.
- W3095024162 hasConceptScore W3095024162C66402592 @default.
- W3095024162 hasFunder F4320321001 @default.
- W3095024162 hasFunder F4320336026 @default.
- W3095024162 hasLocation W30950241621 @default.
- W3095024162 hasOpenAccess W3095024162 @default.
- W3095024162 hasPrimaryLocation W30950241621 @default.
- W3095024162 hasRelatedWork W2016461833 @default.
- W3095024162 hasRelatedWork W2052253960 @default.
- W3095024162 hasRelatedWork W2382607599 @default.
- W3095024162 hasRelatedWork W2970216048 @default.
- W3095024162 hasRelatedWork W3192794374 @default.
- W3095024162 hasRelatedWork W3197541072 @default.
- W3095024162 hasRelatedWork W4200526184 @default.
- W3095024162 hasRelatedWork W4281608370 @default.
- W3095024162 hasRelatedWork W4285815787 @default.
- W3095024162 hasRelatedWork W4307291644 @default.
- W3095024162 hasVolume "23" @default.
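The abstract above outlines a three-stage pipeline: feature mapping (object-view and scene-view image features plus text features), interactive learning (a continually updated memory network modeling cross-view dependencies), and feature fusion (a multilayer perceptron with a stacking-pooling module). The sketch below is a minimal, hypothetical rendering of that outline in PyTorch, not the paper's actual MVAN: the layer sizes, attention form, hop count, shared per-hop weights, mean-pooled fusion inputs, and the seven-way emotion head are all assumptions made for illustration.

```python
# Illustrative sketch of the three-stage pipeline described in the abstract.
# ASSUMPTIONS: feature dimensions, attention scoring, number of memory hops,
# and the 7-class emotion head are invented here; the paper's MVAN details
# are not present in this listing.
import torch
import torch.nn as nn


class MemoryAttention(nn.Module):
    """One memory-network hop: attend over one view conditioned on a query."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim * 2, 1)

    def forward(self, memory, query):
        # memory: (batch, n, dim); query: (batch, dim)
        q = query.unsqueeze(1).expand(-1, memory.size(1), -1)
        attn = torch.softmax(self.score(torch.cat([memory, q], -1)), dim=1)
        return (attn * memory).sum(1)  # attended summary: (batch, dim)


class MVANSketch(nn.Module):
    def __init__(self, dim=256, hops=3, n_classes=7):
        super().__init__()
        # Stage 1: feature mapping. Object-view / scene-view image features
        # and text features are assumed pre-extracted (e.g. CNN / word
        # embeddings) and projected into a shared space.
        self.obj_proj = nn.Linear(2048, dim)
        self.scene_proj = nn.Linear(2048, dim)
        self.text_proj = nn.Linear(300, dim)
        # Stage 2: interactive learning. Memory hops model cross-view
        # dependencies between the image views and the text.
        self.hops = nn.ModuleList([MemoryAttention(dim) for _ in range(hops)])
        # Stage 3: feature fusion. A simple MLP stands in for the paper's
        # multilayer-perceptron + stacking-pooling module.
        self.fusion = nn.Sequential(
            nn.Linear(dim * 3, dim), nn.ReLU(), nn.Linear(dim, n_classes)
        )

    def forward(self, obj_feats, scene_feats, text_feats):
        obj = self.obj_proj(obj_feats)        # (batch, n_obj, dim)
        scene = self.scene_proj(scene_feats)  # (batch, n_scene, dim)
        text = self.text_proj(text_feats)     # (batch, n_tok, dim)
        # Query starts as the mean text representation and is updated on each
        # hop against both image views (the "continually updated" memory of
        # the abstract, heavily simplified).
        q = text.mean(1)
        for hop in self.hops:
            q = q + hop(obj, q) + hop(scene, q)
        fused = torch.cat([q, obj.mean(1), scene.mean(1)], dim=-1)
        return self.fusion(fused)  # emotion logits: (batch, n_classes)
```

For example, random tensors of shapes (batch, n_obj, 2048), (batch, n_scene, 2048), and (batch, n_tok, 300) pass through this sketch and yield (batch, 7) emotion logits; the seven-class default mirrors an emotion-labeled setup like TumEmo but should be treated as an assumption.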