Matches in SemOpenAlex for { <https://semopenalex.org/work/W3203790036> ?p ?o ?g. }
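The header pattern is a quad pattern over the work's outgoing triples. A minimal Python sketch of how such a query could be issued is shown below; the endpoint URL and result handling are assumptions for illustration, not part of this record.

```python
# Sketch: fetch the predicate/object pairs listed below for W3203790036.
# The SPARQL endpoint URL is an assumption; adjust it to your deployment.
import requests

SPARQL_ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

QUERY = """
SELECT ?p ?o ?g WHERE {
  GRAPH ?g { <https://semopenalex.org/work/W3203790036> ?p ?o . }
}
"""

response = requests.post(
    SPARQL_ENDPOINT,
    data={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Print each predicate/object pair, mirroring the "- W3203790036 <p> <o>" rows below.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```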
- W3203790036 endingPage "12" @default.
- W3203790036 startingPage "1" @default.
- W3203790036 abstract "Predicting the emotions evoked in viewers watching movies is an important problem in affective video content analysis, with a wide range of applications. Generally, the audience's emotion is evoked by the combined effect of a movie's audio-visual content. Current research has mainly used coarse middle- and high-level audio and visual features to predict experienced emotions, but refining features with semantic information to improve emotion prediction is still not well studied. Therefore, taking the temporal structure and semantic units of a movie into account, this paper proposes a shot-based audio-visual feature representation and a long short-term memory (LSTM) model with a temporal attention mechanism for experienced emotion prediction. First, the shot-based audio-visual feature representation defines how audio and visual features are extracted and combined for each shot clip, using advanced pretrained models from related audio-visual tasks to extract features at different semantic levels. Then, the prediction model comprises four components: a nonlinear multimodal feature fusion layer, a temporal feature capture layer, a temporal attention layer, and a sentiment prediction layer. The method is evaluated on the extended COGNIMUSE dataset for experienced emotion prediction. It performs significantly better than the state of the art while substantially reducing computation, raising the Pearson correlation coefficient (PCC) from 0.46 to 0.62 for arousal and from 0.18 to 0.34 for valence." @default.
- W3203790036 created "2021-10-11" @default.
- W3203790036 creator A5009301758 @default.
- W3203790036 creator A5032122445 @default.
- W3203790036 creator A5035520440 @default.
- W3203790036 creator A5083536113 @default.
- W3203790036 date "2021-09-28" @default.
- W3203790036 modified "2023-10-15" @default.
- W3203790036 title "A Deep Multimodal Model for Predicting Affective Responses Evoked by Movies Based on Shot Segmentation" @default.
- W3203790036 cites W147964346 @default.
- W3203790036 cites W2044807399 @default.
- W3203790036 cites W2057137426 @default.
- W3203790036 cites W2064675550 @default.
- W3203790036 cites W2114025269 @default.
- W3203790036 cites W2121407761 @default.
- W3203790036 cites W2151617679 @default.
- W3203790036 cites W2157331557 @default.
- W3203790036 cites W2194775991 @default.
- W3203790036 cites W2404368331 @default.
- W3203790036 cites W2526050071 @default.
- W3203790036 cites W2593116425 @default.
- W3203790036 cites W2618799552 @default.
- W3203790036 cites W2732026016 @default.
- W3203790036 cites W2742409927 @default.
- W3203790036 cites W2808415943 @default.
- W3203790036 cites W2964241181 @default.
- W3203790036 cites W2964350391 @default.
- W3203790036 cites W2990786216 @default.
- W3203790036 cites W2997662171 @default.
- W3203790036 cites W3034364644 @default.
- W3203790036 cites W3095481265 @default.
- W3203790036 cites W3101998545 @default.
- W3203790036 cites W3160168209 @default.
- W3203790036 cites W3194152981 @default.
- W3203790036 doi "https://doi.org/10.1155/2021/7650483" @default.
- W3203790036 hasPublicationYear "2021" @default.
- W3203790036 type Work @default.
- W3203790036 sameAs 3203790036 @default.
- W3203790036 citedByCount "1" @default.
- W3203790036 countsByYear W32037900362022 @default.
- W3203790036 crossrefType "journal-article" @default.
- W3203790036 hasAuthorship W3203790036A5009301758 @default.
- W3203790036 hasAuthorship W3203790036A5032122445 @default.
- W3203790036 hasAuthorship W3203790036A5035520440 @default.
- W3203790036 hasAuthorship W3203790036A5083536113 @default.
- W3203790036 hasBestOaLocation W32037900361 @default.
- W3203790036 hasConcept C138885662 @default.
- W3203790036 hasConcept C153180895 @default.
- W3203790036 hasConcept C154945302 @default.
- W3203790036 hasConcept C17744445 @default.
- W3203790036 hasConcept C178790620 @default.
- W3203790036 hasConcept C184337299 @default.
- W3203790036 hasConcept C185592680 @default.
- W3203790036 hasConcept C199360897 @default.
- W3203790036 hasConcept C199539241 @default.
- W3203790036 hasConcept C2776359362 @default.
- W3203790036 hasConcept C2776401178 @default.
- W3203790036 hasConcept C2778344882 @default.
- W3203790036 hasConcept C2779227376 @default.
- W3203790036 hasConcept C28490314 @default.
- W3203790036 hasConcept C3017588708 @default.
- W3203790036 hasConcept C41008148 @default.
- W3203790036 hasConcept C41895202 @default.
- W3203790036 hasConcept C49774154 @default.
- W3203790036 hasConcept C89600930 @default.
- W3203790036 hasConcept C94625758 @default.
- W3203790036 hasConceptScore W3203790036C138885662 @default.
- W3203790036 hasConceptScore W3203790036C153180895 @default.
- W3203790036 hasConceptScore W3203790036C154945302 @default.
- W3203790036 hasConceptScore W3203790036C17744445 @default.
- W3203790036 hasConceptScore W3203790036C178790620 @default.
- W3203790036 hasConceptScore W3203790036C184337299 @default.
- W3203790036 hasConceptScore W3203790036C185592680 @default.
- W3203790036 hasConceptScore W3203790036C199360897 @default.
- W3203790036 hasConceptScore W3203790036C199539241 @default.
- W3203790036 hasConceptScore W3203790036C2776359362 @default.
- W3203790036 hasConceptScore W3203790036C2776401178 @default.
- W3203790036 hasConceptScore W3203790036C2778344882 @default.
- W3203790036 hasConceptScore W3203790036C2779227376 @default.
- W3203790036 hasConceptScore W3203790036C28490314 @default.
- W3203790036 hasConceptScore W3203790036C3017588708 @default.
- W3203790036 hasConceptScore W3203790036C41008148 @default.
- W3203790036 hasConceptScore W3203790036C41895202 @default.
- W3203790036 hasConceptScore W3203790036C49774154 @default.
- W3203790036 hasConceptScore W3203790036C89600930 @default.
- W3203790036 hasConceptScore W3203790036C94625758 @default.
- W3203790036 hasFunder F4320335787 @default.
- W3203790036 hasLocation W32037900361 @default.
- W3203790036 hasOpenAccess W3203790036 @default.
- W3203790036 hasPrimaryLocation W32037900361 @default.
- W3203790036 hasRelatedWork W2352863388 @default.
- W3203790036 hasRelatedWork W2387675639 @default.
- W3203790036 hasRelatedWork W2510758617 @default.
- W3203790036 hasRelatedWork W2532775738 @default.
- W3203790036 hasRelatedWork W2546942002 @default.
- W3203790036 hasRelatedWork W2754350655 @default.
- W3203790036 hasRelatedWork W2897195263 @default.
- W3203790036 hasRelatedWork W3095523211 @default.
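The abstract in this record describes a four-part prediction model (nonlinear multimodal fusion, temporal feature capture with an LSTM, temporal attention, and a sentiment prediction layer) over per-shot audio-visual features. The PyTorch sketch below is a hypothetical illustration of that layout only: all layer sizes, names, and the exact attention form are assumptions, and the original likely predicts affect per time step rather than pooling over shots as done here for brevity.

```python
# Hypothetical sketch of the shot-based model described in the abstract:
# fuse per-shot audio/visual features, model the shot sequence with an LSTM,
# weight time steps with temporal attention, and regress arousal/valence.
# All dimensions and the attention form are illustrative assumptions.
import torch
import torch.nn as nn


class ShotAffectModel(nn.Module):
    def __init__(self, audio_dim=128, visual_dim=512, fused_dim=256, hidden_dim=128):
        super().__init__()
        # Nonlinear multimodal feature fusion layer
        self.fusion = nn.Sequential(
            nn.Linear(audio_dim + visual_dim, fused_dim),
            nn.Tanh(),
        )
        # Temporal feature capture layer (LSTM over the shot sequence)
        self.lstm = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        # Temporal attention layer: one scalar score per shot
        self.attn = nn.Linear(hidden_dim, 1)
        # Sentiment (affect) prediction layer: arousal and valence
        self.head = nn.Linear(hidden_dim, 2)

    def forward(self, audio, visual):
        # audio: (batch, shots, audio_dim); visual: (batch, shots, visual_dim)
        fused = self.fusion(torch.cat([audio, visual], dim=-1))
        states, _ = self.lstm(fused)                       # (batch, shots, hidden_dim)
        weights = torch.softmax(self.attn(states), dim=1)  # (batch, shots, 1)
        context = (weights * states).sum(dim=1)            # attention-pooled summary
        return self.head(context)                          # (batch, 2) -> arousal, valence


# Example with assumed feature sizes: 4 movies, 30 shots each.
model = ShotAffectModel()
audio = torch.randn(4, 30, 128)
visual = torch.randn(4, 30, 512)
print(model(audio, visual).shape)  # torch.Size([4, 2])
```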