Matches in SemOpenAlex for { <https://semopenalex.org/work/W3136355873> ?p ?o ?g. }
Showing items 1 to 85 of 85 (100 items per page).
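The header above matches the pattern `{ <work> ?p ?o ?g. }`, i.e. every (predicate, object, graph) triple for one work. A minimal sketch of building an equivalent standard-SPARQL query in Python follows; the `GRAPH ?g { … }` rendering of the quad pattern and the helper name `build_work_query` are assumptions for illustration, not part of the listing.

```python
# Hedged sketch: construct the SPARQL query for an arbitrary SemOpenAlex
# work ID, mirroring the { <work> ?p ?o ?g. } pattern in the header.
# The GRAPH-based form is one standard-SPARQL way to bind the graph ?g.

def build_work_query(work_id: str) -> str:
    """Return a SPARQL query listing every (predicate, object, graph)
    triple for the given SemOpenAlex work IRI."""
    iri = f"https://semopenalex.org/work/{work_id}"
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{iri}> ?p ?o . }} "
        "}"
    )

print(build_work_query("W3136355873"))
```

The resulting query string could then be sent to a SPARQL endpoint (SemOpenAlex publishes one; check its documentation for the exact URL) to reproduce a listing like the one below.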
- W3136355873 endingPage "238" @default.
- W3136355873 startingPage "227" @default.
- W3136355873 abstract "Abstract: In this paper, we develop an approach for the continuous prediction of affective video content by employing multi-modal features and multi-task learning in the valence and arousal dimensions. In the proposed framework, three deep features, SoundNet, VGGish and YAMNet, are selected for the audio modality. We also extract global visual features and motion information by adapting two VGG19 models, whose inputs are, respectively, the key frames of the sample videos and their optical-flow images. After fusing the audio features with the visual ones, multi-task learning strategies in the valence and arousal dimensions are applied to improve regression performance. The selected audio and visual features are evaluated on the Emotional Impact of Movies Task 2018 (EIMT18). Compared to the competing EIMT18 teams, our approach obtains a better MSE result in the arousal dimension and much better PCC performance in the valence dimension, along with a comparable PCC metric in the arousal dimension and a slightly lower MSE metric in the valence dimension, indicating that the joint prediction of valence and arousal helps improve regression performance in both dimensions, especially on the PCC metric. Keywords: Affective video content analysis; Multi-modal features; Multi-task learning; LIRIS-ACCEDE; LSTM" @default.
- W3136355873 created "2021-03-29" @default.
- W3136355873 creator A5000490843 @default.
- W3136355873 creator A5005117719 @default.
- W3136355873 creator A5074370589 @default.
- W3136355873 creator A5074654486 @default.
- W3136355873 creator A5087163278 @default.
- W3136355873 date "2021-01-01" @default.
- W3136355873 modified "2023-09-26" @default.
- W3136355873 title "Synchronous Prediction of Continuous Affective Video Content Based on Multi-task Learning" @default.
- W3136355873 cites W2044807399 @default.
- W3136355873 cites W2064675550 @default.
- W3136355873 cites W2422305492 @default.
- W3136355873 cites W2526050071 @default.
- W3136355873 cites W2580719207 @default.
- W3136355873 cites W2593116425 @default.
- W3136355873 cites W2963782415 @default.
- W3136355873 cites W2990604978 @default.
- W3136355873 cites W3096513139 @default.
- W3136355873 doi "https://doi.org/10.1007/978-981-16-1194-0_20" @default.
- W3136355873 hasPublicationYear "2021" @default.
- W3136355873 type Work @default.
- W3136355873 sameAs 3136355873 @default.
- W3136355873 citedByCount "0" @default.
- W3136355873 crossrefType "book-chapter" @default.
- W3136355873 hasAuthorship W3136355873A5000490843 @default.
- W3136355873 hasAuthorship W3136355873A5005117719 @default.
- W3136355873 hasAuthorship W3136355873A5074370589 @default.
- W3136355873 hasAuthorship W3136355873A5074654486 @default.
- W3136355873 hasAuthorship W3136355873A5087163278 @default.
- W3136355873 hasConcept C119857082 @default.
- W3136355873 hasConcept C121332964 @default.
- W3136355873 hasConcept C153180895 @default.
- W3136355873 hasConcept C154945302 @default.
- W3136355873 hasConcept C15744967 @default.
- W3136355873 hasConcept C162324750 @default.
- W3136355873 hasConcept C168900304 @default.
- W3136355873 hasConcept C187736073 @default.
- W3136355873 hasConcept C202444582 @default.
- W3136355873 hasConcept C2780451532 @default.
- W3136355873 hasConcept C28006648 @default.
- W3136355873 hasConcept C28490314 @default.
- W3136355873 hasConcept C33676613 @default.
- W3136355873 hasConcept C33923547 @default.
- W3136355873 hasConcept C36951298 @default.
- W3136355873 hasConcept C41008148 @default.
- W3136355873 hasConcept C62520636 @default.
- W3136355873 hasConcept C77805123 @default.
- W3136355873 hasConceptScore W3136355873C119857082 @default.
- W3136355873 hasConceptScore W3136355873C121332964 @default.
- W3136355873 hasConceptScore W3136355873C153180895 @default.
- W3136355873 hasConceptScore W3136355873C154945302 @default.
- W3136355873 hasConceptScore W3136355873C15744967 @default.
- W3136355873 hasConceptScore W3136355873C162324750 @default.
- W3136355873 hasConceptScore W3136355873C168900304 @default.
- W3136355873 hasConceptScore W3136355873C187736073 @default.
- W3136355873 hasConceptScore W3136355873C202444582 @default.
- W3136355873 hasConceptScore W3136355873C2780451532 @default.
- W3136355873 hasConceptScore W3136355873C28006648 @default.
- W3136355873 hasConceptScore W3136355873C28490314 @default.
- W3136355873 hasConceptScore W3136355873C33676613 @default.
- W3136355873 hasConceptScore W3136355873C33923547 @default.
- W3136355873 hasConceptScore W3136355873C36951298 @default.
- W3136355873 hasConceptScore W3136355873C41008148 @default.
- W3136355873 hasConceptScore W3136355873C62520636 @default.
- W3136355873 hasConceptScore W3136355873C77805123 @default.
- W3136355873 hasLocation W31363558731 @default.
- W3136355873 hasOpenAccess W3136355873 @default.
- W3136355873 hasPrimaryLocation W31363558731 @default.
- W3136355873 hasRelatedWork W10121358 @default.
- W3136355873 hasRelatedWork W10412386 @default.
- W3136355873 hasRelatedWork W14789944 @default.
- W3136355873 hasRelatedWork W1577664 @default.
- W3136355873 hasRelatedWork W2580338 @default.
- W3136355873 hasRelatedWork W2883085 @default.
- W3136355873 hasRelatedWork W7120470 @default.
- W3136355873 hasRelatedWork W7655147 @default.
- W3136355873 hasRelatedWork W8718456 @default.
- W3136355873 hasRelatedWork W9989431 @default.
- W3136355873 isParatext "false" @default.
- W3136355873 isRetracted "false" @default.
- W3136355873 magId "3136355873" @default.
- W3136355873 workType "book-chapter" @default.