Matches in SemOpenAlex for { <https://semopenalex.org/work/W4387682244> ?p ?o ?g. }
Showing items 1 to 55 of 55 (100 items per page).
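For readers who want to reproduce this listing programmatically, the sketch below runs essentially the same query against the SemOpenAlex SPARQL endpoint. It is a minimal sketch, assuming the public endpoint URL https://semopenalex.org/sparql and the SPARQLWrapper package; the named-graph variable ?g from the pattern above is omitted for simplicity. Verify the endpoint before relying on this.

```python
# Minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at
# https://semopenalex.org/sparql and the SPARQLWrapper package.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://semopenalex.org/sparql")
sparql.setQuery("""
    SELECT ?p ?o WHERE {
      <https://semopenalex.org/work/W4387682244> ?p ?o .
    }
    LIMIT 100
""")
sparql.setReturnFormat(JSON)

# Print each predicate/object pair, mirroring the listing below.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["p"]["value"], "->", row["o"]["value"])
```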
- W4387682244 endingPage "1" @default.
- W4387682244 startingPage "1" @default.
- W4387682244 abstract "Although acoustic speech emotion recognition has been studied for some time, bimodal speech emotion recognition (SER) that combines acoustic and textual cues has gained momentum, since emotion in speech is not conveyed by the acoustic modality alone. However, there is little review work on the available bimodal SER research, and the existing reviews mostly concentrate on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). More recent deep learning techniques, such as attention mechanisms and fusion strategies, have shaped bimodal SER research without explicit analysis of their significance when used alone or in combination with the traditional techniques. In this paper, we therefore review the recently published literature that involves these techniques to ascertain the current trends in bimodal SER research and the challenges that have hampered its full deployment in natural environments for off-the-shelf SER applications. In addition, we carried out experiments to determine the optimal combination of acoustic features and the significance of attention mechanisms, both alone and in combination with traditional deep learning techniques. We propose a multi-technique model, the deep learning-based multi-learning model for emotion recognition (DBMER), which combines the learning capabilities of CNNs, RNNs, and multi-head attention mechanisms. We found that attention mechanisms play a pivotal role in the performance of bimodal dyadic SER systems. However, the scarcity of publicly available datasets, the difficulty of acquiring bimodal SER data, and cross-corpus and multilingual studies remain open problems in bimodal SER research. Our experiments with the proposed DBMER model showed that although each deep learning technique benefits the task on its own, the results are more accurate and robust when the techniques are carefully combined through multi-level fusion approaches." @default.
- W4387682244 created "2023-10-17" @default.
- W4387682244 creator A5062249071 @default.
- W4387682244 creator A5063441690 @default.
- W4387682244 creator A5086165395 @default.
- W4387682244 date "2023-01-01" @default.
- W4387682244 modified "2023-10-17" @default.
- W4387682244 title "Deep Learning Approaches for Bimodal Speech Emotion Recognition: Advancements, Challenges, and a Multi-Learning Model" @default.
- W4387682244 doi "https://doi.org/10.1109/access.2023.3325037" @default.
- W4387682244 hasPublicationYear "2023" @default.
- W4387682244 type Work @default.
- W4387682244 citedByCount "0" @default.
- W4387682244 crossrefType "journal-article" @default.
- W4387682244 hasAuthorship W4387682244A5062249071 @default.
- W4387682244 hasAuthorship W4387682244A5063441690 @default.
- W4387682244 hasAuthorship W4387682244A5086165395 @default.
- W4387682244 hasBestOaLocation W43876822441 @default.
- W4387682244 hasConcept C108583219 @default.
- W4387682244 hasConcept C119857082 @default.
- W4387682244 hasConcept C147168706 @default.
- W4387682244 hasConcept C154945302 @default.
- W4387682244 hasConcept C2777438025 @default.
- W4387682244 hasConcept C28490314 @default.
- W4387682244 hasConcept C2984842247 @default.
- W4387682244 hasConcept C41008148 @default.
- W4387682244 hasConcept C50644808 @default.
- W4387682244 hasConcept C81363708 @default.
- W4387682244 hasConceptScore W4387682244C108583219 @default.
- W4387682244 hasConceptScore W4387682244C119857082 @default.
- W4387682244 hasConceptScore W4387682244C147168706 @default.
- W4387682244 hasConceptScore W4387682244C154945302 @default.
- W4387682244 hasConceptScore W4387682244C2777438025 @default.
- W4387682244 hasConceptScore W4387682244C28490314 @default.
- W4387682244 hasConceptScore W4387682244C2984842247 @default.
- W4387682244 hasConceptScore W4387682244C41008148 @default.
- W4387682244 hasConceptScore W4387682244C50644808 @default.
- W4387682244 hasConceptScore W4387682244C81363708 @default.
- W4387682244 hasLocation W43876822441 @default.
- W4387682244 hasOpenAccess W4387682244 @default.
- W4387682244 hasPrimaryLocation W43876822441 @default.
- W4387682244 hasRelatedWork W2799384463 @default.
- W4387682244 hasRelatedWork W3029198973 @default.
- W4387682244 hasRelatedWork W3133861977 @default.
- W4387682244 hasRelatedWork W3167935049 @default.
- W4387682244 hasRelatedWork W3193565141 @default.
- W4387682244 hasRelatedWork W3193857078 @default.
- W4387682244 hasRelatedWork W3208304128 @default.
- W4387682244 hasRelatedWork W4226493464 @default.
- W4387682244 hasRelatedWork W4312417841 @default.
- W4387682244 hasRelatedWork W4377865163 @default.
- W4387682244 isParatext "false" @default.
- W4387682244 isRetracted "false" @default.
- W4387682244 workType "article" @default.
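The abstract above describes the DBMER architecture only at a high level: a CNN over acoustic features, an RNN over text, multi-head attention, and multi-level fusion. The sketch below is a minimal, hypothetical PyTorch rendering of that general idea; all layer sizes, module names, and the fusion layout are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a bimodal SER model in the spirit of the abstract:
# CNN over acoustic features, RNN over text, cross-modal multi-head
# attention, and fusion of pooled features. Shapes and names are assumed.
import torch
import torch.nn as nn

class BimodalSER(nn.Module):
    def __init__(self, n_mels=64, vocab=10000, emb=128, hid=128, n_emotions=4):
        super().__init__()
        # Acoustic branch: 1D CNN over a (batch, n_mels, frames) spectrogram.
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(n_mels, hid, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hid, hid, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Text branch: embedding + bidirectional GRU over token ids.
        self.embed = nn.Embedding(vocab, emb)
        self.text_rnn = nn.GRU(emb, hid // 2, batch_first=True, bidirectional=True)
        # Cross-modal multi-head attention: text queries attend to audio frames.
        self.cross_attn = nn.MultiheadAttention(hid, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hid * 2, n_emotions)

    def forward(self, spectrogram, token_ids):
        a = self.audio_cnn(spectrogram).transpose(1, 2)  # (B, frames, hid)
        t, _ = self.text_rnn(self.embed(token_ids))      # (B, tokens, hid)
        fused, _ = self.cross_attn(t, a, a)              # text attends to audio
        # Fusion: concatenate pooled attended features with pooled text features.
        feats = torch.cat([fused.mean(dim=1), t.mean(dim=1)], dim=-1)
        return self.classifier(feats)

# Smoke test with random inputs: 2 utterances, 64 mel bands x 300 frames,
# 20 tokens each.
model = BimodalSER()
logits = model(torch.randn(2, 64, 300), torch.randint(0, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 4])
```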