Matches in SemOpenAlex for { <https://semopenalex.org/work/W2743804937> ?p ?o ?g. }
Showing items 1 to 83 of 83, with 100 items per page.
- W2743804937 endingPage "455" @default.
- W2743804937 startingPage "445" @default.
- W2743804937 abstract "Most technical communication systems use speech compression codecs to save transmission bandwidth. A lot of development was made to guarantee a high speech intelligibility resulting in different compression techniques: Analysis-by-Synthesis, psychoacoustic modeling and a hybrid mode of both. Our first assumption is that the hybrid mode improves the speech intelligibility. But, enabling a natural spoken conversation also requires affective, namely emotional, information, contained in spoken language, to be intelligibly transmitted. Usually, compression methods are avoided for emotion recognition problems, as it is feared that compression degrades the acoustic characteristics needed for an accurate recognition [1]. By contrast, in our second assumption we state that the combination of psychoacoustic modeling and Analysis-by-Synthesis codecs could actually improve speech-based emotion recognition by removing certain parts of the acoustic signal that are considered “unnecessary”, while still containing the full emotional information. To test both assumptions, we conducted an ITU-recommended POLQA measuring as well as several emotion recognition experiments employing two different datasets to verify the generality of this assumption. We compared our results on the hybrid mode with Analysis-by-Synthesis-only and psychoacoustic modeling-only codecs. The hybrid mode does not show remarkable differences regarding the speech intelligibility, but it outperforms all other compression settings in the multi-class emotion recognition experiments and achieves even a ~3.3% absolute higher performance than the uncompressed samples." @default.
- W2743804937 created "2017-08-17" @default.
- W2743804937 creator A5033995312 @default.
- W2743804937 creator A5035201220 @default.
- W2743804937 creator A5047373591 @default.
- W2743804937 creator A5090085425 @default.
- W2743804937 date "2017-01-01" @default.
- W2743804937 modified "2023-10-16" @default.
- W2743804937 title "Improving Speech-Based Emotion Recognition by Using Psychoacoustic Modeling and Analysis-by-Synthesis" @default.
- W2743804937 cites W1504365409 @default.
- W2743804937 cites W175750906 @default.
- W2743804937 cites W2016839396 @default.
- W2743804937 cites W2026799997 @default.
- W2743804937 cites W2085662862 @default.
- W2743804937 cites W2109172886 @default.
- W2743804937 cites W2117292163 @default.
- W2743804937 cites W2133990480 @default.
- W2743804937 cites W2137639365 @default.
- W2743804937 cites W2156503193 @default.
- W2743804937 cites W2158061940 @default.
- W2743804937 cites W2180721986 @default.
- W2743804937 cites W2292595953 @default.
- W2743804937 cites W2321601937 @default.
- W2743804937 cites W2399288271 @default.
- W2743804937 cites W4252510720 @default.
- W2743804937 cites W4385826981 @default.
- W2743804937 doi "https://doi.org/10.1007/978-3-319-66429-3_44" @default.
- W2743804937 hasPublicationYear "2017" @default.
- W2743804937 type Work @default.
- W2743804937 sameAs 2743804937 @default.
- W2743804937 citedByCount "2" @default.
- W2743804937 countsByYear W27438049372018 @default.
- W2743804937 countsByYear W27438049372020 @default.
- W2743804937 crossrefType "book-chapter" @default.
- W2743804937 hasAuthorship W2743804937A5033995312 @default.
- W2743804937 hasAuthorship W2743804937A5035201220 @default.
- W2743804937 hasAuthorship W2743804937A5047373591 @default.
- W2743804937 hasAuthorship W2743804937A5090085425 @default.
- W2743804937 hasConcept C111472728 @default.
- W2743804937 hasConcept C138885662 @default.
- W2743804937 hasConcept C154945302 @default.
- W2743804937 hasConcept C161765866 @default.
- W2743804937 hasConcept C169760540 @default.
- W2743804937 hasConcept C26760741 @default.
- W2743804937 hasConcept C28490314 @default.
- W2743804937 hasConcept C41008148 @default.
- W2743804937 hasConcept C60048801 @default.
- W2743804937 hasConcept C76155785 @default.
- W2743804937 hasConcept C78548338 @default.
- W2743804937 hasConcept C86803240 @default.
- W2743804937 hasConcept C9940772 @default.
- W2743804937 hasConceptScore W2743804937C111472728 @default.
- W2743804937 hasConceptScore W2743804937C138885662 @default.
- W2743804937 hasConceptScore W2743804937C154945302 @default.
- W2743804937 hasConceptScore W2743804937C161765866 @default.
- W2743804937 hasConceptScore W2743804937C169760540 @default.
- W2743804937 hasConceptScore W2743804937C26760741 @default.
- W2743804937 hasConceptScore W2743804937C28490314 @default.
- W2743804937 hasConceptScore W2743804937C41008148 @default.
- W2743804937 hasConceptScore W2743804937C60048801 @default.
- W2743804937 hasConceptScore W2743804937C76155785 @default.
- W2743804937 hasConceptScore W2743804937C78548338 @default.
- W2743804937 hasConceptScore W2743804937C86803240 @default.
- W2743804937 hasConceptScore W2743804937C9940772 @default.
- W2743804937 hasLocation W27438049371 @default.
- W2743804937 hasOpenAccess W2743804937 @default.
- W2743804937 hasPrimaryLocation W27438049371 @default.
- W2743804937 hasRelatedWork W2168094886 @default.
- W2743804937 hasRelatedWork W2358502346 @default.
- W2743804937 hasRelatedWork W2359734170 @default.
- W2743804937 hasRelatedWork W2360518820 @default.
- W2743804937 hasRelatedWork W2365437481 @default.
- W2743804937 hasRelatedWork W2365974637 @default.
- W2743804937 hasRelatedWork W2367759895 @default.
- W2743804937 hasRelatedWork W2372102971 @default.
- W2743804937 hasRelatedWork W3182570050 @default.
- W2743804937 hasRelatedWork W2518981132 @default.
- W2743804937 isParatext "false" @default.
- W2743804937 isRetracted "false" @default.
- W2743804937 magId "2743804937" @default.
- W2743804937 workType "book-chapter" @default.
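Triple listings like the one above can be retrieved programmatically from SemOpenAlex's public SPARQL service. The sketch below, using only the Python standard library, builds the same query pattern shown in the header for an arbitrary work ID and optionally sends it to the endpoint; the endpoint URL and the JSON result shape are assumptions based on SemOpenAlex's documented SPARQL interface, so verify them before relying on this.

```python
# Minimal sketch: reproduce the query pattern from the listing header
# ({ <https://semopenalex.org/work/...> ?p ?o . }) for any work ID.
# SPARQL_ENDPOINT is an assumed value, not confirmed by this listing.
import json
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint


def build_query(work_id: str) -> str:
    """Return a SPARQL query listing all triples with the work as subject."""
    work_uri = f"https://semopenalex.org/work/{work_id}"
    return f"SELECT ?p ?o WHERE {{ <{work_uri}> ?p ?o . }}"


def fetch_triples(work_id: str) -> list:
    """Fetch predicate/object bindings for a work (requires network access)."""
    params = urllib.parse.urlencode({"query": build_query(work_id)})
    req = urllib.request.Request(
        f"{SPARQL_ENDPOINT}?{params}",
        headers={"Accept": "application/sparql-results+json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]["bindings"]


print(build_query("W2743804937"))
```

With the 83 matches above, `fetch_triples("W2743804937")` would be expected to return one binding per listed predicate/object pair.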