Matches in SemOpenAlex for { <https://semopenalex.org/work/W2402929383> ?p ?o ?g. }
Showing items 1 to 83 of 83, with 100 items per page.
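The listing below can be reproduced programmatically. Here is a minimal sketch, assuming the public SemOpenAlex SPARQL endpoint at https://semopenalex.org/sparql and the standard SPARQL 1.1 JSON results format; the graph variable ?g from the pattern above is dropped for brevity.

```python
# Minimal sketch: fetch all predicate/object pairs for this work.
# Assumes the public SemOpenAlex SPARQL endpoint; not an official client.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2402929383> ?p ?o .
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
response.raise_for_status()

# Each binding maps a variable name to a {"type": ..., "value": ...} dict.
for binding in response.json()["results"]["bindings"]:
    print(binding["p"]["value"], binding["o"]["value"])
```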
- W2402929383 abstract "Emotional Speech Processing and Language Knowledge Conor I. Frye (cifrye@cogsci.ucsd.edu) Department of Cognitive Science, UC San Diego, 9500 Gilman Dr. M/S 0515 La Jolla, CA 92093 USA Sarah C. Creel (creel@cogsci.ucsd.edu) Department of Cognitive Science, UC San Diego, 9500 Gilman Dr. M/S 0515 La Jolla, CA 92093 USA Abstract Language-specific recognition of vocal affect How does language knowledge affect processing of paralinguistic information—vocal properties that are not directly related to understanding words? This study investigates links between a listener’s native language, any other languages they may have experience in, and the ability to identify vocal emotional information in those languages. The study focuses on two particular classes of languages: those with lexical tone, such as Mandarin Chinese, which use pitch properties to distinguish otherwise-identical words; and those without lexical tone, such as English. English listeners and bilingual Mandarin-English listeners listened to sentences and categorized the emotional content of English and Mandarin sentences. Half of the sentences were presented normally; the other half were low-pass filtered to remove all but prosodic cues (pitch and timing). English listeners were better at identifying emotions in English sentences, while bilinguals were equally good at identifying emotions in both languages. This indicates better overall emotion recognition from prosody alone for listeners more familiar with a language. It may point to a connection between tone language experience and augmented paralinguistic processing capabilities. Keywords: speech perception; paralinguistic perception; voice; language background; individual differences; bilingualism Introduction Spoken language as a medium is not just a symbol system of discrete speech sounds; it is also replete with cues to the talker’s identity, region of origin, and emotional state. Although much research has been devoted to understanding how exposure to a language affects speech sound identification (Kuhl, 1994), almost no one has asked how language knowledge affects processing of paralinguistic information—vocal properties that are not directly related to understanding words, like speech rate and pitch changes (see Thompson & Balkwill, 2006, for an exception). Emotion in the voice is thought to be conveyed by these paralinguistic cues. Though differing languages seem to use similar vocal acoustic cues for the “basic emotions” in non-speech vocalizations such as laughter and crying, it is not clear how readily listeners perceive these emotional cues cross-linguistically when only presented with the auditory signal (Sauter et al., 2010). One likely set of cues that listeners use to identify vocal emotion is prosody: pitch and timing information. Happy speech, for instance, typically has more variable pitch and volume, higher overall pitch level, and a faster speaking rate, whereas sad speech sounds exhibit lower average pitch, attenuated loudness and pitch variation, and a slower pace of speech (Morton & Trehub, 2001). Previous work demonstrates that humans use paralinguistic cues during speech to alert co-communicators to their current emotional state (Kehrein, 2002). However, this ability to attribute certain paralinguistic cues to particular emotional states may not be fully present at birth, but may require learning through lengthy exposure to one’s native language. 
One indication of the learned nature of paralinguistic processing is that children experience difficulty in identifying vocal emotional cues (Morton & Trehub, 2001); for instance, 6-year-olds who hear “my mommy gave me a treat” with “sad” emotional prosody will report that the speaker sounded happy, suggesting that they are still learning the mapping between particular speech patterns and emotional states. Further research suggests that these learned aspects may be language-specific (Thompson & Balkwill, 2006), though those authors do not pinpoint particular cues that may be relevant, nor do they offer a hypothesis as to what level of fluency one needs to access the learned aspects of emotional speech. In a related area, speaker recognition shows some language specificity in infants (Johnson et al., 2011) and adults (Bregman & Creel, 2012). If encoding of vocal emotional information works similarly to encoding of voices, then good emotional recognition within a language may be dependent on lengthy language experience, and may not generalize to emotion recognition in other languages. General ability to recognize vocal emotion On the other hand, there is evidence that expertise in processing the cues that communicate vocal emotion may generalize widely across domains. This implies that better attention to or encoding of pitch for another purpose or in another domain may lead to better perception of vocal emotion. For example, certain types of languages have been claimed to boost pitch perception abilities: Speakers of tone languages such as Mandarin are better at making relative" @default.
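The abstract above describes a low-pass filtering manipulation that removes segmental (word-identity) information while preserving prosodic cues (pitch and timing). Below is a rough sketch of that kind of manipulation, assuming a mono WAV stimulus and an illustrative 400 Hz Butterworth cutoff; the filename and filter parameters are not reported in this record.

```python
# Rough sketch of low-pass filtering speech to keep only prosodic cues.
# Cutoff and filter order are illustrative assumptions, not values from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def low_pass_filter(samples: np.ndarray, sample_rate: int, cutoff_hz: float = 400.0) -> np.ndarray:
    """Apply a zero-phase Butterworth low-pass filter to a mono signal."""
    nyquist = sample_rate / 2.0
    b, a = butter(N=4, Wn=cutoff_hz / nyquist, btype="low")
    return filtfilt(b, a, samples)

# "sentence.wav" is a hypothetical stimulus file used only for illustration.
sample_rate, samples = wavfile.read("sentence.wav")
filtered = low_pass_filter(samples.astype(float), sample_rate)
wavfile.write("sentence_prosody_only.wav", sample_rate, filtered.astype(np.int16))
```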
- W2402929383 created "2016-06-24" @default.
- W2402929383 creator A5019356280 @default.
- W2402929383 creator A5063564122 @default.
- W2402929383 date "2013-01-01" @default.
- W2402929383 modified "2023-09-23" @default.
- W2402929383 title "Emotional Speech Processing and Language Knowledge" @default.
- W2402929383 cites W1966648033 @default.
- W2402929383 cites W1981618987 @default.
- W2402929383 cites W1989769398 @default.
- W2402929383 cites W2046255764 @default.
- W2402929383 cites W2048765566 @default.
- W2402929383 cites W2071941900 @default.
- W2402929383 cites W2126528247 @default.
- W2402929383 cites W2130059190 @default.
- W2402929383 cites W2140351615 @default.
- W2402929383 cites W2152901518 @default.
- W2402929383 cites W2157599703 @default.
- W2402929383 cites W2169860302 @default.
- W2402929383 cites W2402400324 @default.
- W2402929383 cites W2606327620 @default.
- W2402929383 hasPublicationYear "2013" @default.
- W2402929383 type Work @default.
- W2402929383 sameAs 2402929383 @default.
- W2402929383 citedByCount "0" @default.
- W2402929383 crossrefType "journal-article" @default.
- W2402929383 hasAuthorship W2402929383A5019356280 @default.
- W2402929383 hasAuthorship W2402929383A5063564122 @default.
- W2402929383 hasConcept C133378560 @default.
- W2402929383 hasConcept C138885662 @default.
- W2402929383 hasConcept C138954614 @default.
- W2402929383 hasConcept C15744967 @default.
- W2402929383 hasConcept C169760540 @default.
- W2402929383 hasConcept C169900460 @default.
- W2402929383 hasConcept C26760741 @default.
- W2402929383 hasConcept C2776035688 @default.
- W2402929383 hasConcept C2780583480 @default.
- W2402929383 hasConcept C41895202 @default.
- W2402929383 hasConcept C46312422 @default.
- W2402929383 hasConcept C49876356 @default.
- W2402929383 hasConcept C542774811 @default.
- W2402929383 hasConcept C99209842 @default.
- W2402929383 hasConceptScore W2402929383C133378560 @default.
- W2402929383 hasConceptScore W2402929383C138885662 @default.
- W2402929383 hasConceptScore W2402929383C138954614 @default.
- W2402929383 hasConceptScore W2402929383C15744967 @default.
- W2402929383 hasConceptScore W2402929383C169760540 @default.
- W2402929383 hasConceptScore W2402929383C169900460 @default.
- W2402929383 hasConceptScore W2402929383C26760741 @default.
- W2402929383 hasConceptScore W2402929383C2776035688 @default.
- W2402929383 hasConceptScore W2402929383C2780583480 @default.
- W2402929383 hasConceptScore W2402929383C41895202 @default.
- W2402929383 hasConceptScore W2402929383C46312422 @default.
- W2402929383 hasConceptScore W2402929383C49876356 @default.
- W2402929383 hasConceptScore W2402929383C542774811 @default.
- W2402929383 hasConceptScore W2402929383C99209842 @default.
- W2402929383 hasIssue "35" @default.
- W2402929383 hasOpenAccess W2402929383 @default.
- W2402929383 hasRelatedWork W1964833466 @default.
- W2402929383 hasRelatedWork W1973763419 @default.
- W2402929383 hasRelatedWork W1974309499 @default.
- W2402929383 hasRelatedWork W1974727132 @default.
- W2402929383 hasRelatedWork W1981025417 @default.
- W2402929383 hasRelatedWork W1989458496 @default.
- W2402929383 hasRelatedWork W1992172629 @default.
- W2402929383 hasRelatedWork W2013164186 @default.
- W2402929383 hasRelatedWork W2045466175 @default.
- W2402929383 hasRelatedWork W2058787332 @default.
- W2402929383 hasRelatedWork W2084371103 @default.
- W2402929383 hasRelatedWork W2120818497 @default.
- W2402929383 hasRelatedWork W2138396459 @default.
- W2402929383 hasRelatedWork W2330486232 @default.
- W2402929383 hasRelatedWork W2515653097 @default.
- W2402929383 hasRelatedWork W2794097103 @default.
- W2402929383 hasRelatedWork W2913561567 @default.
- W2402929383 hasRelatedWork W2947141759 @default.
- W2402929383 hasRelatedWork W29724405 @default.
- W2402929383 hasRelatedWork W3112061954 @default.
- W2402929383 hasVolume "35" @default.
- W2402929383 isParatext "false" @default.
- W2402929383 isRetracted "false" @default.
- W2402929383 magId "2402929383" @default.
- W2402929383 workType "article" @default.
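The predicate/object pairs above form a flat listing in which several properties (cites, hasConcept, hasRelatedWork) are multi-valued. The sketch below shapes the same query results into one grouped record, again assuming the https://semopenalex.org/sparql endpoint and assuming that the local names of the predicate IRIs match the labels shown in this listing.

```python
# Sketch: group the flat predicate/object listing into a single record.
# Assumes the public SemOpenAlex SPARQL endpoint and label-like local names.
from collections import defaultdict
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint
QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W2402929383> ?p ?o .
}
"""

rows = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
).json()["results"]["bindings"]

work = defaultdict(list)
for row in rows:
    # Key each value by the local name of its predicate IRI (e.g. "cites", "title").
    local_name = row["p"]["value"].rsplit("/", 1)[-1].rsplit("#", 1)[-1]
    work[local_name].append(row["o"]["value"])

print(work.get("title"))           # single-valued property
print(len(work.get("cites", [])))  # multi-valued: 14 cited works in this listing
```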