Matches in SemOpenAlex for { <https://semopenalex.org/work/W3136302031> ?p ?o ?g. }
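The quad pattern above can be issued as a full SPARQL query; a sketch is shown below, assuming the public SemOpenAlex SPARQL endpoint. The `GRAPH ?g` form mirrors the `?g` variable in the pattern:

```sparql
# Illustrative query sketch (not the exact query the page ran):
# list every predicate/object pair, and the named graph it lives in,
# for the work <https://semopenalex.org/work/W3136302031>.
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W3136302031> ?p ?o .
  }
}
LIMIT 100
```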
Showing items 1 to 85 of 85, with 100 items per page.
- W3136302031 endingPage "104" @default.
- W3136302031 startingPage "93" @default.
- W3136302031 abstract "This work presents a method for lexical tone classification in audio-visual speech. The method is applied to a speech data set consisting of syllables and words produced by a female native speaker of Cantonese. The data were recorded in an audio-visual speech production experiment. The visual component of speech was measured by tracking the positions of active markers placed on the speaker's face, whereas the acoustic component was measured with an ordinary microphone. A pitch tracking algorithm is used to estimate F0 from the acoustic signal. A procedure for head motion compensation is applied to the tracked marker positions in order to separate the head and face motion components. The data are then organized into four signal groups: F0, Face, Head, Face+Head. The signals in each of these groups are parameterized by means of a polynomial approximation and then used to train an LDA (Linear Discriminant Analysis) classifier that maps the input signals into one of the output classes (the lexical tones of the language). One classifier is trained for each signal group. The ability of each signal group to predict the correct lexical tones was assessed by the accuracy of the corresponding LDA classifier. The accuracy of the classifiers was obtained by means of a k-fold cross validation method. The classifiers for all signal groups performed above chance, with F0 achieving the highest accuracy, followed by Face+Head, Face, and Head, respectively. The differences in performance between all signal groups were statistically significant." @default.
- W3136302031 created "2021-03-29" @default.
- W3136302031 creator A5022211884 @default.
- W3136302031 creator A5024821854 @default.
- W3136302031 creator A5054147303 @default.
- W3136302031 creator A5065891370 @default.
- W3136302031 date "2020-09-09" @default.
- W3136302031 modified "2023-10-16" @default.
- W3136302031 title "method for lexical tone classification in audio-visual speech" @default.
- W3136302031 cites W1560013842 @default.
- W3136302031 cites W1573484412 @default.
- W3136302031 cites W1975370719 @default.
- W3136302031 cites W2002331689 @default.
- W3136302031 cites W2008120082 @default.
- W3136302031 cites W2014621385 @default.
- W3136302031 cites W2015394094 @default.
- W3136302031 cites W2040578886 @default.
- W3136302031 cites W2052591833 @default.
- W3136302031 cites W2054721804 @default.
- W3136302031 cites W2087430784 @default.
- W3136302031 cites W2091288983 @default.
- W3136302031 cites W2092464820 @default.
- W3136302031 cites W2107831318 @default.
- W3136302031 cites W2127211243 @default.
- W3136302031 cites W2158709575 @default.
- W3136302031 cites W2401944768 @default.
- W3136302031 cites W2487770199 @default.
- W3136302031 cites W2897036885 @default.
- W3136302031 cites W2968281181 @default.
- W3136302031 cites W2997029545 @default.
- W3136302031 doi "https://doi.org/10.20396/joss.v9i00.14960" @default.
- W3136302031 hasPublicationYear "2020" @default.
- W3136302031 type Work @default.
- W3136302031 sameAs 3136302031 @default.
- W3136302031 citedByCount "0" @default.
- W3136302031 crossrefType "journal-article" @default.
- W3136302031 hasAuthorship W3136302031A5022211884 @default.
- W3136302031 hasAuthorship W3136302031A5024821854 @default.
- W3136302031 hasAuthorship W3136302031A5054147303 @default.
- W3136302031 hasAuthorship W3136302031A5065891370 @default.
- W3136302031 hasBestOaLocation W31363020311 @default.
- W3136302031 hasConcept C13895895 @default.
- W3136302031 hasConcept C153180895 @default.
- W3136302031 hasConcept C154945302 @default.
- W3136302031 hasConcept C2778263558 @default.
- W3136302031 hasConcept C28490314 @default.
- W3136302031 hasConcept C41008148 @default.
- W3136302031 hasConcept C61328038 @default.
- W3136302031 hasConcept C64922751 @default.
- W3136302031 hasConcept C68115822 @default.
- W3136302031 hasConcept C69738355 @default.
- W3136302031 hasConcept C76155785 @default.
- W3136302031 hasConcept C95623464 @default.
- W3136302031 hasConceptScore W3136302031C13895895 @default.
- W3136302031 hasConceptScore W3136302031C153180895 @default.
- W3136302031 hasConceptScore W3136302031C154945302 @default.
- W3136302031 hasConceptScore W3136302031C2778263558 @default.
- W3136302031 hasConceptScore W3136302031C28490314 @default.
- W3136302031 hasConceptScore W3136302031C41008148 @default.
- W3136302031 hasConceptScore W3136302031C61328038 @default.
- W3136302031 hasConceptScore W3136302031C64922751 @default.
- W3136302031 hasConceptScore W3136302031C68115822 @default.
- W3136302031 hasConceptScore W3136302031C69738355 @default.
- W3136302031 hasConceptScore W3136302031C76155785 @default.
- W3136302031 hasConceptScore W3136302031C95623464 @default.
- W3136302031 hasLocation W31363020311 @default.
- W3136302031 hasOpenAccess W3136302031 @default.
- W3136302031 hasPrimaryLocation W31363020311 @default.
- W3136302031 hasRelatedWork W1580724753 @default.
- W3136302031 hasRelatedWork W2146076056 @default.
- W3136302031 hasRelatedWork W2353567328 @default.
- W3136302031 hasRelatedWork W2357678230 @default.
- W3136302031 hasRelatedWork W2380927352 @default.
- W3136302031 hasRelatedWork W2497106782 @default.
- W3136302031 hasRelatedWork W2793122029 @default.
- W3136302031 hasRelatedWork W3111953316 @default.
- W3136302031 hasRelatedWork W3136302031 @default.
- W3136302031 hasRelatedWork W4322749604 @default.
- W3136302031 hasVolume "9" @default.
- W3136302031 isParatext "false" @default.
- W3136302031 isRetracted "false" @default.
- W3136302031 magId "3136302031" @default.
- W3136302031 workType "article" @default.
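The abstract above outlines a classification pipeline: each signal (F0, Face, Head, Face+Head) is parameterized by polynomial coefficients, an LDA classifier maps those coefficients to lexical tone classes, and accuracy is estimated with k-fold cross-validation. A minimal NumPy sketch of that idea follows — synthetic contours, a from-scratch LDA with a pooled covariance, and a simple k-fold loop. All names, parameters, and the data here are illustrative assumptions, not the authors' code or data.

```python
import numpy as np

def poly_features(signal, t, degree=3):
    """Parameterize a 1-D signal (e.g. an F0 contour) by the
    coefficients of a least-squares polynomial fit."""
    return np.polyfit(t, signal, degree)

class LDA:
    """Linear Discriminant Analysis with a pooled (shared) covariance."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        # Pooled within-class covariance, lightly regularized for stability.
        cov = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1)
                  for c in self.classes_)
        cov /= len(y) - len(self.classes_)
        self.cov_inv_ = np.linalg.inv(cov + 1e-6 * np.eye(X.shape[1]))
        return self

    def predict(self, X):
        # Discriminant: x' S^-1 mu_k - 0.5 mu_k' S^-1 mu_k + log prior_k
        scores = X @ self.cov_inv_ @ self.means_.T
        scores -= 0.5 * np.sum(self.means_ @ self.cov_inv_ * self.means_, axis=1)
        scores += np.log(self.priors_)
        return self.classes_[np.argmax(scores, axis=1)]

def kfold_accuracy(X, y, k=5, seed=0):
    """Mean accuracy over k shuffled folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = LDA().fit(X[train], y[train])
        accs.append(np.mean(model.predict(X[test]) == y[test]))
    return float(np.mean(accs))

# Synthetic demo: six "tones", each a noisy contour with a distinct
# slope/offset, reduced to polynomial coefficients before classification.
rng = np.random.default_rng(42)
t = np.linspace(0, 1, 50)
shapes = [t * s + c for s, c in
          [(2, 0), (-2, 2), (0, 1), (4, -1), (-4, 3), (1, 2)]]
X, y = [], []
for tone, shape in enumerate(shapes):
    for _ in range(30):
        contour = shape + rng.normal(scale=0.15, size=t.size)
        X.append(poly_features(contour, t, degree=3))
        y.append(tone)
X, y = np.array(X), np.array(y)
acc = kfold_accuracy(X, y)
```

In this synthetic setup the classes are well separated, so accuracy lands far above the 1/6 chance level — matching the abstract's finding that all signal groups performed above chance, with F0 best.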