Matches in SemOpenAlex for { <https://semopenalex.org/work/W112273709> ?p ?o ?g. }
- W112273709 abstract "Emotion has been shown to have a large impact on our interactions with people and devices. In our daily lives, however, these emotions are not taken into account when working with our computers and other machines. If our devices could pick up on social cues, for instance in relation to disinterest, the usability of various systems could be improved. Current software allows us to detect specific movements in people's faces from video recordings. Using these movements, facial expressions can be linked to specific emotions, allowing for the incorporation of this information in various systems. One application would be to allow a TV to monitor its viewer, suggesting alternative videos to watch when negative emotions are shown. An often used system to describe these specific facial muscle movements is the Facial Action Coding System (FACS). Despite the widespread use of this method, little research has been conducted on the use of FACS measurements to classify viewer emotion of entire videos. In this thesis we evaluated whether it is possible to use FACS measurements to perform classification on emotional labels in real-world environments. To assess the possibility of this application, we conducted a wide range of experiments. We selected an existing method that uses a public dataset of naturally occurring emotions and reproduced this method. Additionally, we developed our own, alternative method. In a novel comparison we evaluated the performance of both methods on three different datasets, selected to cover a range of demographics and experimental settings (highly controlled to near-living-room conditions). Furthermore we evaluated the inclusion of the TV viewer's head orientation. This proved to be beneficial for two datasets. One of the datasets used in our work provided access to heart rate data of the subjects. Based on this data, we included the subject's heart rate and other derived features. We found that this improved performance when training using the history of a specific person. Finally we performed a novel experiment in which we asked a crowd of laymen to annotate videos from each of the three datasets. This multi-dataset evaluation provided us with a reference of how well humans were able to detect the emotion experienced by the subjects using their facial expressions, allowing for a direct comparison with automatic classification methods. Overall we found that (1) using different data processing and aggregation, classification performance can improve and (2) that human annotation of emotional responses offers a way to compare classification difficulty between datasets and performance between classification methods." @default.
- W112273709 created "2016-06-24" @default.
- W112273709 creator A5005087979 @default.
- W112273709 date "2014-08-26" @default.
- W112273709 modified "2023-09-27" @default.
- W112273709 title "Classification of valence using facial expressions of TV-viewers" @default.
- W112273709 cites W1488437447 @default.
- W112273709 cites W1520861770 @default.
- W112273709 cites W153428097 @default.
- W112273709 cites W178801863 @default.
- W112273709 cites W1809872410 @default.
- W112273709 cites W1941267885 @default.
- W112273709 cites W1965696296 @default.
- W112273709 cites W1966797434 @default.
- W112273709 cites W1968600824 @default.
- W112273709 cites W1993446323 @default.
- W112273709 cites W2009926717 @default.
- W112273709 cites W2019312772 @default.
- W112273709 cites W2035941748 @default.
- W112273709 cites W2040878866 @default.
- W112273709 cites W2049437801 @default.
- W112273709 cites W2060201548 @default.
- W112273709 cites W2064149108 @default.
- W112273709 cites W2083625724 @default.
- W112273709 cites W2099585577 @default.
- W112273709 cites W2100189108 @default.
- W112273709 cites W2101234009 @default.
- W112273709 cites W2103943262 @default.
- W112273709 cites W2106390385 @default.
- W112273709 cites W2111926505 @default.
- W112273709 cites W2112393832 @default.
- W112273709 cites W2117141451 @default.
- W112273709 cites W2117645142 @default.
- W112273709 cites W2120132741 @default.
- W112273709 cites W2123649031 @default.
- W112273709 cites W2126292754 @default.
- W112273709 cites W2133180260 @default.
- W112273709 cites W2134031328 @default.
- W112273709 cites W2135837828 @default.
- W112273709 cites W2137495700 @default.
- W112273709 cites W2138745909 @default.
- W112273709 cites W2139212933 @default.
- W112273709 cites W2143426320 @default.
- W112273709 cites W2146682940 @default.
- W112273709 cites W2146780613 @default.
- W112273709 cites W2149628368 @default.
- W112273709 cites W2153782322 @default.
- W112273709 cites W2161634108 @default.
- W112273709 cites W2164777277 @default.
- W112273709 cites W2164985412 @default.
- W112273709 cites W2171939880 @default.
- W112273709 cites W2182577255 @default.
- W112273709 cites W2339343773 @default.
- W112273709 cites W2411694054 @default.
- W112273709 cites W3034751874 @default.
- W112273709 cites W652662681 @default.
- W112273709 cites W2995034616 @default.
- W112273709 cites W2995846001 @default.
- W112273709 hasPublicationYear "2014" @default.
- W112273709 type Work @default.
- W112273709 sameAs 112273709 @default.
- W112273709 citedByCount "0" @default.
- W112273709 crossrefType "journal-article" @default.
- W112273709 hasAuthorship W112273709A5005087979 @default.
- W112273709 hasConcept C105795698 @default.
- W112273709 hasConcept C107457646 @default.
- W112273709 hasConcept C121332964 @default.
- W112273709 hasConcept C154945302 @default.
- W112273709 hasConcept C168900304 @default.
- W112273709 hasConcept C170130773 @default.
- W112273709 hasConcept C179518139 @default.
- W112273709 hasConcept C195704467 @default.
- W112273709 hasConcept C33923547 @default.
- W112273709 hasConcept C40346341 @default.
- W112273709 hasConcept C41008148 @default.
- W112273709 hasConcept C62520636 @default.
- W112273709 hasConcept C6438553 @default.
- W112273709 hasConceptScore W112273709C105795698 @default.
- W112273709 hasConceptScore W112273709C107457646 @default.
- W112273709 hasConceptScore W112273709C121332964 @default.
- W112273709 hasConceptScore W112273709C154945302 @default.
- W112273709 hasConceptScore W112273709C168900304 @default.
- W112273709 hasConceptScore W112273709C170130773 @default.
- W112273709 hasConceptScore W112273709C179518139 @default.
- W112273709 hasConceptScore W112273709C195704467 @default.
- W112273709 hasConceptScore W112273709C33923547 @default.
- W112273709 hasConceptScore W112273709C40346341 @default.
- W112273709 hasConceptScore W112273709C41008148 @default.
- W112273709 hasConceptScore W112273709C62520636 @default.
- W112273709 hasConceptScore W112273709C6438553 @default.
- W112273709 hasLocation W1122737091 @default.
- W112273709 hasOpenAccess W112273709 @default.
- W112273709 hasPrimaryLocation W1122737091 @default.
- W112273709 hasRelatedWork W1159756647 @default.
- W112273709 hasRelatedWork W2038644873 @default.
- W112273709 hasRelatedWork W2407034139 @default.
- W112273709 hasRelatedWork W2426188534 @default.
- W112273709 hasRelatedWork W2552197931 @default.
- W112273709 hasRelatedWork W2765182984 @default.
- W112273709 hasRelatedWork W2793297724 @default.
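
The listing above can be reproduced programmatically. Below is a minimal Python sketch, assuming the public SemOpenAlex SPARQL endpoint is reachable at https://semopenalex.org/sparql and speaks the standard SPARQL 1.1 protocol with JSON results (both are assumptions; adjust the URL or request style if the service is hosted differently). It issues the same `?p ?o ?g` pattern shown in the header and prints one predicate/object/graph row per match, using only the `requests` library.

```python
# Minimal sketch: re-run the property listing for W112273709 against SemOpenAlex.
# Assumptions: the public SPARQL endpoint lives at https://semopenalex.org/sparql
# and returns application/sparql-results+json for SELECT queries.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed endpoint URL

# Same basic graph pattern as in the header: every ?p ?o ?g match for the work.
QUERY = """
SELECT ?p ?o ?g
WHERE {
  GRAPH ?g {
    <https://semopenalex.org/work/W112273709> ?p ?o .
  }
}
"""

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
response.raise_for_status()

# Each binding corresponds to one "- W112273709 <p> <o> @<g>." row in the listing.
for row in response.json()["results"]["bindings"]:
    print(row["p"]["value"], row["o"]["value"], row["g"]["value"])
```

If the store keeps these triples in the default graph only, dropping the `GRAPH ?g { ... }` wrapper (and the `?g` projection) should return the same predicate/object pairs.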