Matches in SemOpenAlex for { <https://semopenalex.org/work/W4288081307> ?p ?o ?g. }
- W4288081307 endingPage "5611" @default.
- W4288081307 startingPage "5611" @default.
- W4288081307 abstract "Automatic recognition of human emotions is not a trivial process. Many internal and external factors affect emotions, and emotions can be expressed in many ways, such as text, speech, body gestures, or physiological body responses. Emotion detection enables many applications, such as adaptive user interfaces, interactive games, and human-robot interaction. The availability of advanced technologies such as mobile devices, sensors, and data-analytics tools makes it possible to collect data from various sources, enabling researchers to predict human emotions accurately; however, most current research collects such data in laboratory experiments. In this work, we use direct, real-time sensor data to construct a subject-independent (generic) multi-modal emotion prediction model. This research integrates on-body physiological markers, surrounding sensory data, and emotion measurements to achieve the following goals: (1) collecting a multi-modal data set comprising environmental data, body responses, and emotions; (2) creating subject-independent predictive models of emotional states based on fusing environmental and physiological variables; (3) assessing ensemble learning methods, comparing their performance in creating a generic subject-independent model for emotion recognition with high accuracy, and comparing the results with previous similar research. To achieve this, we conducted a real-world study “in the wild” with physiological and mobile sensors, collecting the data set from participants walking around the Minia University campus to create accurate predictive models. Various ensemble learning models (bagging, boosting, and stacking) were evaluated, combining K-Nearest Neighbors (KNN), Decision Tree (DT), Random Forest (RF), and Support Vector Machine (SVM) as base learners, with DT as a meta-classifier.
The results showed that the stacking ensemble technique achieved the best accuracy, 98.2%, compared with the other ensemble learning variants; bagging and boosting achieved accuracy levels of 96.4% and 96.6%, respectively." @default.
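The stacking configuration described in the abstract (KNN, DT, RF, and SVM as base learners with a DT meta-classifier) can be sketched with scikit-learn. This is a minimal illustration, not the authors' actual pipeline: the synthetic three-class dataset stands in for the fused environmental/physiological features, and all hyperparameters are placeholder assumptions.

```python
# Hedged sketch of a stacking ensemble in the spirit of the paper:
# KNN, Decision Tree, Random Forest, and SVM base learners combined by
# a Decision Tree meta-classifier. Dataset and hyperparameters are
# illustrative placeholders, not the study's real configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the fused on-body + environmental feature matrix.
X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

base_learners = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# Base-learner predictions (via 5-fold CV) become the meta-classifier's input.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=DecisionTreeClassifier(random_state=0),
    cv=5)
stack.fit(X_train, y_train)
print(f"held-out accuracy: {stack.score(X_test, y_test):.3f}")
```

Swapping `StackingClassifier` for `BaggingClassifier` or `AdaBoostClassifier` over the same base learners would give the bagging and boosting variants the abstract compares against.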
- W4288081307 created "2022-07-28" @default.
- W4288081307 creator A5034886364 @default.
- W4288081307 creator A5056436780 @default.
- W4288081307 creator A5061493803 @default.
- W4288081307 creator A5069389579 @default.
- W4288081307 date "2022-07-27" @default.
- W4288081307 modified "2023-10-17" @default.
- W4288081307 title "Evaluating Ensemble Learning Methods for Multi-Modal Emotion Recognition Using Sensor Data Fusion" @default.
- W4288081307 cites W1698203258 @default.
- W4288081307 cites W1963753144 @default.
- W4288081307 cites W1981303445 @default.
- W4288081307 cites W1985400193 @default.
- W4288081307 cites W1987060620 @default.
- W4288081307 cites W2002208808 @default.
- W4288081307 cites W2013815131 @default.
- W4288081307 cites W2028518778 @default.
- W4288081307 cites W2041660793 @default.
- W4288081307 cites W2052431898 @default.
- W4288081307 cites W2071878275 @default.
- W4288081307 cites W2078671978 @default.
- W4288081307 cites W2086337007 @default.
- W4288081307 cites W2096970808 @default.
- W4288081307 cites W2114580955 @default.
- W4288081307 cites W2127511193 @default.
- W4288081307 cites W2131274108 @default.
- W4288081307 cites W2133297572 @default.
- W4288081307 cites W2142603975 @default.
- W4288081307 cites W2145230962 @default.
- W4288081307 cites W2145710484 @default.
- W4288081307 cites W2158449659 @default.
- W4288081307 cites W2164368909 @default.
- W4288081307 cites W2164699598 @default.
- W4288081307 cites W2167557160 @default.
- W4288081307 cites W2171801645 @default.
- W4288081307 cites W2229721480 @default.
- W4288081307 cites W2235060430 @default.
- W4288081307 cites W2460431946 @default.
- W4288081307 cites W2462877185 @default.
- W4288081307 cites W2494177206 @default.
- W4288081307 cites W2518937691 @default.
- W4288081307 cites W2519256581 @default.
- W4288081307 cites W2587299955 @default.
- W4288081307 cites W2617151543 @default.
- W4288081307 cites W2754966749 @default.
- W4288081307 cites W2808649502 @default.
- W4288081307 cites W2889187528 @default.
- W4288081307 cites W2898242330 @default.
- W4288081307 cites W2903462437 @default.
- W4288081307 cites W2911220936 @default.
- W4288081307 cites W2968017642 @default.
- W4288081307 cites W2985653130 @default.
- W4288081307 cites W2997026866 @default.
- W4288081307 cites W3001587372 @default.
- W4288081307 cites W3003908700 @default.
- W4288081307 cites W3004330901 @default.
- W4288081307 cites W3034151881 @default.
- W4288081307 cites W3047346431 @default.
- W4288081307 cites W3159419921 @default.
- W4288081307 cites W3191580337 @default.
- W4288081307 cites W3201809877 @default.
- W4288081307 cites W4200062528 @default.
- W4288081307 cites W4214492190 @default.
- W4288081307 cites W4223909097 @default.
- W4288081307 cites W4229024336 @default.
- W4288081307 doi "https://doi.org/10.3390/s22155611" @default.
- W4288081307 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/35957167" @default.
- W4288081307 hasPublicationYear "2022" @default.
- W4288081307 type Work @default.
- W4288081307 citedByCount "6" @default.
- W4288081307 countsByYear W42880813072022 @default.
- W4288081307 countsByYear W42880813072023 @default.
- W4288081307 crossrefType "journal-article" @default.
- W4288081307 hasAuthorship W4288081307A5034886364 @default.
- W4288081307 hasAuthorship W4288081307A5056436780 @default.
- W4288081307 hasAuthorship W4288081307A5061493803 @default.
- W4288081307 hasAuthorship W4288081307A5069389579 @default.
- W4288081307 hasBestOaLocation W42880813071 @default.
- W4288081307 hasConcept C107457646 @default.
- W4288081307 hasConcept C111919701 @default.
- W4288081307 hasConcept C119857082 @default.
- W4288081307 hasConcept C154945302 @default.
- W4288081307 hasConcept C177264268 @default.
- W4288081307 hasConcept C199360897 @default.
- W4288081307 hasConcept C207347870 @default.
- W4288081307 hasConcept C33954974 @default.
- W4288081307 hasConcept C41008148 @default.
- W4288081307 hasConcept C45942800 @default.
- W4288081307 hasConcept C46686674 @default.
- W4288081307 hasConcept C98045186 @default.
- W4288081307 hasConceptScore W4288081307C107457646 @default.
- W4288081307 hasConceptScore W4288081307C111919701 @default.
- W4288081307 hasConceptScore W4288081307C119857082 @default.
- W4288081307 hasConceptScore W4288081307C154945302 @default.
- W4288081307 hasConceptScore W4288081307C177264268 @default.
- W4288081307 hasConceptScore W4288081307C199360897 @default.
- W4288081307 hasConceptScore W4288081307C207347870 @default.
- W4288081307 hasConceptScore W4288081307C33954974 @default.