Matches in SemOpenAlex for { <https://semopenalex.org/work/W4384913131> ?p ?o ?g. }
Showing items 1 to 91 of 91, with 100 items per page.
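The listing below is the result of a SPARQL triple-pattern query against SemOpenAlex. As a minimal sketch, the same pattern can be built and issued programmatically; note that the endpoint URL (`https://semopenalex.org/sparql`) and the use of a `GRAPH` clause in place of the quad-style `?p ?o ?g.` shorthand are assumptions, since standard SPARQL has no four-element triple pattern:

```python
def build_triple_query(subject_iri: str, limit: int = 100) -> str:
    """Build a SPARQL query listing every (predicate, object, graph)
    triple for one subject, mirroring the pattern shown above."""
    return (
        "SELECT ?p ?o ?g WHERE { "
        f"GRAPH ?g {{ <{subject_iri}> ?p ?o . }} "
        f"}} LIMIT {limit}"
    )

query = build_triple_query("https://semopenalex.org/work/W4384913131")
print(query)
```

The resulting string can then be POSTed to the endpoint with an `Accept: application/sparql-results+json` header to retrieve the bindings shown in this listing.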
- W4384913131 endingPage "23002" @default.
- W4384913131 startingPage "22984" @default.
- W4384913131 abstract "<abstract><p>Recently, there has been increased interest in emotion recognition. It is widely utilised in many industries, including healthcare, education and human-computer interaction (HCI). Different emotions are frequently recognised from characteristic features of human emotion. Multimodal emotion identification based on the fusion of several features is currently the subject of a growing body of research. In order to obtain superior classification performance, this work offers a deep learning model for multimodal emotion identification based on the fusion of electroencephalogram (EEG) signals and facial expressions. First, face features are extracted from the facial expressions using a pre-trained convolutional neural network (CNN). In this article, we employ CNNs to acquire spatial features from the original EEG signals. These CNNs use both regional and global convolution kernels to learn the characteristics of the left and right hemisphere channels as well as all EEG channels. Exponential canonical correlation analysis (ECCA) is used to combine highly correlated data from facial video frames and EEG after extraction. The 1-D CNN classifier uses these combined features to identify emotions. In order to assess the effectiveness of the suggested model, this research ran tests on the DEAP dataset. It is found that Multi_Modal_1D-CNN achieves 98.9% accuracy, 93.2% precision, 89.3% recall, a 94.23% F1-score and a processing time of 7 s.</p></abstract>" @default.
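The pipeline described in the abstract (feature extraction, fusion, then a 1-D CNN classifier) can be sketched with NumPy. This is an illustrative toy, not the paper's implementation: the feature dimensions and kernel size are invented, and plain concatenation stands in for the paper's ECCA fusion step:

```python
import numpy as np

# Hypothetical feature dimensions; the paper's actual sizes are not given here.
rng = np.random.default_rng(0)
eeg_feat = rng.standard_normal(64)    # spatial features from the EEG CNN (assumed size)
face_feat = rng.standard_normal(64)   # features from the pre-trained facial CNN (assumed size)

# Fusion step: the paper uses exponential CCA (ECCA); simple concatenation
# is used here only as a placeholder for the fused representation.
fused = np.concatenate([eeg_feat, face_feat])        # shape (128,)

# One 1-D convolution layer (valid padding) followed by ReLU,
# the core operation of a 1-D CNN classifier.
kernel = rng.standard_normal(5)
conv_out = np.convolve(fused, kernel, mode="valid")  # shape (124,)
activated = np.maximum(conv_out, 0.0)                # ReLU activation

print(fused.shape, activated.shape)
```

A real classifier would stack several such convolution layers and end in a softmax over the emotion classes; the sketch shows only one forward step of the fused-feature path.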
- W4384913131 created "2023-07-21" @default.
- W4384913131 creator A5034865596 @default.
- W4384913131 creator A5071305952 @default.
- W4384913131 date "2023-01-01" @default.
- W4384913131 modified "2023-10-17" @default.
- W4384913131 title "Electroencephalogram based face emotion recognition using multimodal fusion and 1-D convolutional neural network (1D-CNN) classifier" @default.
- W4384913131 cites W1661563386 @default.
- W4384913131 cites W1955614066 @default.
- W4384913131 cites W1973919178 @default.
- W4384913131 cites W2002055708 @default.
- W4384913131 cites W2035764860 @default.
- W4384913131 cites W2099019320 @default.
- W4384913131 cites W2134738818 @default.
- W4384913131 cites W2149628368 @default.
- W4384913131 cites W2799041689 @default.
- W4384913131 cites W2889717020 @default.
- W4384913131 cites W2947092242 @default.
- W4384913131 cites W3049547150 @default.
- W4384913131 cites W3101150053 @default.
- W4384913131 cites W3163907013 @default.
- W4384913131 cites W4205191859 @default.
- W4384913131 cites W4206207715 @default.
- W4384913131 cites W4210810239 @default.
- W4384913131 cites W4212858574 @default.
- W4384913131 cites W4221113116 @default.
- W4384913131 cites W4226548933 @default.
- W4384913131 cites W4295095061 @default.
- W4384913131 cites W4306167427 @default.
- W4384913131 cites W4310592224 @default.
- W4384913131 cites W4313598803 @default.
- W4384913131 cites W4319786843 @default.
- W4384913131 cites W4323642664 @default.
- W4384913131 cites W4327756935 @default.
- W4384913131 cites W4360989774 @default.
- W4384913131 cites W4361987556 @default.
- W4384913131 cites W4365790385 @default.
- W4384913131 doi "https://doi.org/10.3934/math.20231169" @default.
- W4384913131 hasPublicationYear "2023" @default.
- W4384913131 type Work @default.
- W4384913131 citedByCount "0" @default.
- W4384913131 crossrefType "journal-article" @default.
- W4384913131 hasAuthorship W4384913131A5034865596 @default.
- W4384913131 hasAuthorship W4384913131A5071305952 @default.
- W4384913131 hasBestOaLocation W43849131311 @default.
- W4384913131 hasConcept C108583219 @default.
- W4384913131 hasConcept C118552586 @default.
- W4384913131 hasConcept C153180895 @default.
- W4384913131 hasConcept C154945302 @default.
- W4384913131 hasConcept C15744967 @default.
- W4384913131 hasConcept C195704467 @default.
- W4384913131 hasConcept C206310091 @default.
- W4384913131 hasConcept C28490314 @default.
- W4384913131 hasConcept C41008148 @default.
- W4384913131 hasConcept C522805319 @default.
- W4384913131 hasConcept C52622490 @default.
- W4384913131 hasConcept C81363708 @default.
- W4384913131 hasConcept C95623464 @default.
- W4384913131 hasConceptScore W4384913131C108583219 @default.
- W4384913131 hasConceptScore W4384913131C118552586 @default.
- W4384913131 hasConceptScore W4384913131C153180895 @default.
- W4384913131 hasConceptScore W4384913131C154945302 @default.
- W4384913131 hasConceptScore W4384913131C15744967 @default.
- W4384913131 hasConceptScore W4384913131C195704467 @default.
- W4384913131 hasConceptScore W4384913131C206310091 @default.
- W4384913131 hasConceptScore W4384913131C28490314 @default.
- W4384913131 hasConceptScore W4384913131C41008148 @default.
- W4384913131 hasConceptScore W4384913131C522805319 @default.
- W4384913131 hasConceptScore W4384913131C52622490 @default.
- W4384913131 hasConceptScore W4384913131C81363708 @default.
- W4384913131 hasConceptScore W4384913131C95623464 @default.
- W4384913131 hasIssue "10" @default.
- W4384913131 hasLocation W43849131311 @default.
- W4384913131 hasOpenAccess W4384913131 @default.
- W4384913131 hasPrimaryLocation W43849131311 @default.
- W4384913131 hasRelatedWork W2059299633 @default.
- W4384913131 hasRelatedWork W2279398222 @default.
- W4384913131 hasRelatedWork W2773120646 @default.
- W4384913131 hasRelatedWork W2986507176 @default.
- W4384913131 hasRelatedWork W2995914718 @default.
- W4384913131 hasRelatedWork W3011074480 @default.
- W4384913131 hasRelatedWork W3156786002 @default.
- W4384913131 hasRelatedWork W3180630304 @default.
- W4384913131 hasRelatedWork W4299822940 @default.
- W4384913131 hasRelatedWork W564581980 @default.
- W4384913131 hasVolume "8" @default.
- W4384913131 isParatext "false" @default.
- W4384913131 isRetracted "false" @default.
- W4384913131 workType "article" @default.