Matches in SemOpenAlex for { <https://semopenalex.org/work/W4308332509> ?p ?o ?g. }
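The pattern above is a SPARQL quad pattern over this work's URI, with ?g the (default) graph. Below is a minimal sketch of running the equivalent query against SemOpenAlex's public SPARQL endpoint; the endpoint URL and the plain-`requests` approach are assumptions based on SemOpenAlex's documentation, not part of this listing.

```python
# Hedged sketch: fetch all predicate/object pairs for this work from the
# SemOpenAlex SPARQL endpoint. The endpoint URL is an assumption; the ?g
# graph column shown in the listing ("@default") is the default graph.
import requests

ENDPOINT = "https://semopenalex.org/sparql"  # assumed public endpoint

QUERY = """
SELECT ?p ?o WHERE {
  <https://semopenalex.org/work/W4308332509> ?p ?o .
}
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()

# Standard SPARQL JSON results layout: results -> bindings -> variable -> value.
for b in resp.json()["results"]["bindings"]:
    print(b["p"]["value"], b["o"]["value"])
```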
- W4308332509 endingPage "8467" @default.
- W4308332509 startingPage "8467" @default.
- W4308332509 abstract "Emotion recognition, or the ability of computers to interpret people's emotional states, is an active research area with broad applications for improving people's lives. However, most image-based emotion recognition techniques are unreliable, as humans can intentionally hide their emotions by changing their facial expressions. Consequently, brain signals are being used to detect human emotions with improved accuracy, but most proposed systems perform poorly because EEG signals are difficult to classify using standard machine learning and deep learning techniques. This paper proposes two convolutional neural network (CNN) models (M1: a heavily parameterized CNN model; M2: a lightly parameterized CNN model) coupled with effective feature extraction methods. In this study, the most popular EEG benchmark dataset, DEAP, is used with two of its labels, valence and arousal, for binary classification. We use the Fast Fourier Transform (FFT) to extract frequency-domain features, convolutional layers for deep features, and complementary features to represent the dataset. The M1 and M2 CNN models achieve nearly perfect accuracies of 99.89% and 99.22%, respectively, outperforming every previous state-of-the-art model. We empirically demonstrate that the M2 model requires only 2 seconds of EEG signal to reach 99.22% accuracy and achieves over 96% accuracy with only 125 milliseconds of EEG data for valence classification. Moreover, the proposed M2 model achieves 96.8% accuracy on valence using only 10% of the training dataset, demonstrating the proposed system's effectiveness. Documented implementation code for every experiment is published for reproducibility." @default.
- W4308332509 created "2022-11-11" @default.
- W4308332509 creator A5013355721 @default.
- W4308332509 creator A5015314974 @default.
- W4308332509 creator A5044032604 @default.
- W4308332509 creator A5053522689 @default.
- W4308332509 creator A5076451838 @default.
- W4308332509 date "2022-11-03" @default.
- W4308332509 modified "2023-09-27" @default.
- W4308332509 title "M1M2: Deep-Learning-Based Real-Time Emotion Recognition from Neural Activity" @default.
- W4308332509 cites W2002055708 @default.
- W4308332509 cites W2165857685 @default.
- W4308332509 cites W2599124244 @default.
- W4308332509 cites W2604936044 @default.
- W4308332509 cites W2731964405 @default.
- W4308332509 cites W2771734292 @default.
- W4308332509 cites W2800938746 @default.
- W4308332509 cites W2803881474 @default.
- W4308332509 cites W2810418809 @default.
- W4308332509 cites W2889105179 @default.
- W4308332509 cites W2909533222 @default.
- W4308332509 cites W2932628637 @default.
- W4308332509 cites W2936712672 @default.
- W4308332509 cites W2938736450 @default.
- W4308332509 cites W2944401411 @default.
- W4308332509 cites W2947658250 @default.
- W4308332509 cites W2960600329 @default.
- W4308332509 cites W2962905870 @default.
- W4308332509 cites W2963568316 @default.
- W4308332509 cites W2981004543 @default.
- W4308332509 cites W2981372722 @default.
- W4308332509 cites W2982299617 @default.
- W4308332509 cites W2983840038 @default.
- W4308332509 cites W2997560618 @default.
- W4308332509 cites W2998500327 @default.
- W4308332509 cites W3003207095 @default.
- W4308332509 cites W3005864656 @default.
- W4308332509 cites W3009120439 @default.
- W4308332509 cites W3009814846 @default.
- W4308332509 cites W3014215018 @default.
- W4308332509 cites W3014658201 @default.
- W4308332509 cites W3016167515 @default.
- W4308332509 cites W3016775848 @default.
- W4308332509 cites W3020487153 @default.
- W4308332509 cites W3027581678 @default.
- W4308332509 cites W3038474676 @default.
- W4308332509 cites W3047434002 @default.
- W4308332509 cites W3082894964 @default.
- W4308332509 cites W3083218890 @default.
- W4308332509 cites W3084484668 @default.
- W4308332509 cites W3089148108 @default.
- W4308332509 cites W3095937415 @default.
- W4308332509 cites W3100777112 @default.
- W4308332509 cites W3102822077 @default.
- W4308332509 cites W3108087271 @default.
- W4308332509 cites W3108484628 @default.
- W4308332509 cites W3108564553 @default.
- W4308332509 cites W3109961563 @default.
- W4308332509 cites W3110327404 @default.
- W4308332509 cites W3116615529 @default.
- W4308332509 cites W3118932394 @default.
- W4308332509 cites W3119911037 @default.
- W4308332509 cites W3123409499 @default.
- W4308332509 cites W3126625480 @default.
- W4308332509 cites W3128898807 @default.
- W4308332509 cites W3138409313 @default.
- W4308332509 cites W3155739706 @default.
- W4308332509 cites W3157999215 @default.
- W4308332509 cites W3160343815 @default.
- W4308332509 cites W3188833721 @default.
- W4308332509 cites W3193300679 @default.
- W4308332509 cites W3193682252 @default.
- W4308332509 cites W3195488783 @default.
- W4308332509 cites W3195508803 @default.
- W4308332509 cites W3205803445 @default.
- W4308332509 cites W3207550576 @default.
- W4308332509 cites W3209683092 @default.
- W4308332509 cites W3210145349 @default.
- W4308332509 cites W3216927580 @default.
- W4308332509 cites W4200101550 @default.
- W4308332509 cites W4206290592 @default.
- W4308332509 cites W4210266523 @default.
- W4308332509 cites W4211211720 @default.
- W4308332509 cites W4213199906 @default.
- W4308332509 cites W4214492190 @default.
- W4308332509 cites W4229021892 @default.
- W4308332509 cites W4282596231 @default.
- W4308332509 cites W4284690152 @default.
- W4308332509 cites W4284959866 @default.
- W4308332509 cites W4289110083 @default.
- W4308332509 cites W4291743749 @default.
- W4308332509 cites W4293009030 @default.
- W4308332509 cites W4293661173 @default.
- W4308332509 cites W4293776887 @default.
- W4308332509 doi "https://doi.org/10.3390/s22218467" @default.
- W4308332509 hasPubMedId "https://pubmed.ncbi.nlm.nih.gov/36366164" @default.
- W4308332509 hasPublicationYear "2022" @default.
- W4308332509 type Work @default.
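The DOI, PubMed ID, and publication year listed above can be cross-checked against the OpenAlex REST API, from which SemOpenAlex is derived; the work ID maps directly. A small sketch follows; the field names reflect OpenAlex's public work schema, and the response shape should be verified against the live API.

```python
# Hedged sketch: cross-check the listed identifiers via the OpenAlex REST API.
import requests

work = requests.get("https://api.openalex.org/works/W4308332509", timeout=30).json()
print(work["title"])             # expect the M1M2 title above
print(work["doi"])               # expect https://doi.org/10.3390/s22218467
print(work["publication_year"])  # expect 2022
```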
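The abstract quoted above describes extracting frequency-domain features with the FFT from short EEG windows (down to 125 ms) of the DEAP dataset. The sketch below illustrates that step only, under stated assumptions (DEAP's preprocessed 32-channel EEG sampled at 128 Hz; per-channel magnitude spectra as features); it is not the authors' published implementation.

```python
# Hedged sketch of FFT-based frequency-domain feature extraction, as the
# abstract describes. Assumes DEAP's preprocessed EEG: 32 channels at 128 Hz.
import numpy as np

FS = 128  # sampling rate (Hz) of DEAP's preprocessed EEG

def fft_features(window: np.ndarray) -> np.ndarray:
    """Magnitude spectrum per channel for one (n_channels, n_samples) window."""
    return np.abs(np.fft.rfft(window, axis=-1))

rng = np.random.default_rng(0)

# A 2-second window (the abstract's setting for 99.22% accuracy): 256 samples.
two_sec = rng.standard_normal((32, 2 * FS))
print(fft_features(two_sec).shape)   # (32, 129) -- 256 samples -> 129 rfft bins

# A 125 ms window (the abstract's minimal setting): 128 Hz / 8 = 16 samples.
short = rng.standard_normal((32, FS // 8))
print(fft_features(short).shape)     # (32, 9)
```

These magnitude spectra would then be fed to the CNNs (M1 or M2) for binary valence/arousal classification, per the pipeline the abstract outlines.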