Publication:
Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions

dc.contributor.author: GÜLER, SONGÜL ERDEM
dc.contributor.author: AKBULUT, FATMA PATLAR
dc.date.accessioned: 2025-05-13T08:22:06Z
dc.date.issued: 2025
dc.description.abstract: Despite advances in the field of emotion recognition, the field still faces two main limitations: the reliance on deep models for increasingly complex computations and the identification of emotions across diverse data types. This study aims to advance knowledge of multimodal emotion recognition by combining electroencephalography (EEG) signals with facial expressions, using advanced models such as the Transformer, Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU). The results validate the effectiveness of this approach, demonstrating the high accuracy of the GRU model, which achieved an average classification accuracy of 91.8% on unimodal (EEG-only) data and 97.8% on multimodal (EEG and facial expressions) data across multi-class emotion categories. The findings emphasize that, within a multi-class classification framework, multimodal approaches offer significant improvements over traditional unimodal techniques. This work presents a framework that captures complex neural dynamics and visible emotional cues, enhancing the robustness and accuracy of emotion recognition systems. These results have important practical implications, showing how integrating diverse data sources with advanced models can overcome the limitations of single-modality systems.
dc.identifier: 13
dc.identifier.citation: S. Erdem Güler and F. Patlar Akbulut, "Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions," in IEEE Access, vol. 13, pp. 24587-24603, 2025.
dc.identifier.issn: 2169-3536
dc.identifier.scopus: 2-s2.0-85217527778
dc.identifier.uri: https://doi.org/10.1109/ACCESS.2025.3538642
dc.identifier.uri: https://hdl.handle.net/11413/9565
dc.identifier.wos: 001420279300024
dc.language.iso: en
dc.publisher: IEEE
dc.relation.journal: IEEE Access
dc.rights: info:eu-repo/semantics/openAccess
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Deep Learning
dc.subject: EEG Signals
dc.subject: Emotion Recognition
dc.subject: Facial Expressions
dc.subject: Human Computer Interaction
dc.title: Multimodal Emotion Recognition: Emotion Classification Through the Integration of EEG and Facial Expressions
dc.type: Article
dspace.entity.type: Publication
local.indexed.at: WOS
local.indexed.at: Scopus
local.journal.endpage: 24603
local.journal.startpage: 24587
relation.isAuthorOfPublication: 16c815c6-a2cb-439b-b155-9ca020f8cc04
relation.isAuthorOfPublication.latestForDiscovery: 16c815c6-a2cb-439b-b155-9ca020f8cc04

Files

Original bundle

Name: Tam Metin/Full Text
Size: 1.48 MB
Format: Adobe Portable Document Format

License bundle

Name: license.txt
Size: 1.81 KB
Format: Item-specific license agreed upon at submission