EEG-based emotion recognition with deep convolutional neural networks

Ozdemir M. A., Degirmenci M., Izci E., Akan A.

BIOMEDICAL ENGINEERING-BIOMEDIZINISCHE TECHNIK, vol.66, no.1, pp.43-57, 2021 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 66 Issue: 1
  • Publication Date: 2021
  • Doi Number: 10.1515/bmt-2019-0306
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, BIOSIS, Biotechnology Research Abstracts, EMBASE, INSPEC, MEDLINE
  • Page Numbers: pp.43-57
  • Keywords: azimuthal equidistant projection technique, brain mapping, deep learning, EEG images, electroencephalogram, emotion estimation, classification, signals, models
  • Van Yüzüncü Yıl University Affiliated: No


The emotional state of people plays a key role in physiological and behavioral human interaction. Emotional state analysis involves many fields, such as neuroscience, cognitive science, and biomedical engineering, because the parameters of interest reflect the complex neuronal activity of the brain. Electroencephalogram (EEG) signals are processed to interface brain activity with external systems and to make predictions about emotional states. This paper proposes a novel method for emotion recognition based on deep convolutional neural networks (CNNs), which are used to classify the Valence, Arousal, Dominance, and Liking emotional states. The approach operates on time series of multi-channel EEG signals from the Database for Emotion Analysis using Physiological Signals (DEAP). We estimate emotional states through CNN-based classification of multi-spectral topographic images obtained from the EEG signals. In contrast to most EEG-based approaches, which discard the spatial information of EEG signals, converting the EEG signals into a sequence of multi-spectral topographic images preserves their temporal, spectral, and spatial information. A deep recurrent convolutional network is trained to learn important representations from sequences of these three-channel topographic images. We achieved test accuracies of 90.62% for negative vs. positive Valence, 86.13% for high vs. low Arousal, 88.48% for high vs. low Dominance, and 86.23% for like vs. unlike. Evaluation of this method on the emotion recognition problem revealed significant improvements in classification accuracy compared with other studies using deep neural networks (DNNs) and one-dimensional CNNs.
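The core preprocessing idea described above (mapping per-channel EEG band powers onto a 2D head plane via an azimuthal equidistant projection, then interpolating them into a three-channel image) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the electrode coordinates, the 32x32 grid size, and the inverse-distance interpolation (in place of a more careful scheme such as cubic interpolation) are all assumptions made for the example.

```python
import numpy as np

def azim_equidist_project(xyz):
    """Project 3D electrode positions on the unit sphere to a 2D plane.

    Azimuthal equidistant projection centered at the +z pole (top of
    the head): radial distance on the plane equals the arc distance
    from the pole, so relative electrode spacing is preserved.
    """
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.hypot(x, y)
    arc = np.arctan2(r, z)                      # angular distance from the pole
    scale = np.where(r > 0, arc / np.maximum(r, 1e-12), 0.0)
    return np.stack([x * scale, y * scale], axis=1)

def band_powers_to_image(pos2d, powers, grid=32):
    """Interpolate per-electrode band powers onto a (grid, grid, 3) image.

    powers: (n_electrodes, 3) array, one column per frequency band
    (e.g. theta, alpha, beta), giving one image channel per band.
    Inverse-distance weighting is used here as a simple stand-in
    for a full interpolation pipeline.
    """
    lim = np.abs(pos2d).max() * 1.1
    gx, gy = np.meshgrid(np.linspace(-lim, lim, grid),
                         np.linspace(-lim, lim, grid))
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d2 = ((pts[:, None, :] - pos2d[None, :, :]) ** 2).sum(axis=-1)
    w = 1.0 / np.maximum(d2, 1e-9)
    w /= w.sum(axis=1, keepdims=True)           # convex weights per pixel
    img = w @ powers                            # (grid*grid, 3)
    return img.reshape(grid, grid, 3)

# Toy example: 4 hypothetical electrodes on a unit sphere, random band powers.
xyz = np.array([[0.0, 0.7, 0.714], [0.0, -0.7, 0.714],
                [0.7, 0.0, 0.714], [-0.7, 0.0, 0.714]])
xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)
pos2d = azim_equidist_project(xyz)
powers = np.random.default_rng(0).random((4, 3))
image = band_powers_to_image(pos2d, powers)
print(image.shape)                              # one 3-channel topographic frame
```

Repeating this over sliding windows of the EEG recording yields the sequence of three-channel topographic images that the recurrent convolutional network is then trained on.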