
Cascading Global and Sequential Temporal Representations with Local Context Modeling for EEG-Based Emotion Recognition

  • Conference paper
  • First Online:
Pattern Recognition (ICPR 2024)

Abstract

Electroencephalogram (EEG)-based emotion recognition is an emerging research area in brain-computer interfaces (BCIs), as it provides a direct window into a person's cognitive states. Recent studies employ deep learning models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and Transformers, owing to their strong performance on EEG-based emotion recognition. Despite these advances, each individual network has its own modeling limitations. To learn complementary feature representations, we cascade global and sequential temporal representations with local context modeling by unifying a CNN, a Transformer, and an LSTM into one framework. To verify the effectiveness of our proposed model, we conducted extensive comparative experiments on two popular benchmark datasets for EEG-based emotion recognition, SEED-IV and DEAP, on which our model improves over recent state-of-the-art models. Our code is publicly available at: https://github.com/affctivai/ConTL.
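As a rough illustration of the cascaded design described in the abstract, the following PyTorch sketch chains a 1D CNN (local context modeling), a Transformer encoder (global temporal representation), and an LSTM (sequential temporal representation) into a single classifier. The module name `ConTLSketch`, the layer ordering, and all hyperparameters are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
# Hypothetical sketch of a CNN -> Transformer -> LSTM cascade for EEG
# classification; layer order and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ConTLSketch(nn.Module):
    def __init__(self, n_channels=32, n_classes=4, d_model=64):
        super().__init__()
        # Local context: temporal convolution over each EEG channel window
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, d_model, kernel_size=7, padding=3),
            nn.BatchNorm1d(d_model),
            nn.ELU(),
        )
        # Global temporal representation: self-attention across time steps
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Sequential temporal representation: recurrent summarization
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):            # x: (batch, channels, time)
        h = self.cnn(x)              # (batch, d_model, time)
        h = h.permute(0, 2, 1)       # (batch, time, d_model)
        h = self.transformer(h)      # (batch, time, d_model)
        _, (h_n, _) = self.lstm(h)   # h_n: (1, batch, d_model)
        return self.head(h_n[-1])    # (batch, n_classes)
```

For example, a window of 32-channel EEG with 128 time samples would map to four class logits, matching the four emotion classes of SEED-IV.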

This work was supported in part by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (RS-2023-00229074, RS-2022-00155915), in part by the National Research Foundation of Korea (NRF) grant funded by the Korea government (No. 2021R1C1C2012437), and in part by INHA UNIVERSITY Research Grant.



Author information

Correspondence to Byung Hyung Kim.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kang, H., Choi, J.W., Kim, B.H. (2025). Cascading Global and Sequential Temporal Representations with Local Context Modeling for EEG-Based Emotion Recognition. In: Antonacopoulos, A., Chaudhuri, S., Chellappa, R., Liu, CL., Bhattacharya, S., Pal, U. (eds) Pattern Recognition. ICPR 2024. Lecture Notes in Computer Science, vol 15313. Springer, Cham. https://doi.org/10.1007/978-3-031-78201-5_20

  • DOI: https://doi.org/10.1007/978-3-031-78201-5_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-78200-8

  • Online ISBN: 978-3-031-78201-5

  • eBook Packages: Computer Science, Computer Science (R0)
