Can an Affect-Sensitive System Afford to Be Context Independent?

  • Conference paper
  • First Online:
Modeling and Using Context (CONTEXT 2017)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 10257))

Abstract

There has been a wave of interest in affect recognition among researchers in affective computing. Most of this research takes a context-independent approach. Since humans may misunderstand another's observed facial, vocal, or body behavior without contextual knowledge, we question whether any of these human-centric affect-sensitive systems can be robust without it. To answer this question, we conducted a study using previously studied audio files under three settings: no contextual indication, one level of contextual knowledge (either action or relationship/environment), and two levels of contextual knowledge (both action and relationship/environment). Our results confirm that contextual knowledge does improve recognition of human emotion.
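The three-condition comparison described in the abstract can be sketched as a simple per-condition accuracy computation. This is an illustrative sketch only: the condition names, trial records, and emotion labels below are hypothetical and are not taken from the paper's data.

```python
# Hypothetical sketch of the study design from the abstract: the same audio
# clips are judged under (1) no context, (2) one level of contextual
# knowledge (action OR relationship/environment), and (3) two levels (both).
# All trial data below is made up for illustration.
from collections import defaultdict

# (condition, true_emotion, judged_emotion) -- illustrative trial records
trials = [
    ("no_context", "anger", "fear"),
    ("no_context", "joy", "joy"),
    ("one_level", "anger", "anger"),
    ("one_level", "joy", "joy"),
    ("two_levels", "anger", "anger"),
    ("two_levels", "joy", "joy"),
]

def accuracy_by_condition(records):
    """Fraction of correct emotion judgements per context condition."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for cond, truth, judged in records:
        total[cond] += 1
        correct[cond] += int(truth == judged)
    return {cond: correct[cond] / total[cond] for cond in total}

print(accuracy_by_condition(trials))
```

With data of this shape, the paper's claim corresponds to accuracy rising as the condition moves from no context toward two levels of contextual knowledge.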



Author information

Corresponding author

Correspondence to Andreas Marpaung.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Marpaung, A., Gonzalez, A. (2017). Can an Affect-Sensitive System Afford to Be Context Independent? In: Brézillon, P., Turner, R., Penco, C. (eds.) Modeling and Using Context. CONTEXT 2017. Lecture Notes in Computer Science, vol. 10257. Springer, Cham. https://doi.org/10.1007/978-3-319-57837-8_38

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-57837-8_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57836-1

  • Online ISBN: 978-3-319-57837-8

  • eBook Packages: Computer Science, Computer Science (R0)
