Abstract
The traditional broadcast TV viewing experience has barely evolved since its inception, remaining mostly static despite many technical advances. Smart TVs attempt to fill this gap but present challenges, such as functionality restricted to specific models and a lack of standardization. Privacy concerns also arise as smart TVs connect to advertising and monitoring services. Within the spectrum of interactivity, one option that stands out is affective computing, an interdisciplinary field that seeks to develop systems capable of recognizing, expressing and responding to human emotions. This work proposes incorporating affective computing techniques and concepts to improve the digital TV experience and its interactivity, a proposal we call “Affective TV”. The work presents a modular architecture, recognition modules developed for multiple modes of interaction, and a fully operational implementation of the architecture for Ginga, the Brazilian digital TV middleware standard. Affective TV uses audio and video capture devices and allows users to configure their own environments. Recognition modules capture and classify data, communicating directly with the TV middleware. Proof-of-concept applications combining voice and hand-pose interactions with facial expression recognition were evaluated using the GQM approach, employing the UEQ-S and TAM questionnaires. Very positive results were obtained, including an excellent UEQ rating, demonstrating technical feasibility, attractiveness, user experience, perceived usefulness, and ease of use. The proposal enriches the digital TV experience, providing a novel interactive model with user-centric customization and emotion-driven responses.
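To illustrate the architecture described above, the following is a minimal sketch of a recognition module that classifies sensor data and sends events to the TV middleware. The transport, host, port, and message fields are assumptions for illustration only (the abstract does not specify the paper's actual protocol); here a plain TCP connection carries newline-delimited JSON events, and the classifier is a placeholder.

```python
# Minimal sketch (not the paper's actual protocol): a recognition module that
# classifies captured data and pushes events to the TV middleware over a plain
# TCP socket as newline-delimited JSON. Host, port, and message fields are
# hypothetical placeholders.
import json
import socket
import time

MIDDLEWARE_HOST = "127.0.0.1"  # assumed: middleware-side listener on the receiver
MIDDLEWARE_PORT = 5000         # hypothetical port


def classify_frame() -> dict:
    """Placeholder for a real recognizer (facial expression, voice, hand pose)."""
    return {"modality": "facial-expression", "label": "happy", "confidence": 0.87}


def main() -> None:
    with socket.create_connection((MIDDLEWARE_HOST, MIDDLEWARE_PORT)) as conn:
        while True:
            event = classify_frame()
            event["timestamp"] = time.time()
            # The middleware application would parse each line and adapt the
            # running interactive application to the recognized emotion.
            conn.sendall((json.dumps(event) + "\n").encode("utf-8"))
            time.sleep(1.0)


if __name__ == "__main__":
    main()
```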
Notes
- 2.
Twitch.tv emotes are images used by streamers and their audience to express emotions in the chat. Emotes are comparable to emojis, although many of them are personalized.
References
Basili, V.R.: Goal, question, metric paradigm. Encycl. Softw. Eng. 1, 528–532 (1994)
Bullington, J.: Affective computing and emotion recognition systems: the future of biometric surveillance? In: Proceedings of the 2nd Annual Conference on Information Security Curriculum Development, pp. 95–99 (2005)
Basili, V.R., Caldiera, G., Rombach, H.D.: The goal question metric approach. Encycl. Softw. Eng., 528–532 (1994)
Cohn, J.F., De la Torre, F.: Automated face analysis for affective computing (2015)
Davis, F.D.: Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q., 319–340 (1989)
Hu, P.J., Chau, P.Y., Sheng, O.R.L., Tam, K.Y.: Examining the technology acceptance model using physician acceptance of telemedicine technology. J. Manage. Inf. Syst. 16(2), 91–112 (1999)
Hunkeler, U., Truong, H.L., Stanford-Clark, A.: MQTT-S-a publish/subscribe protocol for wireless sensor networks. In: 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE’08), pp. 791–798. IEEE (2008)
Kobs, K., et al.: Emote-controlled: obtaining implicit viewer feedback through emote-based sentiment analysis on comments of popular twitch.tv channels. Trans. Soc. Comput. 3(2) (2020). https://doi.org/10.1145/3365523
Kukula, E.P., Elliott, S.J.: Evaluation of a facial recognition algorithm across three illumination conditions. IEEE Aerosp. Electron. Syst. Mag. 19(9), 19–23 (2004)
Laugwitz, B., Held, T., Schrepp, M.: Construction and evaluation of a user experience questionnaire. In: Holzinger, A. (ed.) USAB 2008. LNCS, vol. 5298, pp. 63–76. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89350-9_6
Likert, R.: A technique for the measurement of attitudes. Arch. Psychol., 136–165 (1932). https://books.google.com.br/books?id=9rotAAAAYAAJ
Lisetti, C.L., Rumelhart, D.E.: Facial expression recognition using a neural network. In: FLAIRS Conference, pp. 328–332 (1998)
Ma, L., Khorasani, K.: Facial expression recognition using constructive feedforward neural networks. IEEE Trans. Syst. Man Cybern. Part B Cybern. 34(3), 1588–1595 (2004)
McDuff, D., et al.: Affdex SDK: a cross-platform real-time multi-face expression recognition toolkit. In: Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems, pp. 3723–3726 (2016)
Mondragon, V.M., García-Díaz, V., Porcel, C., Crespo, R.G.: Adaptive contents for interactive TV guided by machine learning based on predictive sentiment analysis of data. Soft. Comput. 22(8), 2731–2752 (2018)
Mpiperis, I., Malassiotis, S., Strintzis, M.G.: Bilinear models for 3-D face and facial expression recognition. IEEE Trans. Inf. Forensics Secur. 3(3), 498–511 (2008)
Picard, R.W.: Affective Computing. MIT Press, Cambridge, MA, USA (1997)
Picard, R.W.: Affective computing for HCI. In: HCI, vol. 1, pp. 829–833. Citeseer (1999)
Revina, I., Emmanuel, W.S.: A survey on human face expression recognition techniques. J. King Saud Univ. Comput. Info. Sci. 33(6), 619–628 (2021). https://doi.org/10.1016/j.jksuci.2018.09.002
Schrepp, M., Hinderks, A., Thomaschewski, J.: Design and evaluation of a short version of the user experience questionnaire (UEQ-s). Int. J. Interact. Multimedia Artif. Intell. 4(6), 103–108 (2017)
Sirovich, L., Kirby, M.: Low-dimensional procedure for the characterization of human faces. J. Optical Soc. Am. A 4(3), 519–524 (1987)
Soares, L.F.G., Rodrigues, R.F., Moreno, M.F.: Ginga-NCL: the declarative environment of the Brazilian digital TV system. J. Braz. Comput. Soc. 13, 37–46 (2007)
Luo, J. (ed.): Affective Computing and Intelligent Interaction. AISC, vol. 137. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-27866-2
Turk, M.A., Pentland, A.P.: Face recognition using eigenfaces. In: Proceedings of the 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 586–587. IEEE Computer Society (1991)
Valentim, P.A., Barreto, F., Muchaluat-Saade, D.C.: Towards affective tv with facial expression recognition. In: Proceedings of the 1st Life Improvement in Quality by Ubiquitous Experiences Workshop. SBC (2021)
Vasilescu, M.A.O., Terzopoulos, D.: Multilinear image analysis for facial recognition. In: Object Recognition Supported by User Interaction for Service Robots, vol. 2, pp. 511–514 (2002)
Zhang, P.: The affective response model: a theoretical framework of affective concepts and their relationships in the ICT context. MIS Q., 247–274 (2013)
Acknowledgements
The authors would like to thank CAPES, CAPES PRINT, CNPq and FAPERJ for the partial financial support of this work.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Valentim, P., Muchaluat-Saade, D. (2024). Affective TV: Concepts of Affective Computing Applied to Digital Television. In: Marcus, A., Rosenzweig, E., Soares, M.M. (eds) Design, User Experience, and Usability. HCII 2024. Lecture Notes in Computer Science, vol 14716. Springer, Cham. https://doi.org/10.1007/978-3-031-61362-3_16
DOI: https://doi.org/10.1007/978-3-031-61362-3_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-61361-6
Online ISBN: 978-3-031-61362-3