A First Step in Using Machine Learning Methods to Enhance Interaction Analysis for Embodied Learning Environments

  • Conference paper
Artificial Intelligence in Education (AIED 2024)

Abstract

Investigating children's embodied learning in mixed-reality environments, where they collaboratively simulate scientific processes, requires analyzing complex multimodal data to interpret their learning and coordination behaviors. Learning scientists have developed Interaction Analysis (IA) methodologies for such data, but IA requires researchers to watch hours of video to extract and interpret students' learning patterns. Our study aims to simplify this task by using Machine Learning and Multimodal Learning Analytics to support the IA process. We combine machine learning algorithms with multimodal analyses to streamline researchers' efforts to build a comprehensive understanding of students' scientific engagement from their movements, gaze, and affective responses in a simulated scenario. To facilitate an effective researcher-AI partnership, we present an initial case study that assesses the feasibility of visually representing students' states, actions, gaze, affect, and movement on a timeline. The case study focuses on a science scenario in which students learn about photosynthesis. The timeline allows us to examine how critical learning moments identified by multimodal and interaction analysis align, and to uncover insights into students' temporal learning progressions.
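As an illustration of the timeline representation described above, the sketch below shows one way per-student gaze, affect, and movement labels could be resampled onto a shared session clock for side-by-side inspection. It is not the authors' implementation; the column names, label sets, and one-second resolution are assumptions made for the example.

import pandas as pd

def build_timeline(gaze: pd.DataFrame,
                   affect: pd.DataFrame,
                   movement: pd.DataFrame,
                   freq: str = "1s") -> pd.DataFrame:
    """Resample each stream to a common clock and join them column-wise.

    Each frame is assumed to carry a 'timestamp' column (seconds from the
    start of the session) plus one categorical label column:
      gaze['target'], affect['state'], movement['action'].
    """
    streams = {"gaze": (gaze, "target"),
               "affect": (affect, "state"),
               "movement": (movement, "action")}
    aligned = []
    for name, (df, col) in streams.items():
        series = (df.assign(timestamp=pd.to_timedelta(df["timestamp"], unit="s"))
                    .set_index("timestamp")[col]
                    .resample(freq)
                    .ffill()          # carry the last observed label forward
                    .rename(name))
        aligned.append(series)
    # Outer join on the shared clock, so gaps in any one stream stay visible
    return pd.concat(aligned, axis=1)

if __name__ == "__main__":
    # Toy stand-ins for model outputs (gaze target, affect label, detected action)
    gaze = pd.DataFrame({"timestamp": [0, 3, 7], "target": ["screen", "peer", "screen"]})
    affect = pd.DataFrame({"timestamp": [0, 5], "state": ["engaged", "confused"]})
    movement = pd.DataFrame({"timestamp": [1, 6], "action": ["walking", "pointing"]})
    print(build_timeline(gaze, affect, movement))

The resulting aligned table can then be rendered as stacked lanes on a single timeline and read alongside IA annotations of the same session.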

Notes

  1. None of the names used are the students' true names.

Acknowledgement

This work was supported by the following grants from the National Science Foundation (NSF): DRL-2112635, IIS-1908632 and IIS-1908791. The authors have no known conflicts of interest to declare. We would like to thank all of the students and teachers who participated in this work.

Author information

Corresponding author

Correspondence to Joyce Fonteles.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Fonteles, J. et al. (2024). A First Step in Using Machine Learning Methods to Enhance Interaction Analysis for Embodied Learning Environments. In: Olney, A.M., Chounta, I.A., Liu, Z., Santos, O.C., Bittencourt, I.I. (eds) Artificial Intelligence in Education. AIED 2024. Lecture Notes in Computer Science, vol 14830. Springer, Cham. https://doi.org/10.1007/978-3-031-64299-9_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-64299-9_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-64298-2

  • Online ISBN: 978-3-031-64299-9

  • eBook Packages: Computer Science, Computer Science (R0)
