XentricAI: A Gesture Sensing Calibration Approach Through Explainable and User-Centric AI

  • Conference paper
  • First Online:
Explainable Artificial Intelligence (xAI 2024)

Abstract

Gesture recognition systems offering contactless human-machine interaction have diverse applications, from smart homes to healthcare. However, they often face challenges from unexpected changes in user behavior and from a lack of explainability, which is especially critical in fields such as medical diagnosis or security. To address these issues, we introduce a novel approach that exploits advances in Explainable Artificial Intelligence (AI) and Experience Replay techniques for human-centric AI in radar-based gesture sensing. Our contributions include model calibration via Transfer Learning using Experience Replay, and feedback on anomalous gestures through feature analysis with Explainable AI. Experimental results show improved accuracy, a low forgetting rate, and enhanced user engagement, suggesting the potential to foster trust in AI technology. Model calibration yields an average accuracy improvement of 5.4% over the uncalibrated model. Furthermore, leveraging the Explainable AI feedback to improve gesture execution yields a 38.1% average accuracy improvement compared to unguided user behavior.
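The calibration idea the abstract describes, fine-tuning on a new user while replaying stored examples from the original training distribution to keep the forgetting rate low, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the class and function names (`ReplayBuffer`, `make_calibration_batches`), the batch sizes, and the replay ratio are all hypothetical.

```python
import random

class ReplayBuffer:
    """Fixed-size store of (gesture, label) examples drawn from the
    base model's original training distribution (hypothetical sketch)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.samples = []

    def add(self, sample):
        # Evict a random stored sample once full (reservoir-style).
        if len(self.samples) >= self.capacity:
            self.samples.pop(random.randrange(len(self.samples)))
        self.samples.append(sample)

    def draw(self, k):
        return random.sample(self.samples, min(k, len(self.samples)))

def make_calibration_batches(new_user_data, buffer, batch_size, replay_ratio=0.5):
    """Interleave new-user gestures with replayed old examples, so that
    Transfer Learning adapts the model to the new user without
    catastrophically forgetting the earlier gesture classes."""
    n_replay = int(batch_size * replay_ratio)
    n_new = batch_size - n_replay
    batches = []
    for i in range(0, len(new_user_data), n_new):
        batch = new_user_data[i:i + n_new] + buffer.draw(n_replay)
        random.shuffle(batch)
        batches.append(batch)
    return batches
```

Each mixed batch would then be fed to an ordinary fine-tuning step; the replay fraction controls the trade-off between adapting to the new user and preserving previously learned behavior.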

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.

Author information

Correspondence to Sarah Seifi.

Appendix

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Seifi, S., Sukianto, T., Strobel, M., Carbonelli, C., Servadei, L., Wille, R. (2024). XentricAI: A Gesture Sensing Calibration Approach Through Explainable and User-Centric AI. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science, vol 2155. Springer, Cham. https://doi.org/10.1007/978-3-031-63800-8_12

  • DOI: https://doi.org/10.1007/978-3-031-63800-8_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63799-5

  • Online ISBN: 978-3-031-63800-8

  • eBook Packages: Computer Science; Computer Science (R0)
