
Real Time Static Gesture Detection Using Deep Learning

  • Conference paper
Big Data Analytics (BDA 2019)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11932)


Abstract

Sign gesture recognition is an important problem in human-computer interaction with significant societal impact. It is, however, a very complex task, since sign gestures are naturally deformable objects. Gesture recognition has posed unsolved problems for the last two decades, such as low accuracy and low speed, and despite many proposed methods, no approach has fully resolved them. In this paper, we propose a deep learning approach to translating sign gesture language into text. In this study, we introduce a self-generated image dataset for American Sign Language (ASL). The dataset covers 36 characters: the alphabets A to Z and the digits 0 to 9. The proposed system recognizes static gestures and can learn and classify the specific sign gestures of any person. A convolutional neural network (CNN) is proposed for classifying ASL images into text. An accuracy of 99% on the alphabet gestures and 100% on the digit gestures was achieved, the best accuracy compared to existing systems.
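The abstract describes a CNN that maps ASL images to 36 classes (the alphabets A to Z and the digits 0 to 9). As a minimal sketch, the code below shows only the 36-class label set and how a 36-dimensional network output could be decoded into a character; the paper's CNN architecture is not given on this page, so the `softmax` and `decode_gesture` helpers are illustrative assumptions, not the authors' implementation.

```python
import math

# 36 static-gesture classes: letters A-Z followed by digits 0-9,
# matching the class count stated in the abstract.
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + [str(d) for d in range(10)]

def softmax(logits):
    """Numerically stable softmax over a list of raw class scores."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decode_gesture(logits):
    """Map a 36-dimensional network output to its predicted character."""
    assert len(logits) == len(LABELS)
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))]
```

For example, an output vector whose largest entry is at index 0 decodes to "A", and one peaking at index 26 decodes to "0".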




Author information


Corresponding author

Correspondence to Kalpdrum Passi.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Passi, K., Goswami, S. (2019). Real Time Static Gesture Detection Using Deep Learning. In: Madria, S., Fournier-Viger, P., Chaudhary, S., Reddy, P. (eds) Big Data Analytics. BDA 2019. Lecture Notes in Computer Science, vol. 11932. Springer, Cham. https://doi.org/10.1007/978-3-030-37188-3_23


  • DOI: https://doi.org/10.1007/978-3-030-37188-3_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37187-6

  • Online ISBN: 978-3-030-37188-3

  • eBook Packages: Computer Science, Computer Science (R0)
