Leveraging Sign Language Processing with Formal SignWriting and Deep Learning Architectures | SpringerLink

Leveraging Sign Language Processing with Formal SignWriting and Deep Learning Architectures

  • Conference paper
Intelligent Systems (BRACIS 2023)

Abstract

Advances in sign language processing have not kept pace with the tremendous progress made in oral language processing. This gap motivates research into the use of deep learning models for sign language processing. In this paper, we present a method that uses deep learning to build a latent, generalizable representation space for signs, leveraging Formal SignWriting notation and the concept of sentence-based representation to address sign language tasks such as sign classification. Extensive experiments demonstrate the potential of this method, which achieves an average accuracy of \(81\%\) on a subset of 70 signs with only 889 training samples and \(69\%\) on a subset of 338 signs with 3,871 training samples.
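As a rough illustration of the idea of a latent representation space for signs, the sketch below embeds FSW symbol codes, mean-pools them into one vector per sign, and classifies by cosine similarity to class prototypes. Everything here is a hypothetical stand-in, not the authors' architecture: the symbol vocabulary, the class labels HOUSE and GIRL, the prototype classifier, and the random embeddings (which a trained deep model would learn instead).

```python
import random

random.seed(0)

# Hypothetical FSW symbol vocabulary and random embeddings; in the actual
# model these vectors would be learned from data, not drawn at random.
vocab = ["S14c20", "S27106", "S20500"]
dim = 16
emb = {s: [random.gauss(0, 1) for _ in range(dim)] for s in vocab}

def sign_vector(symbols):
    """Mean-pool the symbol embeddings into one latent vector per sign."""
    vecs = [emb[s] for s in symbols]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# Placeholder class prototypes: one latent vector per sign class.
prototypes = {
    "HOUSE": sign_vector(["S14c20", "S27106"]),
    "GIRL": sign_vector(["S20500"]),
}

def classify(symbols):
    """Predict the class whose prototype is closest in cosine similarity."""
    v = sign_vector(symbols)
    return max(prototypes, key=lambda c: cosine(prototypes[c], v))

print(classify(["S14c20", "S27106"]))  # → HOUSE
```

The sketch only shows the data flow from a symbol sequence to a predicted class; in the paper a deep network learns the embeddings and the classifier jointly.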

The authors of this work would like to thank the Center for Artificial Intelligence (C4AI-USP) and the support from the São Paulo Research Foundation (FAPESP grant #2019/07665-4) and from the IBM Corporation. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.


Notes

  1. https://github.com/fernandoafreitas/bracis_2023_signwriting.

  2. Translation to English: “The girl who fell off the bike? ... She is in the hospital!”.

  3. Mapping to English words: <GIRL FELL BIKE> ... <SHE THERE HOSPITAL>.

  4. Position symbols were not found in the databases used in this paper.

  5. https://www.signbank.org/.

  6. Detailed information about the regular expression is available at https://datatracker.ietf.org/doc/html/draft-slevinski-signwriting-text-05#section-2.3.

  7. Accessed on 2023-01-31.

  8. The same sign can be represented in multiple ways, either because of slight variations in how the gesture is executed or because contextual factors are taken into account during signing.

  9. https://www.tensorflow.org/.
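Footnote 6 points to the FSW grammar defined in the IETF draft. As a sketch of how such a regular expression can tokenize an FSW sign string, the pattern below is an adaptation of the draft's structure (sort prefix, sign box marker, placed symbols), not the normative grammar, and the example string is syntactically valid but made up:

```python
import re

# Regex fragments adapted from the FSW draft (draft-slevinski-signwriting-text,
# section 2.3); treat the exact pattern as an illustrative assumption.
SYMBOL = r"S[123][0-9a-f]{2}[0-5][0-9a-f]"   # symbol key: base, fill, rotation
COORD = r"[0-9]{3}x[0-9]{3}"                 # x/y coordinate pair

SIGN = re.compile(
    rf"(A(?:{SYMBOL})+)?"          # optional temporal-order prefix
    rf"([BLMR])({COORD})"          # sign box marker and its coordinate
    rf"((?:{SYMBOL}{COORD})*)"     # placed symbols: symbol key + position
)

def tokenize_fsw(sign: str):
    """Split one FSW sign into (symbol, x, y) triples."""
    m = SIGN.fullmatch(sign)
    if m is None:
        raise ValueError(f"not a valid FSW sign: {sign!r}")
    placed = m.group(4) or ""
    return [
        (sym, int(x), int(y))
        for sym, x, y in re.findall(
            rf"({SYMBOL})([0-9]{{3}})x([0-9]{{3}})", placed
        )
    ]

# A syntactically valid (but invented) sign with two placed symbols:
print(tokenize_fsw("M518x529S14c20481x471S27106503x489"))
# [('S14c20', 481, 471), ('S27106', 503, 489)]
```

Tokenizing each sign into discrete symbol keys in this way is one plausible route from raw FSW strings to the token sequences a deep learning model can consume.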


Author information

Correspondence to Fernando de Almeida Freitas.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

de Almeida Freitas, F., Peres, S.M., de Paula Albuquerque, O., Fantinato, M. (2023). Leveraging Sign Language Processing with Formal SignWriting and Deep Learning Architectures. In: Naldi, M.C., Bianchi, R.A.C. (eds) Intelligent Systems. BRACIS 2023. Lecture Notes in Computer Science, vol. 14197. Springer, Cham. https://doi.org/10.1007/978-3-031-45392-2_20


  • DOI: https://doi.org/10.1007/978-3-031-45392-2_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45391-5

  • Online ISBN: 978-3-031-45392-2

  • eBook Packages: Computer Science, Computer Science (R0)
