A Plan for Developing an Auslan Communication Technologies Pipeline | SpringerLink
A Plan for Developing an Auslan Communication Technologies Pipeline

  • Conference paper
Computer Vision – ECCV 2020 Workshops (ECCV 2020)

Abstract

AI techniques for mainstream spoken languages have seen a great deal of progress in recent years, with technologies for transcription, translation and text processing becoming commercially available. However, no such technologies have been developed for sign languages, which, as visual-gestural languages, require multimodal processing approaches. This paper presents a plan to develop an Auslan Communication Technologies Pipeline (Auslan CTP), a prototype AI system enabling Auslan-in, Auslan-out interactions, to demonstrate the feasibility of Auslan-based machine interaction and language processing. Such a system has a range of applications, including gestural human-machine interfaces, educational tools, and translation.
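The abstract describes an Auslan-in, Auslan-out pipeline built from recognition, language-processing, and production components. As a purely illustrative sketch (the stage names, data types, and behaviours below are hypothetical placeholders, not taken from the paper), such a pipeline could be modelled as three independently swappable stages:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SignSegment:
    """A recognised sign, represented by a written gloss label."""
    gloss: str
    confidence: float


def recognise(frames: List[bytes]) -> List[SignSegment]:
    """Stub recognition stage: raw video frames to sign glosses.

    A real system would apply multimodal models (hands, face, body)
    here; this stub just returns a fixed gloss for illustration.
    """
    return [SignSegment(gloss="HELLO", confidence=0.9)]


def process(segments: List[SignSegment]) -> List[str]:
    """Stub language-processing stage: keep confident glosses only."""
    return [seg.gloss for seg in segments if seg.confidence > 0.5]


def produce(glosses: List[str]) -> str:
    """Stub production stage: glosses to a signing-output script."""
    return " ".join(glosses)


def pipeline(frames: List[bytes]) -> str:
    """Compose the three stages: Auslan-in, Auslan-out."""
    return produce(process(recognise(frames)))


# Example: one dummy frame flows through all three stages.
result = pipeline([b"frame0"])  # -> "HELLO"
```

Keeping each stage behind a plain function boundary would let individual components (e.g. a different recognition model) be replaced without touching the rest of the pipeline, which is the usual motivation for a modular design like the one the abstract outlines.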


Acknowledgements

Many thanks to the project mentors who provided guidance and feedback on this paper: Professor Trevor Johnston, Dr Adam Schembri, Dr Afshin Rahimi and Associate Professor Marcus Gallagher.

The research for this paper received funding from the Australian Government through the Defence Cooperative Research Centre for Trusted Autonomous Systems. The DCRC-TAS receives funding support from the Queensland Government.

Author information

Correspondence to Jessica Korte.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Korte, J., Bender, A., Gallasch, G., Wiles, J., Back, A. (2020). A Plan for Developing an Auslan Communication Technologies Pipeline. In: Bartoli, A., Fusiello, A. (eds) Computer Vision – ECCV 2020 Workshops. ECCV 2020. Lecture Notes in Computer Science(), vol 12536. Springer, Cham. https://doi.org/10.1007/978-3-030-66096-3_19

  • DOI: https://doi.org/10.1007/978-3-030-66096-3_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-66095-6

  • Online ISBN: 978-3-030-66096-3

  • eBook Packages: Computer Science, Computer Science (R0)
