Abstract
We have developed a one-handed character input method that allows a regular smartphone to be used as the input device for smart glasses. Each character is entered in two steps.
Japanese hiragana characters are divided into 10 groups of 5 characters each, and alphanumeric characters into 8 groups. In the first step, you select a group by rotating your thumb around its base. Placing your thumb on the left half of the touch screen and rotating it clockwise switches groups in ascending order; starting from the right half and rotating counterclockwise switches groups in descending order. The name of the current group is shown on the smart glasses, and the displayed group is selected when you lift your thumb from the touch screen.
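As an illustration of the first step, the sketch below tracks the angle that the touch point sweeps around a fixed pivot near the base of the thumb and advances or rewinds the group every 30 degrees of sweep. The pivot position, the 30-degree step, and all class and function names are assumptions made for this sketch; the abstract does not specify how the rotation is measured.

```kotlin
import kotlin.math.atan2

// Illustrative sketch only: pivot location and step size are assumed, not taken from the paper.
class GroupSelector(
    private val groups: List<String>,          // e.g. the 10 hiragana groups
    private val stepDeg: Double = 30.0         // assumed sweep needed to change group
) {
    private var index = 0
    private var lastAngle: Double? = null
    private var accumulated = 0.0

    // Angle of the touch point around the pivot, in degrees (screen coordinates, y grows downward).
    private fun angle(x: Double, y: Double, pivotX: Double, pivotY: Double): Double =
        Math.toDegrees(atan2(y - pivotY, x - pivotX))

    /** Feed every touch-move sample; returns the group currently displayed on the glasses. */
    fun onMove(x: Double, y: Double, pivotX: Double = 0.0, pivotY: Double = 160.0): String {
        val a = angle(x, y, pivotX, pivotY)
        lastAngle?.let { prev ->
            var d = a - prev
            if (d > 180.0) d -= 360.0          // unwrap across the ±180° boundary
            if (d < -180.0) d += 360.0
            accumulated += d
            while (accumulated >= stepDeg) {   // clockwise sweep -> next group (ascending)
                index = (index + 1) % groups.size
                accumulated -= stepDeg
            }
            while (accumulated <= -stepDeg) {  // counterclockwise sweep -> previous group (descending)
                index = (index - 1 + groups.size) % groups.size
                accumulated += stepDeg
            }
        }
        lastAngle = a
        return groups[index]
    }

    /** Lifting the thumb confirms the displayed group. */
    fun onRelease(): String {
        lastAngle = null
        accumulated = 0.0
        return groups[index]
    }
}

fun main() {
    val sel = GroupSelector(listOf("あ", "か", "さ", "た", "な", "は", "ま", "や", "ら", "わ"))
    sel.onMove(60.0, 100.0)
    println(sel.onMove(100.0, 150.0))  // ~39° clockwise sweep -> prints "か"
    println(sel.onRelease())           // lifting the thumb confirms "か"
}
```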
In the second step, one character in the group is selected by a tap or a flick. This operation is the same as the well-known flick input, except that taps are accepted anywhere in the bottom half of the screen. Taps and flicks are distinguished by the movement on the touch screen, so a flick can start anywhere on the screen. In both steps, the touch position only needs to be roughly specified, so you do not need to look at your fingers to operate the device.
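The second step can be pictured with the sketch below, which assumes the standard flick layout (tap for the centre character; left, up, right, and down flicks for the remaining four) and treats any release whose displacement from the touch-down point stays under a threshold as a tap. The 40-pixel threshold and the function name are assumptions for illustration.

```kotlin
import kotlin.math.abs

/** group: the five characters of the selected group, e.g. "あいうえお". The threshold is an assumed value. */
fun selectCharacter(group: String, downX: Float, downY: Float,
                    upX: Float, upY: Float, thresholdPx: Float = 40f): Char {
    require(group.length == 5)
    val dx = upX - downX
    val dy = upY - downY
    // Small movement anywhere in the accepted area counts as a tap -> centre character.
    if (abs(dx) < thresholdPx && abs(dy) < thresholdPx) return group[0]
    return if (abs(dx) >= abs(dy)) {
        if (dx < 0) group[1] else group[3]   // flick left / right
    } else {
        if (dy < 0) group[2] else group[4]   // flick up / down (screen y grows downward)
    }
}

fun main() {
    println(selectCharacter("あいうえお", 100f, 300f, 102f, 303f))  // tap -> あ
    println(selectCharacter("あいうえお", 100f, 300f, 100f, 220f))  // upward flick -> う
}
```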
In an experiment with beginners, the average input speed of six subjects was 18.4 CPM (characters per minute) after each of them had entered about 250 characters, and the total error rate was 3.3%.
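For readers unfamiliar with the metrics, the sketch below spells out the conventional definitions of CPM and total error rate (corrected plus uncorrected errors over all inputs). The abstract does not state which exact formulas the experiment used, and the numbers in main() are hypothetical, not the paper's data.

```kotlin
// Conventional text-entry metrics; formulas and sample numbers are illustrative assumptions.
fun charactersPerMinute(characters: Int, elapsedSeconds: Double): Double =
    characters / (elapsedSeconds / 60.0)

fun totalErrorRatePercent(correct: Int, corrected: Int, uncorrected: Int): Double =
    100.0 * (corrected + uncorrected) / (correct + corrected + uncorrected)

fun main() {
    println(charactersPerMinute(250, 13.6 * 60))  // hypothetical: 250 characters in 13.6 min ≈ 18.4 CPM
    println(totalErrorRatePercent(290, 7, 3))     // hypothetical: 10 errors out of 300 inputs ≈ 3.3 %
}
```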
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yamada, T., Tanaka, T., Sagawa, Y. (2023). One-Handed Character Input Method Without Screen Cover for Smart Glasses that Does not Require Visual Confirmation of Fingertip Position. In: Kurosu, M., Hashizume, A. (eds) Human-Computer Interaction. HCII 2023. Lecture Notes in Computer Science, vol 14011. Springer, Cham. https://doi.org/10.1007/978-3-031-35596-7_39
DOI: https://doi.org/10.1007/978-3-031-35596-7_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-35595-0
Online ISBN: 978-3-031-35596-7
eBook Packages: Computer Science, Computer Science (R0)