HCI 2020: Copenhagen, Denmark
- Masaaki Kurosu:
Human-Computer Interaction. Multimodal and Natural Interaction - Thematic Area, HCI 2020, Held as Part of the 22nd International Conference, HCII 2020, Copenhagen, Denmark, July 19-24, 2020, Proceedings, Part II. Lecture Notes in Computer Science 12182, Springer 2020, ISBN 978-3-030-49061-4
Gesture-Based Interaction
- Shannon K. T. Bailey, Cheryl I. Johnson:
A Human-Centered Approach to Designing Gestures for Natural User Interfaces. 3-18
- João Bernardes:
Comparing a Mouse and a Free Hand Gesture Interaction Technique for 3D Object Manipulation. 19-37
- Bin Jiang, Xuewei Wang, Yue Wu:
Research on Gesture Interaction Design for Home Control Intelligent Terminals. 38-56
- Ahmed S. Khalaf, Sultan A. Alharthi, Ali Alshehri, Igor Dolgov, Phoebe O. Toups Dugas:
A Comparative Study of Hand-Gesture Recognition Devices for Games. 57-76
- Sara Nielsen, Lucca Julie Nellemann, Lars Bo Larsen, Kashmiri Stec:
The Social Acceptability of Peripheral Interaction with 3D Gestures in a Simulated Setting. 77-95
- Zhicheng Ren, Bin Jiang, Licheng Deng:
Research of Interactive Gesture Usability of Navigation Application Based on Intuitive Interaction. 96-105
- Kasper Rise, Ole Andreas Alsos:
Gesture-Based Interaction: Visual Gesture Mapping. 106-124
- Kasper Rise, Ole Andreas Alsos:
The Potential of Gesture-Based Interaction. 125-136
- Lora Streeter, John Gauch:
Detecting Gestures Through a Gesture-Based Interface to Teach Introductory Programming Concepts. 137-153
- Yutaro Suzuki, Kodai Sekimori, Yuki Yamato, Yusuke Yamasaki, Buntarou Shizuki, Shin Takahashi:
A Mouth Gesture Interface Featuring a Mutual-Capacitance Sensor Embedded in a Surgical Mask. 154-165
Speech, Voice, Conversation and Emotions
- Justin Cheng, Wenbin Zhou, Xingyu Lei, Nicoletta Adamo, Bedrich Benes:
The Effects of Body Gestures and Gender on Viewer's Perception of Animated Pedagogical Agent's Emotions. 169-186
- Panikos Heracleous, Yasser Mohammad, Akio Yoneyama:
Integrating Language and Emotion Features for Multilingual Speech Emotion Recognition. 187-196
- Félix Le Pailleur, Bo Huang, Pierre-Majorique Léger, Sylvain Sénécal:
A New Approach to Measure User Experience with Voice-Controlled Intelligent Assistants: A Pilot Study. 197-208
- Qinglin Liao, Shanshan Zhang, Mei Wang, Jia Li, Xinrong Wang, Xuemei Deng:
Comparing the User Preferences Towards Emotional Voice Interaction Applied on Different Devices: An Empirical Study. 209-220
- Yingying Miao, Wenqian Huang, Bin Jiang:
Research on Interaction Design of Artificial Intelligence Mock Interview Application Based on Goal-Directed Design Theory. 221-233
- Jianhong Qu, Ronggang Zhou, Liming Zou, Yanyan Sun, Min Zhao:
The Effect of Personal Pronouns on Users' Emotional Experience in Voice Interaction. 234-243
- Jacqueline Urakami, Sujitra Sutthithatip, Billie Akwa Moore:
The Effect of Naturalness of Voice and Empathic Responses on Enjoyment, Attitudes and Motivation for Interacting with a Voice User Interface. 244-259
- Chen Wang, Béatrice Biancardi, Maurizio Mancini, Angelo Cafaro, Catherine Pelachaud, Thierry Pun, Guillaume Chanel:
Impression Detection and Management Using an Embodied Conversational Agent. 260-278
- Qiang Zhang:
Expectation and Reaction as Intention for Conversation System. 279-289
- Bo Zhang, Lu Xiao:
Augmented Tension Detection in Communication: Insights from Prosodic and Content Features. 290-301
- Chenyang Zhang, Ronggang Zhou, Yaping Zhang, Yanyan Sun, Liming Zou, Min Zhao:
How to Design the Expression Ways of Conversational Agents Based on Affective Experience. 302-320
- Wenbin Zhou, Justin Cheng, Xingyu Lei, Bedrich Benes, Nicoletta Adamo:
Deep Learning-Based Emotion Recognition from Real-Time Videos. 321-332
Multimodal Interaction
- Emmanuel de Salis, Marine Capallera, Quentin Meteier, Leonardo Angelini, Omar Abou Khaled, Elena Mugellini, Marino Widmer, Stefano Carrino:
Designing an AI-Companion to Support the Driver in Highly Autonomous Cars. 335-349
- Minto Funakoshi, Shun Fujita, Kaori Minawa, Buntarou Shizuki:
SilverCodes: Thin, Flexible, and Single-Line Connected Identifiers Inputted by Swiping with a Finger. 350-362
- Priyanshu Gupta, Tushar Goswamy, Himanshu Kumar, K. S. Venkatesh:
A Defocus Based Novel Keyboard Design. 363-379
- Yang Jiao, Yingqing Xu:
Affective Haptics and Multimodal Experiments Research. 380-391
- Chutisant Kerdvibulvech:
Recent Multimodal Communication Methodologies in Phonology, Vision, and Touch. 392-400
- Ahmed S. Khalaf, Sultan A. Alharthi, Bill Hamilton, Igor Dolgov, Son Tran, Phoebe O. Toups Dugas:
A Framework of Input Devices to Support Designing Composite Wearable Computers. 401-427
- Mandy Korzetz, Romina Kühn, Lukas Büschel, Franz-Wilhelm Schumann, Uwe Aßmann, Thomas Schlegel:
Introducing Mobile Device-Based Interactions to Users: An Investigation of Onboarding Tutorials. 428-442
- Marleny Luque Carbajal, M. Cecília C. Baranauskas:
Multimodal Analysis of Preschool Children's Embodied Interaction with a Tangible Programming Environment. 443-462
- Takuto Nakamura, Buntarou Shizuki:
Identification Method of Digits for Expanding Touchpad Input. 463-474
- Arshad Nasser, Taizhou Chen, Can Liu, Kening Zhu, P. V. M. Rao:
FingerTalkie: Designing a Low-Cost Finger-Worn Device for Interactive Audio Labeling of Tactile Diagrams. 475-496
- Matthew Peveler, Jeffrey O. Kephart, Xiangyang Mou, Gordon Clement, Hui Su:
A Virtual Mouse Interface for Supporting Multi-user Interactions. 497-508
- Alen Salkanovic, Ivan Stajduhar, Sandi Ljubic:
Floating Hierarchical Menus for Swipe-Based Navigation on Touchscreen Mobile Devices. 509-522
- Yuta Takayama, Yuu Ichikawa, Takumi Kitagawa, Song Shengmei, Buntarou Shizuki, Shin Takahashi:
Touch Position Detection on the Front of Face Using Passive High-Functional RFID Tag with Magnetic Sensor. 523-531
Human Robot Interaction
- Sebastian Büttner, Rami Zaitoon, Mario Heinz, Carsten Röcker:
One-Hand Controller for Human-Drone Interaction - a Human-Centered Prototype Development. 535-548
- Piercosma Bisconti Lucidi, Susanna Piermattei:
Sexual Robots: The Social-Relational Approach and the Concept of Subjective Reference. 549-559
- Hans-Jürgen Buxbaum, Sumona Sen, Ruth Häusler:
Theses on the Future Design of Human-Robot Collaboration. 560-579
- André Diogo, Hande Ayanoglu, Júlia Teles, Emília Duarte:
Trust on Service Robots: A Pilot Study on the Influence of Eyes in Humanoid Robots During a VR Emergency Egress. 580-591
- Peter Forbrig, Alexandru-Nicolae Bundea:
Modelling the Collaboration of a Patient and an Assisting Humanoid Robot During Training Tasks. 592-602
- John R. Grosh, Michael A. Goodrich:
Multi-human Management of Robotic Swarms. 603-619
- Akihiro Hamada, Atsuro Sawada, Jin Kono, Masanao Koeda, Katsuhiko Onishi, Takashi Kobayashi, Toshinari Yamasaki, Takahiro Inoue, Hiroshi Noborio, Osamu Ogawa:
The Current Status and Challenges in Augmented-Reality Navigation System for Robot-Assisted Laparoscopic Partial Nephrectomy. 620-629
- Roland Hausser:
Database Semantics for Talking Autonomous Robots. 630-643
- Yushun Kajihara, Peeraya Sripian, Feng Chen, Midori Sugaya:
Emotion Synchronization Method for Robot Facial Expression. 644-653
- Lisanne Kremer, Sumona Sen, Monika Eigenstetter:
Human-Robot Interaction in Health Care: Focus on Human Factors. 654-667
- Andreas Mallas, Michalis Xenos, Maria Rigou:
Evaluating a Mouse-Based and a Tangible Interface Used for Operator Intervention on Two Autonomous Robots. 668-678
- Mitsuharu Matsumoto:
On Positive Effect on Humans by Poor Operability of Robot. 679-687
- Anna C. S. Medeiros, Photchara Ratsamee, Yuki Uranishi, Tomohiro Mashita, Haruo Takemura:
Human-Drone Interaction: Using Pointing Gesture to Define a Target Object. 688-705
- Tracy Pham, Dante Tezza, Marvin Andujar:
Enhancing Drone Pilots' Engagement Through a Brain-Computer Interface. 706-718
- Sumona Sen, Hans-Jürgen Buxbaum, Lisanne Kremer:
The Effects of Different Robot Trajectories on Situational Awareness in Human-Robot Collaboration. 719-729