5th ICMI 2003: Vancouver, British Columbia, Canada
- Sharon L. Oviatt, Trevor Darrell, Mark T. Maybury, Wolfgang Wahlster:
Proceedings of the 5th International Conference on Multimodal Interfaces, ICMI 2003, Vancouver, British Columbia, Canada, November 5-7, 2003. ACM 2003, ISBN 1-58113-621-8
- Anil K. Jain:
Multimodal user interfaces: who's the user? 1
- Sandra P. Marshall:
New techniques for evaluating innovative interfaces with eye tracking. 2
- Charles Spence:
Crossmodal attention and multisensory integration: implications for multimodal interface design. 3
Joint session with UIST
- Saied Bozorgui-Nesbat:
A system for fast, full-text entry for small electronic devices. 4-11
- Edward C. Kaiser, Alex Olwal, David McGee, Hrvoje Benko, Andrea Corradini, Xiaoguang Li, Philip R. Cohen, Steven Feiner:
Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality. 12-19
Attention and integration
- Eric Horvitz, Johnson Apacible:
Learning and reasoning about interruption. 20-27
- Sebastian Lang, Marcus Kleinehagenbrock, Sascha Hohenner, Jannik Fritsch, Gernot A. Fink, Gerhard Sagerer:
Providing the basis for human-robot-interaction: a multi-modal attention system for a mobile robot. 28-35
- Nuria Oliver, Eric Horvitz:
Selective perception policies for guiding sensing and computation in multimodal systems: a comparative analysis. 36-43
- Sharon L. Oviatt, Rachel Coulston, Stefanie Tomko, Benfang Xiao, Rebecca Lunsford, R. Matthews Wesson, Lesley Carmichael:
Toward a theory of organized multimodal integration patterns during human-computer interaction. 44-51
Haptics and biometrics
- Colin Swindells, Alex Unden, Tao Sang:
TorqueBAR: an ungrounded haptic feedback device. 52-59
- Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, Kristina Höök:
Towards tangibility in gameplay: building a tangible affective interface for a computer game. 60-67
- Robert Snelick, Mike Indovina, James Yen, Alan Mink:
Multimodal biometrics: issues in design and testing. 68-72
- Bernard D. Adelstein, Durand R. Begault, Mark R. Anderson, Elizabeth M. Wenzel:
Sensitivity to haptic-audio asynchrony. 73-76
- Michael Siracusa, Louis-Philippe Morency, Kevin W. Wilson, John W. Fisher III, Trevor Darrell:
A multi-modal approach for determining speaker location and focus. 77-80
- Ken Hinckley:
Distributed and local sensing techniques for face-to-face collaboration. 81-84
Multimodal architectures and frameworks
- Tracy L. Westeyn, Helene Brashear, Amin Atrash, Thad Starner:
Georgia tech gesture toolkit: supporting experiments in gesture recognition. 85-92
- Christian Elting, Stefan Rapp, Gregor Möhler, Michael Strube:
Architecture and implementation of multimodal plug and play. 93-100
- Norbert Reithinger, Jan Alexandersson, Tilman Becker, Anselm Blocher, Ralf Engel, Markus Löckelt, Jochen Müller, Norbert Pfleger, Peter Poller, Michael Streit, Valentin Tschernomas:
SmartKom: adaptive and flexible multimodal access to multiple applications. 101-108
- Frans Flippo, Allan Meng Krebs, Ivan Marsic:
A framework for rapid development of multimodal interfaces. 109-116
User tests and multimodal gesture
- Anoop K. Sinha, James A. Landay:
Capturing user tests in a multimodal, multidevice informal prototyping tool. 117-124
- Gaolin Fang, Wen Gao, Debin Zhao:
Large vocabulary sign language recognition based on hierarchical decision trees. 125-131
- Yingen Xiong, Francis K. H. Quek, David McNeill:
Hand motion gestural oscillations and multimodal discourse. 132-139
- Kai Nickel, Rainer Stiefelhagen:
Pointing gesture recognition based on 3D-tracking of face, hands and head orientation. 140-146
- Teresa Ko, David Demirdjian, Trevor Darrell:
Untethered gesture acquisition and recognition for a multimodal conversational system. 147-150
Speech and gaze
- Manpreet Kaur, Marilyn Tremaine, Ning Huang, Joseph Wilder, Zoran Gacovski, Frans Flippo, Chandra Sekhar Mantravadi:
Where is "it"? Event Synchronization in Gaze-Speech Input Systems. 151-158
- Darrell S. Rudmann, George W. McConkie, Xianjun Sam Zheng:
Eyetracking in cognitive state detection for HCI. 159-163
- Chen Yu, Dana H. Ballard:
A multimodal learning interface for grounding spoken language in sensory perceptions. 164-171
- Dominic W. Massaro:
A computer-animated tutor for spoken and written language learning. 172-175
- Peter Gorniak, Deb Roy:
Augmenting user interfaces with adaptive speech commands. 176-179
Posters
- Thomas Käster, Michael Pfeiffer, Christian Bauckhage:
Combining speech and haptics for intuitive and efficient navigation through image databases. 180-187
- Rowel Atienza, Alexander Zelinsky:
Interactive skills using active gaze tracking. 188-195
- Yeow Kee Tan, Nasser Sherkat, Tony Allen:
Error recovery in a blended style eye gaze and speech interface. 196-202
- Kristof Van Laerhoven, Nicolas Villar, Albrecht Schmidt, Gerd Kortuem, Hans-Werner Gellersen:
Using an autonomous cube for basic navigation and input. 203-210
- Andrew Wilson, Nuria Oliver:
GWindows: robust stereo vision for gesture-based control of windows. 211-218
- Peter Gorniak, Deb Roy:
A visually grounded natural language interface for reference to spatial scenes. 219-226
- Ravikrishna Ruddarraju, Antonio Haro, Kris Nagel, Quan T. Tran, Irfan A. Essa, Gregory D. Abowd, Elizabeth D. Mynatt:
Perceptual user interfaces using vision-based eye tracking. 227-233
- Yang Li, James A. Landay, Zhiwei Guan, Xiangshi Ren, Guozhong Dai:
Sketching informal presentations. 234-241
- Jiazhi Ou, Susan R. Fussell, Xilin Chen, Leslie D. Setlock, Jie Yang:
Gestural communication over video stream: supporting multimodal interaction for remote collaborative physical tasks. 242-249
- Pernilla Qvarfordt, Arne Jönsson, Nils Dahlbäck:
The role of spoken feedback in experiencing multimodal interfaces as human-like. 250-257
- Philipp Michel, Rana El Kaliouby:
Real time facial expression recognition in video using support vector machines. 258-264
- Benfang Xiao, Rebecca Lunsford, Rachel Coulston, R. Matthews Wesson, Sharon L. Oviatt:
Modeling multimodal integration patterns and performance in seniors: toward adaptive processing of individual differences. 265-272
- Mihaela A. Zahariev, Christine L. MacKenzie:
Auditory, graphical and haptic contact cues for a reach, grasp, and place task in an augmented environment. 273-276
- Chi-Ho Chan, Michael J. Lyons, Nobuji Tetsutani:
Mouthbrush: drawing and painting by hand and mouth. 277-280
- Kouichi Katsurada, Yusaku Nakamura, Hirobumi Yamada, Tsuneo Nitta:
XISL: a language for describing multimodal interaction scenarios. 281-284
- Daniel Bauer, James D. Hollan:
IRYS: a visualization tool for temporal analysis of multimodal interaction. 285-288
- Timothy J. Hazen, Eugene Weinstein, Alex Park:
Towards robust person recognition on handheld devices using face and speaker identification technologies. 289-292
- Sarkis Abrilian, Jean-Claude Martin, Stéphanie Buisine:
Algorithms for controlling cooperation between output modalities in 2D embodied conversational agents. 293-296
- Torsten Wilhelm, Hans-Joachim Böhme, Horst-Michael Gross:
Towards an attentive robotic dialog partner. 297-300
Demos
- Shahram Payandeh, John Dill, Graham Wilson, Hui Zhang, Lilong Shi, Alan J. Lomax, Christine L. MacKenzie:
Demo: a multi-modal training environment for surgeons. 301-302
- Ana Paiva, Rui Prada, Ricardo Chaves, Marco Vala, Adrian Bullock, Gerd Andersson, Kristina Höök:
Demo: playing FantasyA with senToy. 303-304