Abstract
Humans continuously interact with their environment and the entities within it, using language and physical expression to understand events and to be understood. These communication abilities are, with few exceptions, acquired naturally from birth. Unfortunately, some people face interaction difficulties because of disability or illness. To address these difficulties, researchers have been designing assistance robots that imitate human interaction across multiple modalities. Such a robot must interact with humans through the natural channels people use, such as speech, gestures, and eye movements, and it must be able to understand and execute the commands the user issues through these different modalities. To this end, we propose a smart system that uses a knowledge base to carry out the three tasks of "sensing-understanding-acting" in an ambient environment.
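As a rough illustration of such a sensing-understanding-acting pipeline, the Python sketch below fuses a spoken command with a pointing gesture through a toy knowledge base. All names here (SpeechEvent, GestureEvent, KNOWLEDGE_BASE, fuse) and the time-window heuristic are illustrative assumptions, not the paper's actual fusion engine.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SpeechEvent:
    text: str         # recognized utterance, e.g. "bring me that"
    timestamp: float  # seconds

@dataclass
class GestureEvent:
    target: str       # object the pointing gesture resolved to
    timestamp: float

# Toy knowledge base: verbs the robot understands and objects it knows about.
KNOWLEDGE_BASE = {
    "actions": {"bring": "FETCH", "open": "OPEN", "move": "RELOCATE"},
    "objects": {"cup", "door", "chair"},
}

def fuse(speech: SpeechEvent, gesture: GestureEvent,
         window: float = 2.0) -> Optional[dict]:
    """Fuse one spoken command with one deictic gesture.

    Sensing: keep only events that co-occur within `window` seconds.
    Understanding: resolve the verb and the pointed-at object via the
    knowledge base.
    Acting: emit a robot command, or None if fusion fails.
    """
    if abs(speech.timestamp - gesture.timestamp) > window:
        return None
    verb = next((w for w in speech.text.lower().split()
                 if w in KNOWLEDGE_BASE["actions"]), None)
    if verb is None or gesture.target not in KNOWLEDGE_BASE["objects"]:
        return None
    return {"action": KNOWLEDGE_BASE["actions"][verb], "object": gesture.target}

if __name__ == "__main__":
    cmd = fuse(SpeechEvent("bring me that", 10.1), GestureEvent("cup", 10.4))
    print(cmd)  # -> {'action': 'FETCH', 'object': 'cup'}
```

In an ontology-based engine the dictionary lookup would be replaced by reasoning over the ontology, but the three stages map onto the pipeline in the same way.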