Abstract
Advances in autonomy have the potential to reshape the landscape of the modern world. Yet research on human-machine interaction is needed to better understand the dynamic exchanges required between humans and machines in order to optimize human reliance on novel technologies. A key aspect of that exchange involves the notion of transparency, as humans and machines require shared awareness and shared intent for optimal teamwork. Questions remain, however, regarding how to represent information in order to generate shared awareness and shared intent in a human-machine context. The current paper reviews a recent model of human-robot transparency and proposes a number of methods for fostering transparency between humans and machines.
© 2014 Springer International Publishing Switzerland
Cite this paper
Lyons, J.B., Havig, P.R. (2014). Transparency in a Human-Machine Context: Approaches for Fostering Shared Awareness/Intent. In: Shumaker, R., Lackey, S. (eds) Virtual, Augmented and Mixed Reality. Designing and Developing Virtual and Augmented Environments. VAMR 2014. Lecture Notes in Computer Science, vol 8525. Springer, Cham. https://doi.org/10.1007/978-3-319-07458-0_18
Print ISBN: 978-3-319-07457-3
Online ISBN: 978-3-319-07458-0