Abstract
Chen et al. (2014) proposed the situation awareness-based agent transparency (SAT) model, a framework for improving human situation awareness and understanding of autonomous agents’ actions, intentions, goals, and reasoning. Research using the SAT model as a framework has traditionally presented transparency information in the visual modality. Presenting information exclusively in the visual modality can increase human operators’ cognitive load. Multiple Resource Theory suggests that offloading information to other modalities can reduce this load (Wickens 2002). One such modality is the tactile modality. Tactile displays, which use somatosensory stimulation, have been found useful for improving navigation performance with spatial information and for providing alerts. One of the current Army Modernization Priorities is the development of Next Generation Combat Vehicles (NGCV), which conceptually include both manned and unmanned vehicles. Here we present our work implementing tactile displays to enhance crew situation awareness and improve the understanding of agents in a simulated NGCV environment. Through a tactile belt, the operator will be provided spatial information for navigation as well as notifications and alerts about the agent’s status and actions. We hypothesize that integrating tactile displays in the vehicle will improve crew situation awareness and understanding of the agents, supporting effective interaction and task performance.
1 Introduction
In a typical, everyday driving task, a wealth of information is funneled to the driver’s visual and auditory sensory channels. The driver safely maneuvers the car using the visual channel to steer, monitor the system for alerts, observe hazards (e.g., lane departures, objects in the road, pedestrians, unsafe following distances), and read road signs. The auditory channel receives safety-related information such as horns, sirens, road noise (e.g., rumble strips), and vehicle noise; many drivers also use it for conversations and listening to the radio. Similarly, in military environments, Soldiers receive a wealth of information via the visual and auditory channels. In addition to the driving task, hazard avoidance, and noises from the environment, they receive visual and auditory communications about their environment. With such a large amount of visual and auditory stimuli, critical, mission-related information can easily be missed. In accordance with Multiple Resource Theory, information can be offloaded to another sensory channel to reduce cognitive workload (Wickens 2002), provided that channel is salient enough (i.e., has an adequate signal-to-noise ratio) to draw the driver’s attention. Research has indicated that tactile stimuli can be used to provide critical information to Soldiers in vehicles (Carlander and Eriksson 2006; Krausmann and White 2008).
Research on and application of tactile displays in vehicles have been explored for decades. One area of interest is the use of tactile displays for collision avoidance systems (e.g., forward collision warning, lane departure warning, lane change/merge warning, and blind spot detection). Such systems employ tactors integrated into the seat, accelerator pedal, and seat belt. Research on collision avoidance shows significantly faster braking responses and larger safety margins when tactile warning systems are used (Ho et al. 2006). Comparing tactile warnings with visual and auditory warnings, Scott and Gray (2008) likewise found significantly shorter response times. In another investigation, tactile warnings resulted in better localization of crash threats (Fitch et al. 2007). While the research above, conducted in simulated driving environments, did not specifically address tactile signal saliency, the ability to detect tactile signals could be an issue in real-world driving with vehicle motion and road noise. Krausmann and White (2008) addressed this issue and showed that participants were able to detect tactile signals on a ride motion simulator platform simulating a Bradley Fighting Vehicle and a High Mobility Multipurpose Wheeled Vehicle traversing a cross-country course or gravel road.
The development of Next Generation Combat Vehicles (NGCV) is one of the top modernization priorities of the Army. In support of this, the Combat Capabilities Development Command (CCDC) Army Research Laboratory (ARL) has created the Information for Mixed Squads (INFORMS) laboratory to study crew-agent interactions in a simulated environment. In this environment, up to 14 participants can control at least 6 manned and unmanned robotic vehicles. One of the roles of crew members is to manage robotic agents that aid in mission execution. The simulation environment allows experimenters to expose the manned vehicles and robotic agents to mobility hazards, traditional kinetic threats, and electromagnetic spectrum threats. To complete their missions, crew members must maintain situation awareness of the current status of the robotic agents as well as potential hazards. With such critical information, crew members can make decisions about assisting the robotic agents and guiding them to safety. The situation awareness-based agent transparency (SAT) model is a framework for improving the crew’s situation awareness and understanding of the robotic agents (Chen et al. 2014). Prior research using SAT model-inspired interfaces (Stowers et al. 2016) has generally presented transparency information on visual displays. Because the visual channel is used heavily in vehicle crew stations, managing robotic agents based on primarily visual information can be problematic, as the crew is easily overloaded by unimodal stimulation.
The Army has long been interested in using tactile displays to reduce cognitive workload and provide Soldiers the situation awareness required to successfully execute their missions. Past ARL research has shown that tactile communication and multimodal displays are effective for land navigation and for reducing dismounted Soldiers’ cognitive workload (Coovert et al. 2007; Elliott et al. 2007, 2010; Pettitt et al. 2006; White 2010). In addition, tactile cueing was found to benefit operators performing military and robotics tasks in a multi-tasking environment (Chen and Terrence 2008, 2009). Based on the aforementioned research, we hypothesize that tactile displays will effectively provide information that improves crew situation awareness and understanding of robotic agents.
2 Methods
2.1 The INFORMS Laboratory
The INFORMS laboratory is one of the new CCDC ARL-funded research facilities designed to house concept development and prototyping of the NGCV Warfighter Machine Interface (WMI). The objective is to rapidly bring research concepts and best practices to our transition partners to improve the development of vehicle interfaces. Within the INFORMS laboratory, the WMI Manned-Unmanned Experimentation Laboratory, Simulation in the Loop (MEL-SIL) is being used as a testbed for research in human-autonomy teaming (see Fig. 1). The MEL-SIL comprises two 7-person NGCV crew station mockups. This setup allows ARL researchers to translate relevant research into demonstrable products that showcase novel human-machine interfaces, interactions, and teaming capabilities. We used the MEL-SIL as the platform and experimental setup for integrating tactile displays to enhance crew situation awareness and understanding of the agent’s actions and intents.
2.2 Tactile System
For tactile cueing, we integrated a system developed by Engineering Acoustics, Inc. (EAI) that consists of a tactile belt and a control unit. The belt, worn about the torso, contains 8 EAI C-2 tactors and is driven by the control unit (Fig. 2). The C-2 tactor is designed with a primary resonance in the 200–300 Hz range, which coincides with the peak sensitivity of the Pacinian corpuscles, the skin’s mechanoreceptors that sense vibration. This 8-tactor torso arrangement was initially designed to provide spatial information for navigation. The 8 tactors are arranged in the belt at 45° intervals, which can be associated with cardinal and intercardinal compass directions.
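To illustrate how the 45° arrangement can encode direction, the minimal Python sketch below maps a relative bearing to a tactor index. The indexing scheme (0 = front, increasing clockwise) is our assumption for the illustration, not EAI’s interface.

```python
# Minimal sketch: map a relative bearing (degrees clockwise from the
# wearer's front) to one of the 8 torso tactors spaced at 45-degree
# intervals. The indexing convention here is illustrative only.

TACTOR_COUNT = 8
SECTOR_WIDTH = 360 / TACTOR_COUNT  # 45 degrees per tactor

def bearing_to_tactor(bearing_deg: float) -> int:
    """Return the index (0 = front, increasing clockwise) of the
    tactor whose 45-degree sector contains the given bearing."""
    # Shift by half a sector so each tactor sits at the center of its sector.
    return int(((bearing_deg + SECTOR_WIDTH / 2) % 360) // SECTOR_WIDTH)

# Example: a threat 100 degrees to the wearer's right activates tactor 2.
assert bearing_to_tactor(100) == 2  # right side of the torso
assert bearing_to_tactor(355) == 0  # front
```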
The tactile control unit is connected via USB to the computer that runs the WMI. The WMI uses simulated on-vehicle sensors to gather information about the environment as well as the status of the robotic vehicle (e.g., vehicle location, vehicle status, threat location). This information will be live-streamed using the Lab Streaming Layer (LSL) to initiate tactile stimuli. LSL is a system for the unified collection of measurement time series in research experiments that handles networking, time synchronization, and (near) real-time data access. Information obtained from the WMI will be sent through LSL to the tactor control unit to activate specific tactile cues and messages.
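As a rough illustration of this pipeline, the Python sketch below publishes WMI events over LSL using the pylsl bindings and forwards them toward the belt. The stream name, event format, and send_tactor_command() helper are hypothetical placeholders; the actual WMI outputs and EAI control-unit interface are not shown here.

```python
# Sketch of the WMI -> tactor pipeline over Lab Streaming Layer (pylsl).
# 'WMI_Events', the event strings, and send_tactor_command() are
# hypothetical placeholders for this illustration.
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop

# WMI side: publish vehicle/threat events as irregular string markers.
info = StreamInfo(name='WMI_Events', type='Markers', channel_count=1,
                  nominal_srate=0, channel_format='string',
                  source_id='wmi_sim_01')
outlet = StreamOutlet(info)
outlet.push_sample(['RCV_ENTER_SUSPECTED_IED_AREA'])  # example event

# Tactile side: resolve the stream and forward events to the belt driver.
streams = resolve_byprop('name', 'WMI_Events', timeout=5.0)
inlet = StreamInlet(streams[0])
while True:
    sample, timestamp = inlet.pull_sample(timeout=1.0)
    if sample is not None:
        send_tactor_command(sample[0])  # hypothetical driver call
```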
2.3 Scenario and Tasks
The scenario used in the study involves a simulated reconnaissance mission in which the crew is tasked to operate a semi-autonomous robotic combat vehicle (RCV) through an environment containing a gradient of complexity and urbanization while maintaining situation awareness. Along the RCV’s route, the crew is also asked to mark targets by placing battle space objects (BSOs) at the appropriate map locations, communicate, and perform other tasks using the WMI. Different areas of the map are marked to indicate potential threats, both kinetic (e.g., improvised explosive devices, or IEDs) and electromagnetic (e.g., signal jammers) (shown in Fig. 3). The crew does not operate the RCV in isolation: this is a team-based scenario in which other crew members work in the same environment and can communicate with one another. For instance, if crew member #2 places a BSO within the proximity of crew member #1, an alert will be provided in the form of a tactile signal, depending on the urgency of the information.
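As an illustration of this proximity rule, the sketch below checks whether a newly placed BSO falls within an alert radius of another crew member and maps its urgency to an alert level. The 500 m radius and the urgency-to-level mapping are assumptions made for the sketch, not values from the study design.

```python
import math

# Illustrative sketch of the BSO proximity-alert rule described above.
# The 500 m radius and the urgency labels are assumptions.
ALERT_RADIUS_M = 500.0  # assumed proximity threshold

def bso_alert_level(bso_pos, crew_pos, urgency: str):
    """Return the non-spatial alert level (1-3) to deliver to a crew
    member when another crew member places a BSO nearby, or None if
    the BSO is out of range."""
    distance = math.dist(bso_pos, crew_pos)  # positions in meters (x, y)
    if distance > ALERT_RADIUS_M:
        return None
    return {'high': 1, 'medium': 2, 'low': 3}[urgency]

# Crew member #2 drops a high-urgency BSO 300 m from crew member #1.
assert bso_alert_level((0, 0), (300, 0), 'high') == 1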
In conveying information about the environment and alerts to the crew, we categorize tactile information as spatial or non-spatial. Spatial cues convey directional information, such as the location of the RCV relative to potential threats or the direction of the next waypoint. Non-spatial information comprises notifications, alerts, and warnings, such as when the RCV has encountered a mobility challenge or is about to enter a dangerous area. For spatial information, a single tactor is activated to convey the direction corresponding to the location of a target. For information that does not require a directional cue, a combination of tactors is activated simultaneously or in sequence to form a tactile pattern or message. Non-spatial tactile messages have been shown to be effective and intuitive, requiring little to no training to recognize and memorize, when the number of messages is fewer than five (Fitch et al. 2011). Thus, instead of designing a tactile message for every possible non-spatial event that could be communicated to the crew, we group non-spatial information into three levels based on the attention required and the urgency of the information. Level 1 refers to the most critical warnings that require immediate attention (e.g., the RCV encountering a severe mobility challenge, entering a different mission-relevant area, or entering close proximity to known threats in the environment); level 2 refers to intermediate-level warnings that require crew situation awareness (e.g., the RCV entering areas where threats are suspected); level 3 refers to general information that requires attention but poses no imminent threat to the mission or vehicle (e.g., RCV fuel low, incoming messages/communications). These three levels of non-spatial tactile information will comprise vibration patterns that vary in duration, frequency, amplitude/intensity, inter-pulse interval, and number of tactors activated. These tactile signal parameters will be designed and crowd-source tested for effectiveness and intuitiveness before implementation in the actual study.
The table below provides specific examples of the types of information to be conveyed through the tactile display.
| Information | Type |
| --- | --- |
| Direction to target or waypoint | Spatial |
| RCV getting stuck | Non-spatial (level 1) |
| RCV entering suspected IED area | Non-spatial (level 2) |
| Incoming message | Non-spatial (level 3) |
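To make the mapping concrete, the sketch below pairs the example events in the table with candidate vibration patterns. The pulse parameters are placeholders to be superseded by the crowd-source-tested values described above, and activate_pattern() stands in for a hypothetical control-unit call.

```python
# Sketch of an event-to-pattern lookup mirroring the table above. All
# pulse parameters are placeholder values, and activate_pattern() is a
# hypothetical driver call, not the EAI API.

TACTILE_PATTERNS = {
    # Level 1: all 8 tactors, rapid strong pulses (most urgent).
    'RCV_STUCK':          dict(tactors=tuple(range(8)), pulses=5, on_ms=100, off_ms=50,  gain=1.0),
    # Level 2: front three tactors, moderate pulses.
    'SUSPECTED_IED_AREA': dict(tactors=(7, 0, 1),       pulses=3, on_ms=200, off_ms=200, gain=0.7),
    # Level 3: single front tactor, slow gentle pulses (informational).
    'INCOMING_MESSAGE':   dict(tactors=(0,),            pulses=2, on_ms=300, off_ms=400, gain=0.4),
}

def display_event(event: str):
    """Look up and play the tactile pattern for a non-spatial event."""
    pattern = TACTILE_PATTERNS.get(event)
    if pattern is not None:
        activate_pattern(**pattern)  # hypothetical control-unit call
```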
2.4 Evaluation Metrics
Crew situation awareness and understanding of the RCV, as well as task performance, will be evaluated both qualitatively and quantitatively. For qualitative measures, SAGAT queries (Endsley 1988) will be used to assess crew situation awareness and understanding of RCV actions, intentions, goals, and general reasoning during the experiment. SAGAT queries are questions such as: Is the RCV experiencing mobility problems? Is the RCV near a danger zone? Where is the RCV’s next major turn? Where and what is the next threat along the route? Participants will be queried when completing the task both with and without the tactile belt. Questionnaires on tactile display usability will also be administered at the end of the experiment. Eye tracking data will be used to determine the crew’s ability to maintain 360-degree situation awareness and security over the RCV using its sensors. Crew performance on target marking will be evaluated with quantitative measures of accuracy and response time. Crew performance on RCV mobility will be quantified using the time between arrivals at rally points, total route completion time, and response time when the RCV requires mobility assistance.
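As a minimal illustration, the sketch below computes two of these mobility metrics from a timestamped event log; the event names and timestamps are invented for the example.

```python
# Illustrative computation of two quantitative mobility metrics from a
# timestamped event log. Event names and values are invented examples.
events = [
    (0.0,    'ROUTE_START'),
    (412.3,  'RALLY_POINT_1'),
    (815.9,  'RCV_NEEDS_ASSIST'),
    (842.1,  'CREW_ASSIST_GIVEN'),
    (1630.0, 'ROUTE_COMPLETE'),
]
t = {name: ts for ts, name in events}

route_completion_s = t['ROUTE_COMPLETE'] - t['ROUTE_START']          # 1630.0 s
assist_response_s  = t['CREW_ASSIST_GIVEN'] - t['RCV_NEEDS_ASSIST']  # 26.2 s
```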
3 Summary
As part of the Army’s modernization efforts, NGCVs will be crewed by humans responsible for coordinating complex maneuvers between multiple manned and unmanned vehicles. Crew members must have situation awareness and an understanding of the robotic agents in order to successfully execute their mission. The work described in this paper aims to ensure that crew members have such information via a tactile belt. Given the abundance of information provided to crew members’ visual and auditory channels, which can induce cognitive overload, the tactile channel is being explored as a potential means of communication. To investigate the potential advantages of tactile displays in the NGCV, we will collect both quantitative and qualitative data. We hypothesize that the integration of tactile displays will enhance crew members’ ability to manage robotic agents by improving their situation awareness and understanding of those agents. Findings of this work will be transitioned to our partners within the Army to inform the design of the crew interface in Next Generation Combat Vehicles.
References
Carlander, O., Eriksson, L.: Uni- and biomodal threat cueing with vibrotactile and 3D audio technologies in a combat vehicle. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 50(16), 1552–1556 (2006). https://doi.org/10.1177/154193120605001608
Chen, J.Y., Boyce, M., Procci, K., Wright, J., Garcia, A., Barnes, M.: Situation awareness-based agent transparency. ARL-TR-6905 (2014)
Chen, J.Y., Terrence, P.I.: Effects of tactile cueing on concurrent performance of military and robotics tasks in a simulated multi-tasking environment. Ergonomics 51(8), 1137–1152 (2008). https://doi.org/10.1080/00140130802030706
Chen, J.Y., Terrence, P.I.: Effects of imperfect automation and individual differences on concurrent performance of military and robotics tasks in a simulated multitasking environment. Ergonomics 52(8), 907–920 (2009). https://doi.org/10.1080/00140130802680773
Coovert, M.D., Gray, A.A., Elliott, L.R., Redden, E.S.: Development of a framework for multimodal research: creation of a bibliographic database. ARL-TR-4068 (2007)
Elliott, L.R., Duistermaat, M., Redden, E.S., Van Erp, J.: Multi-modal guidance for land navigation. ARL-TR-4295 (2007)
Elliott, L.R., van Erp, J.B., Redden, E.S., Duistermaat, M.: Field-based validation of a tactile navigation device. IEEE Trans. Haptics 3(2), 78–87 (2010). https://doi.org/10.1109/ToH.2010.3
Endsley, M.R.: Situation awareness global assessment technique (SAGAT). In: Proceedings of the IEEE 1988 National Aerospace and Electronics Conference, pp. 789–795 (1988)
Fitch, G.M., Hankey, J.M., Kleiner, B.M., Dingus, T.A.: Driver comprehension of multiple haptic seat alerts intended for use in an integrated collision avoidance system. Transp. Res. Part F 14, 278–290 (2011). https://doi.org/10.1016/j.trf.2011.02.001
Fitch, G.M., Kiefer, R.J., Hankey, J.M., Kleiner, B.M.: Toward developing an approach for alerting drivers to the direction of a crash threat. Hum. Factors 49(4), 710–720 (2007). https://doi.org/10.1518/001872007X215782
Ho, C., Reed, N., Spence, C.: Assessing the effectiveness of “intuitive” vibrotactile warning signals in preventing front-to-rear-end collisions in a driving simulator. Accid. Anal. Prev. 38, 988–996 (2006). https://doi.org/10.1016/j.aap.2006.04.002
Krausmann, A.S., White, T.L.: Detection and localization of vibrotactile signals in moving vehicles. ARL-TR-4463 (2008)
Pettitt, R.A., Redden, E.S., Carstens, C.B.: Comparison of army hand and arm signals to a covert tactile communication system in a dynamic environment. ARL-TR-3838 (2006)
Scott, J., Gray, R.: A comparison of tactile, visual and auditory warnings for rear-end collision prevention in simulated driving. Hum. Factors 50(2), 264–275 (2008). https://doi.org/10.1518/001872008X250674
Stowers, K., Kasdaglis, N., Newton, O., Lakhmani, S., Wohleber, R., Chen, J.: Intelligent agent transparency: the design and evaluation of an interface to facilitate human and intelligent agent collaboration. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 60, 1706–1710 (2016)
White, T.L.: Suitable body locations and vibrotactile cueing types for dismounted soldiers. ARL-TR-5186 (2010)
Wickens, C.D.: Multiple resources and performance prediction. Theor. Issues Ergon. Sci. 3, 159–177 (2002)