Abstract
With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user’s workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.
1 Introduction
In recent years, we have incorporated an increasing number of smart devices into our everyday lives. Many users employ a smartphone while on the go, a desktop computer in their offices, and, increasingly, an augmented (AR) or virtual reality (VR) head-mounted display (HMD) for immersing themselves in virtual worlds. Clearly, each device class possesses unique properties that make it suitable for different contexts and activities. Research has studied these devices not only individually, but also in combination. The use of homogeneous device combinations or modalities for interaction (e. g., tablets or smartphones) is often referred to as cross-device interaction [3]. Similarly, the simultaneous combination of heterogeneous devices or modalities for interaction (e. g., an AR HMD with a linked tablet) is referred to as hybrid user interfaces [5]. Importantly, hybrid user interfaces are characterized as “complementary […] technologies […] that take advantage of the strong points of each” [5].
Despite these potential opportunities, most of our workflows remain constrained to a single device: Instead of benefiting from an entire device ecology for a given task in a specific context, we often hesitate to incorporate more devices into our workflow and perform entire tasks on a single device, regardless of its suitability for each subtask [16]. The reasons for this constraint are numerous: to achieve an effective combination of devices for a particular task, many factors must be considered, such as the interaction affordances provided by each device, the continuity of data, or user representations. Therefore, much research has been dedicated to these multi-device ecologies. For example, the field of cross-device interaction [3] examines different constellations of splitting a task across different devices. Even for the nascent field of mixed reality, the area of hybrid user interfaces [5] argues for the use of HMDs in combination with more traditional devices (e. g., smartphones). So what makes these multi-device ecologies worthwhile? Simply adding more devices can be counterproductive, as they might not fit users’ workflows or current activities [16]. Indeed, in a successful multi-device ecology each device or interface component possesses complementary characteristics, filling a niche that was not suitably covered before.
Consider, for example, an immersive 3D data visualization in augmented reality [9] (see Figure 1): Using an AR HMD, users can immerse themselves in the virtual world and explore a 3D visualization through egocentric navigation. Although mid-air gestures and voice commands can be useful for some tasks, they can be cumbersome for precise interaction with the data itself, due to limited accuracy, physical strain, or a lack of directness. By adding a tablet for 2D interaction, we can complement the existing device with familiar touch input, thus allowing for more precise data manipulation conveniently constrained to a physical 2D plane.
In this work, we focus on the aspect of complementarity in novel user interfaces and introduce the concept of complementary interfaces. In the remainder of this article, we elaborate on the concept of complementary interfaces, provide a set of challenges, and illustrate the opportunities of complementary interfaces with examples from within our own research.
2 Complementary interfaces
Traditional desktop interfaces rely on complementary input devices (e. g., mouse and keyboard) to perform tasks such as pointing and text input. In contrast, many post-WIMP [23] and ubiquitous computing interfaces [24] such as smartphones and tablets are self-contained, trading complementary peripherals for the convenience of built-in touch interaction and a combined input and output space. However, as task complexity increases, single devices may no longer be sufficient to adequately support users in their workflows. For example, research has shown that alternative input modalities can benefit our interaction (e. g., by improving spatial memory [25] or decreasing cognitive load [28]).
Recent research streams in human-computer interaction, such as cross-device interaction [3], multimodal interaction [22], and hybrid user interfaces [5], can be seen as manifestations of Mark Weiser’s vision of the computer for the 21st century: “specialized elements of hardware and software, connected [...], will be so ubiquitous that no one will notice their presence.” [24]. The technological and methodological advances of the last decades allow researchers to design and evaluate new interaction paradigms beyond the boundaries of a single device and modality, leading to a variety of interface combinations that can be used seamlessly in concert. However, handling multiple devices can increase cognitive load [17] and incur high transaction costs [7], and users are often not aware of the benefits of including additional devices in their workflow [16].
Based on our own experiences in designing and evaluating multi-device and multi-modal environments, we believe that attributing unique roles, properties, and purposes to each device and modality can lead to a worthwhile combination of interfaces that can overcome the mentioned issues.
We call these meaningful combinations of devices and modalities complementary interfaces: By distributing interaction across devices and modalities, we establish a symbiosis of interfaces, where each component purposefully increases the quality of interaction and further supports users in their current activity. Complementary interfaces thus serve as an umbrella term that covers combinations of homogeneous (e. g., cross-device interaction) and heterogeneous (e. g., hybrid user interfaces) device classes, but also input (e. g., interaction techniques) and output modalities (e. g., visual or auditory). Importantly, complementary interfaces always feature some degree of heterogeneity in the involved components, which complement each other in supporting the overall system functionality for the task at hand. This heterogeneity may lie in the input or output modality, the location (e. g., screen space or input space), or the dimensionality of the data visualization (e. g., 2D or 3D).
Our notion of complementary interfaces has the potential to serve two purposes: (1) As a design framework, supporting designers in building and composing meaningful complementary interfaces; (2) as an evaluation framework, allowing researchers to study effects of meaningful combinations of complementary interfaces.
While our formal definition of complementary interfaces is still in a formative stage, we are currently exploring aspects of complementarity to better identify and quantify their characteristics.
3 Challenges for complementary interfaces
Based on our own experiences in developing and evaluating complementary interfaces, we identified an initial set of six challenges (C1–C6).
C1 – Loss of context and linking content
How can we maintain the user’s context, spatial memory, and world awareness when switching between devices? Here, a seamless transition between devices can be helpful (cf. [9]), but is especially hard to establish with heterogeneous devices and differing representations (e. g., 2D and 3D visualizations [8]). We aim to explore techniques for a lossless transfer of context across heterogeneous devices, for example by allowing users to create annotations or place visual markers in a visualization to highlight particular data points, which then persist across different devices. Can these techniques establish a mental connection between visually different representations of semantically identical content (e. g., a 2D visualization on the desktop and a 3D visualization in an immersive environment [8])? For example, the field of visual analytics uses techniques such as linking and brushing or multiple coordinated views to provide different views on the same data – we therefore want to investigate whether these techniques can be transferred to a more general use case. This may also facilitate communication in heterogeneous collaboration scenarios [15] by providing shared points of reference [14], regardless of the current visual representation or device.
One important aspect in this context may be the continuity of task-relevant data: While each device in the ecology has a distinct complementary purpose, we can redundantly provide task-relevant data on each device to help users keep and re-establish context when switching between devices.
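To illustrate how linked content could be kept consistent across such heterogeneous representations, the following minimal TypeScript sketch shows a shared selection model to which both a 2D desktop view and a 3D immersive view subscribe. The class, view, and identifier names are illustrative assumptions and not the implementation of any of the cited systems.

```typescript
// Minimal sketch: a shared selection model that keeps semantically identical
// content linked across heterogeneous views (e. g., a 2D desktop plot and a
// 3D immersive plot). Names and structure are illustrative assumptions.

type DataPointId = string;
type Listener = (selected: ReadonlySet<DataPointId>) => void;

class SharedSelection {
  private selected = new Set<DataPointId>();
  private listeners: Listener[] = [];

  // Any device (tablet, desktop, HMD) can brush data points ...
  brush(ids: DataPointId[]): void {
    ids.forEach((id) => this.selected.add(id));
    this.notify();
  }

  clear(): void {
    this.selected.clear();
    this.notify();
  }

  // ... and every registered view re-renders its own representation.
  subscribe(listener: Listener): void {
    this.listeners.push(listener);
    listener(this.selected);
  }

  private notify(): void {
    this.listeners.forEach((l) => l(this.selected));
  }
}

// Two views of the same data: each keeps its own visual encoding but
// highlights the same logical selection, preserving context across devices.
const selection = new SharedSelection();

selection.subscribe((ids) =>
  console.log(`2D desktop view: highlight ${ids.size} points in the scatter plot`));
selection.subscribe((ids) =>
  console.log(`3D immersive view: highlight ${ids.size} points in AR space`));

selection.brush(["p42", "p17"]); // brushing on one device updates both views
```

Because brushing on any device updates the same logical selection, switching devices does not lose the analytical context.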
C2 – Cost of switching
Switching our visual attention between different devices can incur a significant overhead [18]. This effect may be especially pronounced for mixed reality devices, as switching between, for example, a desktop screen and a VR HMD [8] is still cumbersome, despite improved device ergonomics. Here, we need to explore techniques that aid or eliminate these transitions. For example, instead of putting on a VR HMD to inspect data in 3D and taking it off again to make specific data selections at a desktop PC, a mixed reality HMD might better support this switch by using video see-through technology to allow a transition from VR to reality without taking off the headset. However, the high expense, instrumentation effort, and lack of comfort when wearing state-of-the-art HMDs hinder widespread adoption and prolonged use. We therefore aim to investigate the trade-off between less immersive yet more convenient (e. g., handheld AR) and more immersive but less convenient (e. g., VR HMD) XR devices, which could facilitate transitions between environments.
C3 – Attention awareness and adaptation
How can a proactive and contextual approach based on a combination of implicit interaction [19] and explicit input simplify user interaction and enable natural interaction? Devices in an environment should tune their attention to the user and adapt to the user’s needs and proficiency [11]. What a user is visually focusing on (e. g., gaze direction, location, orientation) and what skills or knowledge a user has (e. g., detected through cognitive load and arousal measures) should be used to adapt the content and interaction mechanisms and thereby complement explicit user input. Examples are displays that automatically select a language the user is familiar with [12], a reading interface that adapts the presentation speed to the user’s cognitive load [13], or even mutual adaptation scenarios [1]. However, capturing this data reliably, easily, and cheaply still poses a significant technical challenge. We aim to further investigate both low-cost hardware solutions and interface adaptations to reliably and effectively complement explicit actions based on the user’s (implicit) attention.
C4 – Consistent user experience
How can we provide a consistent user experience across heterogeneous devices while exploiting the strengths of each device? For example, while the desktop benefits from the familiarity and precision of a WIMP interface, a VR environment is better suited for 3D user interfaces [8]. However, this can lead to inconsistent interaction, which may increase the user’s mental demand. In contrast, reconstructing the interface for each device (e. g., emulating a desktop interface in VR by using 2D panels and pointing with VR controllers) may increase overall consistency, but can also lead to an inferior user experience. Similarly, exploring a VR scenario through a hand-held touchscreen device will necessarily involve different navigation and manipulation techniques compared to using an immersive HMD setup [15]. We aim to explore the impact of interface consistency and ways to gradually adapt it, for example by initially recreating a 2D desktop interface in VR and then gradually morphing it into a 3D interface, or by enabling users to trigger this transformation themselves.
C5 – Continuity of user representation
How can we consistently and continuously represent users across heterogeneous devices and different realities? For example, a desktop may represent the user as a mouse cursor, while a VR environment may show an avatar as the user representation [20]. Providing a continuous user representation may be particularly essential in multi-user scenarios, to help collaborators understand where other users are located, where their focus lies, and which interface (i. e., device) they are interacting through, as this will impact their abilities and behavior [15]. We aim to further investigate how we can support a continuous user representation when transitioning across different devices (e. g., across different tablets [15]) and realities (e. g., from reality to VR [8]), as well as the impact of such representations on user performance.
C6 – Overcoming legacy bias and finding suitable modalities
How can we motivate users to integrate multiple interactive components into their workflows? Although there may be clear advantages to engaging with multiple devices or modalities (e. g., [25], [27], [28]), users often still prefer to work as they are accustomed to. To overcome this legacy bias [16], complementary interfaces must integrate well with users’ current workflows, devices, and modalities, as these shape the way we interact. Here, we aim to investigate how we can best improve upon existing workflows by providing auxiliary complementary interfaces and carefully guiding users to benefit from each involved component [27]. As novel technologies (e. g., mixed reality HMDs) become more commonplace, this too will help to reduce legacy bias, as users will grow familiar with these devices and thus be more willing to employ them.
![Figure 1](/document/doi/10.1515/itit-2022-0031/asset/graphic/j_itit-2022-0031_fig_001.jpg)
Figure 1: STREAM combines spatially-aware tablets with augmented reality head-mounted displays for visual data analysis. Users can interact with 3D visualizations through a multimodal interaction concept, allowing for fluid interaction with the visualizations [9].
4 Examples of complementary interfaces
To further demonstrate the value of complementary interfaces, we take a look at examples from our own work, showcasing complementary interfaces that individual users can use with heterogeneous devices synchronously (Section 4.1) and asynchronously (Section 4.2). Additionally, we describe our work on collaborative complementary interfaces using homogeneous (Section 4.3) and heterogeneous (Section 4.4) device classes. Section 4.5 shows how a meaningful combination of implicit and explicit interaction techniques can lead to proficiency-aware interaction. Finally, Section 4.6 describes how complementary modalities can enrich and facilitate the perception of information.
4.1 STREAM: Synchronous use of heterogeneous devices
STREAM [9] combines an immersive AR HMD with a spatially-aware tablet to interact with a 3D visualization (see Figure 1). Here, the two heterogeneous device classes excel at complementary aspects: The AR HMD excels at viewing and interacting with the visualizations in 3D space, as it provides users with stereoscopic vision and allows for egocentric movement, further reinforcing depth perception. The tablet, on the other hand, provides familiar touch input with haptic feedback, allowing for direct interaction [2] with the 2D scatter plots. Through spatial awareness (i. e., the tablet is tracked with two HTC Vive Trackers), STREAM also enables spatial input: For example, users can rotate individual scatter plots in 3D space by physically rotating the tablet. Users can use both devices simultaneously, as the AR HMD does not block the user’s hands or view; even when the tablet is out of the user’s view, the familiar touch interaction as well as spatial awareness remain available through an eyes-free interaction concept. Due to the low cost of switching between devices (i. e., users only need to shift their visual attention), the devices are co-dependent – meaning that STREAM cannot be controlled by one device alone.
STREAM addresses the loss of context when switching between the AR and tablet visualization (C1) by providing a seamless transition interaction, thus reducing mental demand by merging the tablet and AR visualizations. However, due to the co-dependency between both devices, STREAM relies on a low cost of switching between devices (C2). This is further supported by an eyes-free interaction concept, allowing for interaction with the tablet without requiring the user’s visual attention: Each corner of the tablet contains a large button that is mapped to a single action. This is indicated to the user through an AR heads-up display, facilitating the execution of actions by touching the corresponding corner while relying on proprioception. However, we observed a legacy bias (C6) during periods of eyes-free interaction: users occasionally looked down at the tablet during touch interaction, indicating that they are still used to focusing on one device at a time.
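As a concrete illustration of the eyes-free corner mapping and the spatial input described above, the following minimal TypeScript sketch maps touches in the tablet’s corners to actions and applies the tracked tablet orientation to the selected plot. The action names, the corner margin, and the data structures are our own illustrative assumptions and are not taken from the published STREAM implementation.

```typescript
// Minimal sketch of STREAM-style eyes-free input: the tablet's four corners
// act as large buttons mapped to one action each, and the tracked tablet
// orientation is applied to the selected 3D scatter plot. Action names,
// thresholds, and the tracking source are illustrative assumptions.

type Corner = "topLeft" | "topRight" | "bottomLeft" | "bottomRight";

const cornerActions: Record<Corner, () => void> = {
  topLeft: () => console.log("undo last filter"),
  topRight: () => console.log("toggle axis labels"),
  bottomLeft: () => console.log("previous scatter plot"),
  bottomRight: () => console.log("next scatter plot"),
};

// Classify a touch point (normalized 0..1 tablet coordinates) into a corner.
function classifyTouch(x: number, y: number): Corner | null {
  const margin = 0.25; // large targets support proprioceptive, eyes-free use
  if (x < margin && y < margin) return "topLeft";
  if (x > 1 - margin && y < margin) return "topRight";
  if (x < margin && y > 1 - margin) return "bottomLeft";
  if (x > 1 - margin && y > 1 - margin) return "bottomRight";
  return null; // the centre of the screen is reserved for direct 2D interaction
}

// Spatial input: the externally tracked tablet pose drives the rotation
// of the currently selected plot.
function applyTabletRotation(
  plot: { rotation: [number, number, number] },
  trackedEuler: [number, number, number],
): void {
  plot.rotation = trackedEuler;
}

// Example: a touch near the lower right corner advances to the next plot.
const corner = classifyTouch(0.9, 0.92);
if (corner) cornerActions[corner]();
applyTabletRotation({ rotation: [0, 0, 0] }, [0, 45, 10]);
```

The large corner targets are what make the mapping usable through proprioception alone, which is why the AR heads-up display only needs to show the current corner-to-action assignment.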
![Figure 2](/document/doi/10.1515/itit-2022-0031/asset/graphic/j_itit-2022-0031_fig_002.jpg)
Figure 2: The ReLive mixed-immersion tool combines an immersive analytics virtual reality view (left) with a synchronized non-immersive visual analytics desktop view (right) for analyzing mixed reality studies. The virtual reality view allows users to relive and analyze prior studies in-situ, while the desktop facilitates an ex-situ analysis of aggregated data [8].
4.2 ReLive: Asynchronous use of heterogeneous devices
ReLive [8] bridges the gap between visual analytics approaches on the desktop and immersive analytics approaches in mixed reality by providing a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies (see Figure 2). ReLive combines two heterogeneous device classes – used asynchronously [10] – for complementary analysis workflows. On the one hand, the desktop interface allows for an ex-situ analysis, as the device excels at precise control, provides a high-resolution display, and uses familiar 2D visualizations suited for viewing aggregated data. In addition, users benefit from a cross-compatible environment as well as malleable components, allowing them to program their own components using the keyboard (cf. computational notebooks). On the other hand, a VR HMD complements the desktop view for in-situ analysis. Here, the VR view allows users to immerse themselves in the study and look at the data within its original environmental context. The immersion, egocentric navigation, and stereoscopic vision make this environment ideal for viewing and exploring 3D data. However, since users cannot use both devices at the same time, ReLive imposes no dependency between the two devices. As a result, users can complete the entire task on either device, reducing how often users need to switch between devices.
Due to its device combination, ReLive exhibits a significant cost of switching between devices (C2), which is mitigated by making both components independent, yet synchronized. However, despite this synchronization, users still experienced a loss of context in terms of spatial memory when switching between environments (C1). In addition, while visualizations are implicitly synchronized between desktop and VR, we aim to explore more explicit linking of content (C1), for example through cross-reality linking and brushing techniques. Lastly, ReLive makes a design tradeoff that employs 2D menu interaction in VR instead of more embodied interaction to guarantee a consistent user experience across devices (C4).
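To make the notion of independent yet synchronized components more tangible, the following minimal TypeScript sketch serializes a shared analysis session so that one client (e. g., the desktop) can store its state and the other (e. g., the VR view) can restore it later for asynchronous use. The field names and the JSON-based persistence are illustrative assumptions rather than ReLive’s actual data model.

```typescript
// Minimal sketch of an asynchronously shared analysis state, assuming a
// ReLive-like setup where desktop and VR clients are independent but
// synchronized: either client can serialize the session, and the other can
// restore it later to re-establish context. Field names are illustrative.

interface AnalysisSession {
  playbackTime: number;                      // position in the recorded study
  selectedEntities: string[];                // e. g., participants or logged objects
  annotations: { time: number; text: string }[];
}

function saveSession(session: AnalysisSession): string {
  return JSON.stringify(session);            // e. g., written to a shared file or server
}

function restoreSession(serialized: string): AnalysisSession {
  return JSON.parse(serialized) as AnalysisSession;
}

// The desktop client stores its ex-situ analysis state ...
const fromDesktop = saveSession({
  playbackTime: 12.5,
  selectedEntities: ["participant-3"],
  annotations: [{ time: 12.5, text: "check gaze target here" }],
});

// ... and the VR client later restores it for in-situ inspection.
const inVr = restoreSession(fromDesktop);
console.log(`Resuming at t=${inVr.playbackTime}s with ${inVr.annotations.length} annotation(s)`);
```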
![Figure 3](/document/doi/10.1515/itit-2022-0031/asset/graphic/j_itit-2022-0031_fig_003.jpg)
Figure 3: Our collaborative sensemaking environment allows users to individually work with their personal tablet and share their knowledge on a shared interactive tabletop (A). To investigate the meaningfulness of this complementary interface, we studied the influence of the size of the shared space (B–D) [26].
4.3 When tablets meet tabletops: Collaboration with homogeneous devices
![Figure 4](/document/doi/10.1515/itit-2022-0031/asset/graphic/j_itit-2022-0031_fig_004.jpg)
Figure 4: Supporting heterogeneous cross-device collaboration in VR through a handheld device and a head-mounted display. (A) The handheld user can create and highlight building blocks through their touchscreen interface, to guide the onlooking HMD user (orange humanoid avatar). (B) HMD users can manipulate objects (e. g., scale a cylinder) directly with their hands [15].
In this work [26], we studied collaborative sensemaking activities using personal tablets and a shared tabletop. Here, we combined two homogeneous devices for complementary collaborative sensemaking activities (see Figure 3). The tablets act as a private space, where each user can search, read, and annotate documents independently of their partner’s activity, facilitating loosely coupled activities. As collaborative sensemaking activities can be described as mixed-focus collaboration, where individuals constantly transition between individual and shared activities (i. e., coupling styles), we purposefully added a shared device for collaborative activities: Here, users can share their gained information, spatially arrange it, and use it as a starting point to discuss solution approaches with each other. To this end, we closely investigated the effect of the size of a shared tabletop on users’ interaction, communication, and awareness during cross-device mixed-focus collaboration. However, collaboration is only encouraged, not enforced. Potentially, each user can solve the task on their own, with minimal or no usage of the shared space at all. This is further supported by the lack of dependency between the incorporated devices – only the personal tablets are strictly necessary for reading documents, while the shared tabletop can be regarded as optional.
We addressed multiple challenges with our multi-device sensemaking tool: To support collaborative sensemaking activities, the two components of the system (personal tablets and shared space) needed to be seamlessly connected. Here, we carefully designed the interfaces to reduce the cost of switching (C2) between them. While it was possible to transfer content bidirectionally between the personal and shared space, it was also possible to directly send a document to the partner’s tablet. We used color coding to indicate each participant’s activities, which facilitated linking content (C1). Further, the spatial arrangement on the tabletop was visually highlighted by drawing convex hulls around clustered items; clusters could be formed by encircling items or by lifting them into or out of an existing hull. Color-coded bookmarks further supported the continuity of user representation (C5).
4.4 Cross-device collaboration in VR: Collaboration with heterogeneous devices
This work explores heterogeneous cross-device collaboration between a handheld VR device (i. e., a window into a virtual world) and a fully immersive VR HMD [15]. The HMD user is embodied by a human-sized avatar (see Figure 4 (A)), reflecting their ability to move via natural locomotion and manipulate objects with their virtual hands. In contrast, the handheld user is represented as a floating, box-shaped head (see Figure 4 (B)) to allow collaborators to discern the user’s direction of gaze. This enables interaction with different levels of immersion, hardware availability, and mobility, supporting scenarios where not all users have access to a full VR setup. In this work, the complementarity stems from the different roles given to the HMD and handheld users, which are based on their specific device characteristics: The HMD user is responsible for 3D object manipulation, as this benefits particularly from 3D spatial input. In contrast, the handheld user can act as a consultant, as this user can easily access real-world artifacts (e. g., blueprints) and easily switch between egocentric navigation and assuming the HMD user’s point of view.
This scenario allows us to reflect on two challenges. First, it highlights the continuity of user representation (C5) for collaboration. Due to the asymmetric devices (i. e., handheld and HMD), different modes of non-verbal communication must be considered. Both users are displayed as avatars regardless of their device, allowing for a consistent representation between users. Second, favoring each device’s strengths entails a tradeoff regarding the consistency of the user experience (C4), which may lead to additional confounding factors and may complicate communication (e. g., sharing interaction hints).
4.5 Proficiency-aware interfaces: Combining implicit and explicit interaction techniques
In our approach, we explored how a system can become aware of the user’s language proficiency. The display can provide content in different languages. Using gaze tracking, the user’s viewing and reading patterns are captured and analyzed. Based on this implicit input, an appropriate content representation is chosen [12]. This system is an example of a gaze-based proficiency-aware user interface [11]. This approach can be extended beyond languages (e. g., observing gaze patterns during manual tasks or while playing music) and to other physiological signals such as EEG. Here, we consider the complementarity of content and its presentation, and with this, the complementarity of implicit and explicit interactions. Instead of presenting different options to the user and requiring a manual switch, we created a system that performs this adaptation implicitly. This adaptation forms the basis for a natural interaction experience: the system offers an interaction that is tailored to the user and feels appropriate, without requiring explicit selection.
This combination of implicit and explicit interactions highlights the importance of attention awareness and adaptation (C3). It not only requires that the interactive system provides different content that can be adapted and visualized based on implicit user input, but also that user information (e. g., gaze movements) is gathered in an unobtrusive way.
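The following minimal TypeScript sketch illustrates the general pattern of combining an implicit proficiency estimate with explicit user input; the gaze features, thresholds, and decision rule are illustrative assumptions and not the classifier used in the cited work [12].

```typescript
// Minimal sketch of a proficiency-aware adaptation, assuming gaze features
// such as fixation duration and regression rate are already extracted.
// The features, thresholds, and decision rule are illustrative assumptions.

interface GazeFeatures {
  meanFixationMs: number;   // longer fixations can indicate reading difficulty
  regressionRate: number;   // fraction of backward saccades while reading
}

type Language = "en" | "de";

// Implicit input: estimate whether the currently shown language is hard to read.
function seemsDifficult(f: GazeFeatures): boolean {
  return f.meanFixationMs > 280 || f.regressionRate > 0.3;
}

// Combine the implicit estimate with an explicit override by the user.
function chooseLanguage(
  current: Language,
  fallback: Language,
  features: GazeFeatures,
  explicitChoice?: Language,
): Language {
  if (explicitChoice) return explicitChoice;             // explicit input always wins
  return seemsDifficult(features) ? fallback : current;  // otherwise adapt implicitly
}

console.log(chooseLanguage("de", "en", { meanFixationMs: 310, regressionRate: 0.35 }));
// -> "en": the display falls back to a language the user reads more fluently
```

Giving explicit input precedence over the implicit estimate is one simple way to keep the adaptation unobtrusive while leaving the user in control.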
4.6 Multimodal interfaces: Input and output
In the applications above, we showed different examples of interfaces in which information was presented mainly visually. However, in some cases we can enhance the representation of data by incorporating other sensory channels, such as audition or touch. A multimodal approach to data representation is especially advantageous when we need to convey several dimensions associated with the data (e. g., a quantitative measure and the uncertainty associated with it [6]). By presenting complex pieces of data only visually, we run the risk of overloading the representation, which may negatively impact the user’s ability to derive a meaningful interpretation of the information. Leveraging multiple sensory modalities for different data dimensions instead allows us to isolate and focus on specific aspects of the data while maintaining an awareness of the overall informational coherence. Since different aspects of the data are presented via separate perceptual channels, multimodal interfaces are intrinsically complementary: Users can attend to various fields of data simultaneously without needing to switch focus between devices or visualization windows (C2). Depending on the given application, devices dedicated to each sensory modality can be combined to meet specific representational requirements. For a multimodal representation to effectively convey the desired information, designers must take into account the underlying characteristics of the sensory modalities targeted by the interface (C6). Mapping spatio-temporal properties to a suitable sensory channel can provide a more intuitive, straightforward approach to conveying information. For example, given the high spatial resolution of the visual system compared to the other channels, a visual representation is the most appropriate for presenting spatially organized data. In contrast, the auditory system is the least suitable candidate for mapping such spatial information, since its resolution in this domain is limited. Yet, data sonification can take advantage of the auditory system’s much higher temporal resolution [21], for example by representing properties that quickly change over time using modulations in pitch or volume. Moreover, the assignment of a certain data dimension to a feature of a multisensory representation is often necessarily arbitrary. While some arbitrary mappings are so widely used that they have become conventional in visual representations (e. g., high and low numerical values are typically represented with warm and cold colors, respectively), novel correspondences may require users to learn the mappings before being able to use the application [4]. As a result, designers must ensure that users can effectively interpret the presented information with the given mappings.
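As a small worked example of such a complementary mapping, the following TypeScript sketch maps a normalized, rapidly changing data dimension (e. g., uncertainty) to pitch, leaving spatial structure to the visual channel. The value and frequency ranges are illustrative assumptions.

```typescript
// Minimal sketch of a complementary audio mapping: a rapidly changing data
// dimension (e. g., uncertainty over time) is mapped to pitch, while spatial
// structure stays visual. Value and frequency ranges are illustrative.

// Map a normalized value in [0, 1] to a frequency between 220 Hz and 880 Hz.
function valueToPitchHz(value: number, minHz = 220, maxHz = 880): number {
  const clamped = Math.min(1, Math.max(0, value));
  // Interpolate exponentially so equal value steps sound like equal pitch steps.
  return minHz * Math.pow(maxHz / minHz, clamped);
}

// Example: sonify a stream of uncertainty values sampled over time.
const uncertaintySamples = [0.1, 0.25, 0.6, 0.9, 0.4];
uncertaintySamples.forEach((u, i) =>
  console.log(`t=${i}: ${valueToPitchHz(u).toFixed(1)} Hz`));

// In a browser, the same mapping could drive a Web Audio oscillator, e. g.:
//   const ctx = new AudioContext();
//   const osc = ctx.createOscillator();
//   osc.connect(ctx.destination);
//   osc.start();
//   osc.frequency.setValueAtTime(valueToPitchHz(u), ctx.currentTime + i * 0.2);
```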
5 Outlook
Complementarity may play an essential role in the design of novel user interfaces, resulting in interactions involving several different technologies. In this paper, we discuss how such meaningful combinations of devices and modalities – forming a symbiosis of interfaces – contribute towards an increased quality of interaction. We introduce the term complementary interfaces to describe these meaningful combinations, highlighting the complementary roles of each component by taking “advantage of the strong points of each” [5]. Our notion of complementary interfaces can be used either as a design framework (e. g., supporting the identification of meaningful combinations) or as an evaluation framework (e. g., explaining and quantifying effects of meaningful combinations). In future work, we plan to further elaborate and formalize our notion of complementary interfaces, define a design space, and further address our presented challenges. Ultimately, we aim to quantify the meaningfulness of this symbiosis of interfaces by investigating and establishing metrics to, for example, quantify the redundancy and complementarity of input and output modalities in multi-device ecologies.
Funding source: Deutsche Forschungsgemeinschaft
Award Identifier / Grant number: 251654672
Funding statement: This research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 251654672 – TRR 161.
About the authors
Johannes Zagermann is a research assistant in the Human-Computer Interaction Group at the University of Konstanz. He studied Business Information Systems at the Hochschule Furtwangen University (BSc, 2010) and Information Engineering at the University of Konstanz and Linköping University (MSc, 2015). His research interests span the areas of cross-device interaction, multi-modal interaction, and hybrid user interfaces.
Sebastian Hubenschmid is a research assistant in the Human-Computer Interaction Group at the University of Konstanz. He studied Information Engineering and Computer Science at the University of Konstanz (BSc, 2017; MSc, 2019). In his research, he focuses on the use of transitional and hybrid user interfaces for immersive analytics.
Dr. Priscilla Balestrucci is a postdoctoral researcher at Ulm University. She studied neuroscience (PhD, 2017) and medical engineering (MSc, 2013; BSc, 2011) at the University of Rome “Tor Vergata”. Her research focuses on how information from multiple sensory systems and prior knowledge affect sensorimotor control.
TT.-Prof. Dr. Tiare Feuchtner has been an Assistant Professor at the Department of Computer Science of the University of Konstanz since 2021. She studied Computer Science at Aarhus University (PhD, 2018), TU Berlin (MSc, 2015), and TU Wien (BSc, 2011), was a visiting researcher at the EventLab in Barcelona, and a junior researcher at the Telekom Innovation Laboratories (T-Labs) in Berlin and the Austrian Institute of Technology (AIT) in Vienna.
Prof. Dr. Sven Mayer is an Assistant Professor of HCI at LMU Munich. In his research, he uses machine learning tools to design, build, and evaluate future human-centered interfaces. He focuses on hand- and body-aware interactions in contexts such as large displays, augmented and virtual reality, and mobile scenarios.
Prof. Dr. Marc O. Ernst has been Chair of the Department of Applied Cognitive Psychology at Ulm University since 2016. He studied physics in Heidelberg and Frankfurt/Main and received his PhD in 2000 from Tübingen University for his work on human visuomotor behavior at the Max Planck Institute for Biological Cybernetics. He was a visiting researcher at UC Berkeley (2000–2001), a research scientist and group leader at the MPI in Tübingen (2001–2010), and a professor at Bielefeld University (2011–2016).
Prof. Dr. Albrecht Schmidt received his Ph. D. degree in computer science from Lancaster University in 2002. He is currently a Professor at LMU Munich, where he holds the Chair for Human-Centered Ubiquitous Media in the Department of Informatics. His research encompasses the development and evaluation of augmented and virtual reality applications to enhance the user’s quality of life through technology. This includes the amplification of human cognition and physiology through novel technology.
Prof. Dr. Harald Reiterer received his Ph. D. degree in computer science from the University of Vienna in 1991. He is a Professor at the University of Konstanz, where he holds the Chair for Human-Computer Interaction in the Department of Computer Science. In 1995, the University of Vienna conferred on him the venia legendi (Habilitation) in Human-Computer Interaction. From 1990 to 1995 he was a visiting researcher at the GMD in St. Augustin/Bonn (now the Fraunhofer Institute for Applied Information Technology), and from 1995 to 1997 an Assistant Professor at the University of Vienna. His research interests include different fields of Human-Computer Interaction, such as Interaction Design, Usability Engineering, and Information Visualization.
Author contributions: Johannes Zagermann and Sebastian Hubenschmid contributed equally to this research.
References
1. Priscilla Balestrucci and Marc O. Ernst. Visuo-motor adaptation during interaction with a user-adaptive system. Journal of Vision, 19:187a, 2019. doi: 10.1167/19.10.187a.
2. Stefan Bruckner, Tobias Isenberg, Timo Ropinski, and Alexander Wiebel. A model of spatial directness in interactive visualization. IEEE Transactions on Visualization and Computer Graphics, 25(8):2514–2528, 2019. doi: 10.1109/TVCG.2018.2848906.
3. Frederik Brudy, Christian Holz, Roman Rädle, Chi-Jui Wu, Steven Houben, Clemens N. Klokmose, and Nicolai Marquardt. Cross-device taxonomy: Survey, opportunities and challenges of interactions spanning across multiple devices. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19), pages 1–28, Glasgow, Scotland, UK, 2019. ACM. doi: 10.1145/3290605.3300792.
4. Marc O. Ernst. Learning to integrate arbitrary signals from vision and touch. Journal of Vision, 7(5):7, 2007. doi: 10.1167/7.5.7.
5. Steven Feiner and Ari Shamash. Hybrid user interfaces: Breeding virtually bigger interfaces for physically smaller computers. In Proceedings of the 4th Annual ACM Symposium on User Interface Software and Technology (UIST '91), pages 9–17, Hilton Head, South Carolina, USA, 1991. ACM. doi: 10.1145/120782.120783.
6. Jochen Görtler, Christoph Schulz, Daniel Weiskopf, and Oliver Deussen. Bubble treemaps for uncertainty visualization. IEEE Transactions on Visualization and Computer Graphics, 24(1):719–728, 2017. doi: 10.1109/TVCG.2017.2743959.
7. Steven Houben, Nicolai Marquardt, Jo Vermeulen, Clemens N. Klokmose, Johannes Schöning, Harald Reiterer, and Christian Holz. Opportunities and challenges for cross-device interactions in the wild. Interactions, 24(5):58–63, 2017. doi: 10.1145/3121348.
8. Sebastian Hubenschmid, Jonathan Wieland, Daniel Immanuel Fink, Andrea Batch, Johannes Zagermann, Niklas Elmqvist, and Harald Reiterer. ReLive: Bridging in-situ and ex-situ visual analytics for analyzing mixed reality user studies. In CHI Conference on Human Factors in Computing Systems (CHI '22), pages 1–20, New Orleans, LA, USA, 2022. ACM. doi: 10.1145/3491102.3517550.
9. Sebastian Hubenschmid, Johannes Zagermann, Simon Butscher, and Harald Reiterer. STREAM: Exploring the combination of spatially-aware tablets with augmented reality head-mounted displays for immersive analytics. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21), pages 1–14, New York, NY, USA, 2021. ACM. doi: 10.1145/3411764.3445298.
10. Sebastian Hubenschmid, Johannes Zagermann, Daniel Fink, Jonathan Wieland, Tiare Feuchtner, and Harald Reiterer. Towards asynchronous hybrid user interfaces for cross-reality interaction. In ISS '21 Workshop Proceedings: “Transitional Interfaces in Mixed and Cross-Reality: A New Frontier?”, 2021.
11. Jakob Karolus and Albrecht Schmidt. Proficiency-aware systems: Adapting to the user's skills and expertise. In Proceedings of the 7th ACM International Symposium on Pervasive Displays, pages 1–2, 2018. ACM. doi: 10.1145/3205873.3210708.
12. Jakob Karolus, Paweł W. Wozniak, Lewis L. Chuang, and Albrecht Schmidt. Robust gaze features for enabling language proficiency awareness. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 2998–3010, 2017. ACM. doi: 10.1145/3025453.3025601.
13. Thomas Kosch, Albrecht Schmidt, Simon Thanheiser, and Lewis L. Chuang. One does not simply RSVP: Mental workload to select speed reading parameters using electroencephalography. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–13, 2020. ACM. doi: 10.1145/3313831.3376766.
14. Jens Müller, Roman Rädle, and Harald Reiterer. Remote collaboration with mixed reality displays: How shared virtual landmarks facilitate spatial referencing. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), pages 6481–6486, New York, NY, USA, 2017. ACM. doi: 10.1145/3025453.3025717.
15. Patrick Aggergaard Olin, Ahmad Mohammad Issa, Tiare Feuchtner, and Kaj Grønbæk. Designing for heterogeneous cross-device collaboration and social interaction in virtual reality. In 32nd Australian Conference on Human-Computer Interaction, pages 112–127, Sydney, NSW, Australia, 2020. ACM. doi: 10.1145/3441000.3441070.
16. Thomas Plank, Hans-Christian Jetter, Roman Rädle, Clemens N. Klokmose, Thomas Luger, and Harald Reiterer. Is two enough?! Studying benefits, barriers, and biases of multi-tablet use for collaborative visualization. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), pages 4548–4560, New York, NY, USA, 2017. ACM. doi: 10.1145/3025453.3025537.
17. Roman Rädle, Hans-Christian Jetter, Mario Schreiner, Zhihao Lu, Harald Reiterer, and Yvonne Rogers. Spatially-aware or spatially-agnostic? Elicitation and evaluation of user-defined cross-device interactions. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), pages 3913–3922, New York, NY, USA, 2015. ACM. doi: 10.1145/2702123.2702287.
18. Umar Rashid, Miguel A. Nacenta, and Aaron Quigley. Factors influencing visual attention switch in multi-display user interfaces: A survey. In Proceedings of the 2012 International Symposium on Pervasive Displays (PerDis '12), pages 1–6, Porto, Portugal, 2012. ACM. doi: 10.1145/2307798.2307799.
19. Albrecht Schmidt. Implicit human computer interaction through context. Personal Technologies, 4(2):191–199, 2000. doi: 10.1007/BF01324126.
20. Sofia Seinfeld, Tiare Feuchtner, Antonella Maselli, and Jörg Müller. User representations in human-computer interaction. Human–Computer Interaction, 36(5–6):400–438, 2021. doi: 10.1080/07370024.2020.1724790.
21. Irene Senna, Cesare V. Parise, and Marc O. Ernst. Modulation frequency as a cue for auditory speed perception. Proceedings of the Royal Society B: Biological Sciences, 284(1858):20170673, 2017. doi: 10.1098/rspb.2017.0673.
22. Matthew Turk. Multimodal interaction: A review. Pattern Recognition Letters, 36(1):189–195, 2014. doi: 10.1016/j.patrec.2013.07.003.
23. Andries van Dam. Post-WIMP user interfaces. Communications of the ACM, 40(2):63–67, 1997. doi: 10.1145/253671.253708.
24. Mark Weiser. The computer for the 21st century. Scientific American, 265(3):94–105, 1991. doi: 10.1145/329124.329126.
25. Johannes Zagermann, Ulrike Pfeil, Daniel Fink, Philipp von Bauer, and Harald Reiterer. Memory in motion: The influence of gesture- and touch-based input modalities on spatial memory. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pages 1899–1910, Denver, CO, USA, 2017. ACM. doi: 10.1145/3025453.3026001.
26. Johannes Zagermann, Ulrike Pfeil, Roman Rädle, Hans-Christian Jetter, Clemens Klokmose, and Harald Reiterer. When tablets meet tabletops: The effect of tabletop size on around-the-table collaboration with personal tablets. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 5470–5481, San Jose, CA, USA, 2016. ACM. doi: 10.1145/2858036.2858224.
27. Johannes Zagermann, Ulrike Pfeil, Philipp von Bauer, Daniel Fink, and Harald Reiterer. “It's in my other hand” – Studying the interplay of interaction techniques and multi-tablet activities. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–13, New York, NY, USA, 2020. ACM. doi: 10.1145/3313831.3376540.
28. Chris Zimmerer, Philipp Krop, Martin Fischbach, and Marc Erich Latoschik. Reducing the cognitive load of playing a digital tabletop game with a multimodal interface. In CHI Conference on Human Factors in Computing Systems (CHI '22), pages 1–13, New Orleans, LA, USA, 2022. ACM. doi: 10.1145/3491102.3502062.
© 2022 the author(s), published by De Gruyter
This work is licensed under the Creative Commons Attribution 4.0 International License.