Abstract
Informative and realistic haptic feedback significantly enhances virtual reality (VR) manipulation. In particular, vibrotactile feedback (VF) can deliver diverse haptic sensations while remaining relatively simple to implement, which has made it a go-to solution for haptics in hand-held controllers and tangible props for VR. However, VF in hand-held devices has so far been limited to monolithic vibration of the entire device. It is thus unclear to what extent such devices could support the delivery of spatialized information within the hand. In this paper, we consider a tangible cylindrical handle that allows interaction with virtual objects extending beyond it. This handle is fitted with a pair of vibrotactile actuators with the objective of providing in-hand spatialized cues indicating the direction and distance of impacts. We evaluated its capability for rendering spatialized impacts with external virtual objects. Results show that it performs very well for conveying an impact’s direction and moderately well for conveying an impact’s distance to the user.
This project has received funding from the EU Horizon 2020 program, grant agreement No 801413, project “H-Reality”; and from the Inria Défi project “DORNELL”.
1 Introduction and Related Work
Vibrotactile feedback (VF) is a popular haptic feedback modality for virtual reality (VR) interaction because it combines relatively low technological complexity in its implementation with a wide variety of achievable haptic effects [5]. There is evidence for a positive impact of VF on many success metrics for interactions with virtual environments (VEs), such as improved task performances [2], improved user immersion [2], increased perceived realism [17] and increased presence [6, 12]. VF can be used to communicate both physical cues relating to the VE (e.g. vibrating objects [16], contacts [4], impacts [8], interaction forces [3], texture roughness [7]) as well as abstract cues (e.g. for guidance, notification or communication [5]). Many technologies can deliver VF in VR, such as wearable [5], grounded [19], hand-held [1], and even mid-air haptic devices [10].
Despite the wide use of VF delivered through ungrounded hand-held devices in VR, the approach is mostly restricted to monolithic VF [5]. This has the advantage of simplicity, as it requires only a single actuator. However, it remains inadequate for providing spatial information: cues originating from different directions relative to the user are identical and thus indistinguishable, unless a mapping is created between direction and waveform parameters.
Conversely, localized VF through multiple actuators is widely used to convey spatial information to the user, in particular in wearables (e.g. [5, 11, 12]) and surface haptics (e.g. [15]). In this paper, we begin to explore the possibilities offered by localized VF within handheld tangible objects. In particular, we focus on rendering spatialized impacts happening on a virtual hand-held object larger than the tangible held by the user. To render the impacts, the tangible houses two vibrotactile motors at its extremities. We hypothesize that by using two actuators, we can provide localized vibrotactile feedback which can inform the user about where the impact occurred on the larger virtual object they are manipulating (see Fig. 1).
In early work on impact rendering in interactions within VEs, Wellman et al. [21] used a data-driven approach to play back recorded impact vibrations during virtual contacts on a voice-coil actuator embedded into a grounded force-feedback device handle. Okamura et al. expanded on this, compiling a vibration waveform library for impacts generated by fitting a simplified vibration model based on an exponentially decaying sinusoid to recorded impact data [14]. Because this model (see Sect. 2.2) provided an interesting compromise between perceived realism, impact property discrimination, and computing requirements, it has since been widely adopted in interactions with VEs [13, 19]. Some work on spatialization in VR was performed by Gongora et al. [9]. They studied vibrotactile impacts delivered in a bimanual task using a pair of monolithic handheld vibrotactile devices, with the aim of rendering localized vibrotactile impacts along a virtual bar connecting both hands. There have also been a few research attempts at systems spatializing vibrotactile cues inside hand-held devices using multiple vibrotactors [18] or asymmetric vibrations [20] but to our knowledge none have been leveraged in VR interactions.
2 Experimental Design
2.1 Research Questions and Hypotheses
We seek to provide VF to render impacts between one manipulated virtual object and other virtual objects in a VE. Our question concerns the extent to which spatializing impact cues by distributing them between two actuators embedded in a cylindrical tangible handle (see Fig. 1-B) is effective in providing users with information on impact direction. We also seek to understand how this approach affects perceived realism and impact properties, and whether it is compatible with existing approaches to rendering impact distance in a setup using a single actuator (e.g. [9, 19]). To investigate this, we compare distance and direction discrimination performances, as well as perceived realism and virtual object material properties in VR, using different impact vibration models (see Table 1). We formulate the following hypotheses:
- H1: Spatialization of impacts in hand by assigning impact waveforms to distinct vibrotactors will allow discrimination of impact direction, regardless of the chosen impact vibration model.
- H2: Impact models coding distance with more redundant parameters (see Sect. 2.2 for the details of the models) will yield better distance discrimination performance.
(A) Virtual rod manipulated in the experiment, with 4 possible impact distances extending symmetrically around the virtual hand. \(x_{th}\) and \(x_p\) respectively denote the thumb- and pinkie-side actuator positions. (B) Possible evolution of vibration amplitude A, decay \(\beta \) and frequency \(f=\omega /2\pi \) as a function of impact distance for both actuators. Values were determined based on the literature and a pilot study. We do not consider any impact occurring within the hand, hence the null values between \(x_{th}\) and \(x_p\).
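The per-actuator assignment implied by the caption (each actuator renders only impacts occurring on its own side, and nothing between \(x_{th}\) and \(x_p\)) can be sketched as follows. The positions and sign convention here are illustrative assumptions, not values from the paper:

```python
def actuator_gains(x, x_th=-0.05, x_p=0.05):
    """Assign the impact waveform to one of the two actuators based on which
    side of the hand the impact occurred. Positions are in metres along the
    rod, with the hand at the origin and the thumb side negative (a
    hypothetical convention chosen for this sketch)."""
    if x <= x_th:        # thumb-side impact: thumb actuator plays
        return 1.0, 0.0
    if x >= x_p:         # pinkie-side impact: pinkie actuator plays
        return 0.0, 1.0
    return 0.0, 0.0      # impacts within the hand are not rendered
```

Returning a gain pair (rather than a single actuator index) keeps the door open for the blended distributions mentioned as future work.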
2.2 Rendering Impacts Distance and Direction
We use the simplified impact vibration model introduced by Okamura et al. [14], where \(\alpha (x,t)\) denotes the waveform amplitude at instant t for an impact at a distance x from the hand (see Fig. 2): \(\alpha (x,t) = A(x)\, e^{-\beta (x)t} \sin (\omega (x)\, t).\) In realistic impacts, the peak amplitude A, decay \(\beta \) and angular frequency \(\omega \) would all be functions of impact distance as well as impact dynamics and the properties of the materials involved. However, such impact models can sometimes be less effective at communicating usable information on impact distance [19]. An alternative is to select a subset of the model parameters (A, \(\beta \), \(\omega \)) to encode impact distance, possibly leaving the remainder free for encoding other impact properties. Given these three parameters, there are seven different possibilities (see Table 1) for encoding impact distance (see Fig. 2).
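As an illustration, the decaying-sinusoid model and the seven candidate parameter subsets can be sketched as below. The distance-to-parameter ramps are placeholders for this sketch; the paper derives its own mappings from the literature and a pilot study:

```python
import numpy as np
from itertools import combinations

def impact_waveform(A, beta, omega, duration=0.2, fs=8000):
    """Decaying-sinusoid impact model: alpha(t) = A * exp(-beta*t) * sin(omega*t)."""
    t = np.arange(0.0, duration, 1.0 / fs)
    return A * np.exp(-beta * t) * np.sin(omega * t)

def params_for_distance(x, x_max=0.5):
    """Hypothetical linear ramps mapping impact distance x (in metres) to
    model parameters; the actual functions used in the study differ."""
    A = 1.0 - 0.8 * (x / x_max)                         # amplitude falls with distance
    beta = 40.0 + 160.0 * (x / x_max)                   # decay grows with distance
    omega = 2 * np.pi * (250.0 - 150.0 * (x / x_max))   # frequency falls with distance
    return A, beta, omega

# The seven non-empty subsets of {A, beta, omega} available to encode distance
# (corresponding to Table 1: Amp, Dec, Freq, AmpDec, AmpFreq, DecFreq, AmpDecFreq).
subsets = [c for r in (1, 2, 3) for c in combinations(("A", "beta", "omega"), r)]
assert len(subsets) == 7
```

Parameters left out of the chosen subset stay fixed across distances and remain free to encode other impact properties, such as material.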
2.3 Materials and Methods
To investigate the formulated hypotheses, we designed a pair of experiments assessing impact direction and distance perception in VR.
Hardware. Subjects sat at a table, wearing an HTC Vive Pro head-mounted display (HMD). They held the vibrotactile handle in their dominant hand which was tracked using an HTC Vive Tracker attached using an adhesive fixture to keep the palm and inside of the fingers unobstructed. They used an HTC Vive Controller held in their non-dominant hand to answer experimental questions (see Fig. 3-A). The handle was equipped with a pair of symmetrically mounted Actronika HapCoil One voice-coil actuators (see Fig. 1-A).
Experimental Task. The common experimental task for both experiments was inspired by Sreng et al. [19]. Subjects were asked to hold the tangible handle in their dominant hand. They observed the VE showing their virtual hand holding a virtual rod with the same diameter as the tangible handle but extending symmetrically 0.5 m beyond its edges. By moving this virtual rod up and down, it could impact a lightweight, unconstrained object at one of four distances \(d_i = \{0.05, 0.2, 0.35, 0.50\}\,\mathrm {m}\) from either the thumb or the pinkie side of the hand (see Fig. 2-B,C). These impacts were rendered according to one of the impact models summarized in Table 1. During the experiment, the impacted object was occluded so as to provide no visual feedback of the impact location (see Fig. 3-C). Subjects placed the rod at the starting location, then were prompted to move it downward. On the way down, the rod impacted a first virtual object which appeared randomly on the left or right at one of the distances \(d_i\). Upon reaching the target location, subjects were prompted to return the rod to the start location and repeat the process. A second object appeared on the same side as the first, at one of the four possible distances, generating a second impact, after which subjects answered a pair of experimental questions:
- Q1: Which side did the impacts occur on? (Left/Right)
- Q2: Was the second impact further away from the hand than the first? (Y/N)
Experimental Design and Protocol. To keep individual sessions short, we split our investigation into two experiments with identical protocols, each containing 4 blocks. Impacts were rendered using Amp, Dec, AmpDec, and AmpDecFreq in experiment 1, and Freq, AmpFreq, DecFreq, and AmpDecFreq in experiment 2 (see Table 1). 24 subjects (19 m., 5 f., ages 21–30, mean 24.9 y, 20 right-handed) took part in the study after providing written informed consent. Subjects were randomly assigned to one of the two experiments.
In each experiment, subjects first performed a familiarisation task where the VE was fully visible, showing the hand-held virtual stick and the impacted virtual objects (see Fig. 3-B). During this task we ensured that subjects moved at a similar speed, though the vibration did not depend on it. They were informed that the rod and impacted object properties might vary during the course of the subsequent experiment. Subjects filled out an initial questionnaire indicating personal data and prior experience with haptics, VR and perception studies.
Each experiment contained one block per impact model, whose order was counterbalanced between subjects. Within each block, subjects performed 3 repetitions of the task for each of the 16 combinations of impact distances occurring on either side, totalling 96 trials presented in a fully random order. Post-block questionnaires assessed perception of the stick and impacted object’s material and geometric properties, their variability, and perceived impact realism.
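The per-block trial structure described above (3 repetitions × 16 ordered distance pairs × 2 sides = 96 trials) can be sketched as follows; the side labels are illustrative:

```python
from itertools import product
import random

DISTANCES = (0.05, 0.20, 0.35, 0.50)  # impact distances in metres
SIDES = ("thumb", "pinkie")
REPETITIONS = 3

# 4 x 4 ordered pairs of impact distances, each presented on either side,
# repeated 3 times: 96 trials per block.
block = [(side, d1, d2)
         for _ in range(REPETITIONS)
         for side, (d1, d2) in product(SIDES, product(DISTANCES, repeat=2))]
assert len(block) == 96

random.shuffle(block)  # fully random presentation order within the block
```

Note that the pairs are ordered and include equal-distance pairs, which is what makes a psychometric fit of the "second further" responses possible.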
3 Results
Impact directions were consistently correctly identified between 94% and 97% of the time across all impact models. Most errors occurred for pairs of low amplitude and duration stimuli.
To test for H2, subjects were assigned, for each impact model, to one of two groups based on whether they interpreted the impact model as intended (increased impact distance perceived as an increased impact distance) or in an inverted manner (increased impact distance perceived as a decreased impact distance). Inversion rates (percentage of subjects interpreting an increase in impact distance as a decrease) were around 50% for all models not involving Freq, and varied between 92% and 100% for all models involving Freq.
We then computed the 75% just-noticeable difference (JND) for distance discrimination as a Weber fraction for each subject by fitting cumulative Gaussians to the data. Finally, we compared the distribution of JNDs across impact vibration models (see Fig. 4). Data from experiment 1 (Fig. 4-B) were not normally distributed, and a Friedman test showed no significant differences between conditions. Data from experiment 2 were normally distributed, and a 2-way ANOVA showed a significant effect of impact model (\(F(3) = 4.132\), \(p = 0.021\)) but no significant differences between participants. A post-hoc Tukey HSD test revealed that the only significant difference lay between the JNDs for the Freq and AmpDecFreq conditions (\(p=0.016\)).
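The JND estimation step can be sketched as below, assuming the proportion of "second impact further" responses is modelled as a cumulative Gaussian of the stimulus difference. The exact stimulus variable and fitting procedure are not detailed in the paper, so this is a placeholder:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(x, mu, sigma):
    """Psychometric function: P(respond "second further") vs. stimulus level x."""
    return norm.cdf(x, loc=mu, scale=sigma)

def jnd_75(stim_levels, p_further):
    """Fit a cumulative Gaussian and return the 75% JND. For a Gaussian
    psychometric function, the 75% point lies norm.ppf(0.75) * sigma above
    the point of subjective equality (mu)."""
    (mu, sigma), _ = curve_fit(cumulative_gaussian,
                               np.asarray(stim_levels), np.asarray(p_further),
                               p0=(0.0, 1.0))
    return norm.ppf(0.75) * abs(sigma)
```

With noise-free synthetic data generated from a known sigma, `jnd_75` recovers norm.ppf(0.75) * sigma (about 0.674 * sigma); dividing by the reference distance would yield a Weber fraction.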
Rod properties were rated most consistent (median 2 of 7) in all conditions but Amp (median 3 of 7) and AmpDecFreq (median 4 of 7), however none of these differences were significant. The properties reported as changing between trials were rod material (Freq, AmpDec, AmpDecFreq), stiffness (all models except Dec), length (Dec, AmpDec, AmpFreq), fill (Dec, AmpDec), weight (Freq, AmpDecFreq). Subjectively reported rod materials were dominated by “metal” and “plastic” for all models involving Amp, as well as the Dec model, with qualifiers such as “resonating” and “tube”. Models involving Freq but not Amp yielded more “wood” and “plastic” responses, with qualifiers such as “soft”, “damped” and “warm”. AmpDecFreq yielded an almost even mix of all three material categories. Realism was consistently rated as average across all models (median 3 of 7) and was considered slightly variable across all models (median 3 of 7).
4 Discussion
The impact direction identification rates between 94% and 97% indicate that, regardless of the chosen impact model, spatializing the impacts between two actuators allowed subjects to correctly and intuitively identify the side on which the impact occurred with a high degree of accuracy. Hypothesis H1 is therefore supported. Looking at inversion rates, it is interesting to note that all models involving Freq tended to be systematically inverted (92% to 100% of subjects perceived an increase in distance as a decrease), suggesting that the chosen evolution of \(\omega \) with distance may be the cause.
Weber fractions for distance discrimination were consistently high across all impact models except AmpDecFreq (mean 0.17), ranging from 0.6 (DecFreq, experiment 2) to 1.32 (Amp, experiment 1). This indicates that while distance discrimination was mostly possible, it was far from an easy task. The only statistically significant difference observed (Freq vs. AmpDecFreq, experiment 2) is in favor of hypothesis H2, and the mean JNDs also seem to support it. However, given the poor performance of AmpDecFreq in experiment 1 and the fact that only one of the differences is statistically significant, we cannot conclude that H2 is supported. This may be due to H2 being wrong, or to flaws in the stimulus or experimental task design. If H2 is not verified, there may be considerable headroom for encoding various impact properties by distributing them across different parameters without adversely impacting performance.
The high inversion rates associated with using \(\omega \) as a parameter led us to hypothesize that models combining \(\omega \) with A, \(\beta \) or both may have been confusing to the subjects who did not invert their interpretation of Amp and Dec. This hypothesis cannot be tested directly because subjects who performed Amp and Dec did not perform AmpFreq and DecFreq. However, analysing the results from experiment 1 revealed that 6 of 12 subjects had inverted both Amp and Dec while 5 of 12 had not (the remaining subject inverted only one of the two models). Looking at the JNDs for each of these groups in the AmpDecFreq condition, the group that inverted both Amp and Dec performed better (JNDs: 0.07 to 1.96, mean 0.83) than the group that did not (JNDs: 2.15 to 9.33, mean 4.63). This tends to support our interpretation and argues for redesigning the function \(\omega (x)\) in our rendering approach. Such a redesign may also need to account for the frequency dependence of vibration amplitude perception. However, given the very small sample size, this conclusion must be seen as tentative.
The spread in JNDs indicates a large inter-subject variability in the ability to perform the task. During the experiment, several subjects noted that the task was very difficult until they “chose” a way to understand the mapping of the stimuli to impact distance. We therefore believe this variability reflects differing capacities for adapting to the difficulty of the experimental task and choosing an effective response strategy. This means that the haptic representation of impact distance is far from intuitive or natural with the chosen models, although AmpDecFreq shows some promise in experiment 2. This may indicate poor model design, or that distance discrimination is genuinely difficult without contextual cues such as visual feedback of the impacts.
All models were perceived as equally (un)realistic, indicating that either the underlying impact model is unrealistic, that the spatialization itself degraded realism, or both.
5 Conclusion and Perspectives
We presented an investigation into the use of spatialized in-hand vibrotactile feedback for VR interactions. Our study focused on the ability of a handle equipped with two vibrotactors to deliver realistic, discriminable and understandable sensations of impacts through which users could determine the location (direction and distance) of impacts on a virtual manipulated object.
We determined direction identification scores, JNDs for impact distance, and perceived impact realism for 7 impact vibration models. Results showed excellent direction identification performances, but distance discrimination performances were mediocre. Impacts were perceived as only moderately realistic, which may be due to the impact models studied as well as our spatialization technique.
In future work, we plan to investigate whether differently distributing the vibrations between both actuators can improve perceived realism and consistency between impacts while preserving distance and direction discrimination performance. This study also highlighted certain avenues for improving the perception of vibration impact models which we intend to investigate. Finally, we plan to extend the approach to 2D and 3D spatialization using more actuators.
Because of the good direction discrimination performance observed in this initial study, we believe there is potential for using multiple actuators in manipulated tangible objects or controllers. This seems particularly promising for VR applications which could benefit from the use of in-hand directional cues.
References
Adilkhanov, A., et al.: Vibero: vibrotactile stiffness perception interface for virtual reality. IEEE RAL 5(2), 2785–2792 (2020)
Brasen, P.W., Christoffersen, M., Kraus, M.: Effects of vibrotactile feedback in commercial virtual reality systems. In: Brooks, A.L., Brooks, E., Sylla, C. (eds.) ArtsIT/DLI -2018. LNICST, vol. 265, pp. 219–224. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-06134-0_25
Cheng, L.T., et al.: Vibrotactile feedback in delicate virtual reality operations. In: Proc. ACM ICM, pp. 243–251 (1997)
Chinello, F., et al.: A three revolute-revolute-spherical wearable fingertip cutaneous device for stiffness rendering. IEEE Trans. Haptics 11(1), 39–50 (2017)
Choi, S., et al.: Vibrotactile display: perception, technology, and applications. Proc. IEEE 101(9), 2093–2104 (2012)
Cooper, N., et al.: The effects of substitute multisensory feedback on task performance and the sense of presence in a virtual reality environment. PloS One 13(2), e0191846 (2018)
Culbertson, H., et al.: The penn haptic texture toolkit for modeling, rendering, and evaluating haptic virtual textures. Tech. Rep. (2014)
García-Valle, G., et al.: Evaluation of presence in virtual environments: haptic vest and user’s haptic skills. IEEE Access 6, 7224–7233 (2017)
Gongora, D., Nagano, H., Konyo, M., Tadokoro, S.: Experiments on two-handed localization of impact vibrations. In: Hasegawa, S., Konyo, M., Kyung, K.-U., Nojima, T., Kajimoto, H. (eds.) AsiaHaptics 2016. LNEE, vol. 432, pp. 33–39. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-4157-0_6
Howard, T., et al.: Pumah: pan-tilt ultrasound mid-air haptics for larger interaction workspace in virtual reality. IEEE Trans. Haptics 13(1), 38–44 (2019)
de Jesus Oliveira, V., et al.: Designing a vibrotactile head-mounted display for spatial awareness in 3d spaces. IEEE TVCG 23(4), 1409–1417 (2017)
Kaul, O.B., Meier, K., Rohs, M.: Increasing presence in virtual reality with a vibrotactile grid around the head. In: Bernhaupt, R., Dalvi, G., Joshi, A., K. Balkrishan, D., O’Neill, J., Winckler, M. (eds.) INTERACT 2017. LNCS, vol. 10516, pp. 289–298. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68059-0_19
Kuchenbecker, K., et al.: Improving contact realism through event-based haptic feedback. IEEE TVCG 12(2), 219–230 (2006)
Okamura, A., et al.: Vibration feedback models for virtual environments. In: Proc. IEEE ICRA, vol. 1, pp. 674–679 (1998)
Pantera, L., et al.: Multitouch vibrotactile feedback on a tactile screen by the inverse filter technique: vibration amplitude and spatial resolution. IEEE Trans. Haptics 13(3), 493–503 (2020)
Passalenti, A., et al.: No strings attached: force and vibrotactile feedback in a virtual guitar simulation. In: Proc. IEEE VR, pp. 1116–1117 (2019)
Peng, Y., et al.: WalkingVibe: reducing virtual reality sickness and improving realism while walking in VR using unobtrusive head-mounted vibrotactile feedback. In: Proc. CHI, pp. 1–12 (2020)
Ryu, D., et al.: T-hive: vibrotactile interface presenting spatial information on handle surface. In: Proc. IEEE ICRA, pp. 683–688 (2009)
Sreng, J., Lécuyer, A., Andriot, C.: Using vibration patterns to provide impact position information in haptic manipulation of virtual objects. In: Ferre, M. (ed.) EuroHaptics 2008. LNCS, vol. 5024, pp. 589–598. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-69057-3_76
Tappeiner, H., et al.: Good vibrations: asymmetric vibrations for directional haptic cues. In: Proc. IEEE WHC, pp. 285–289 (2009)
Wellman, P., et al.: Towards realistic vibrotactile display in virtual environments. In: Proc. ASME Dynamic Systems & Control Division, vol. 57, pp. 713–718 (1995)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
Cite this paper: Cabaret, P.A., Howard, T., Pacchierotti, C., Babel, M., Marchal, M. (2022). Perception of Spatialized Vibrotactile Impacts in a Hand-Held Tangible for Virtual Reality. In: Seifi, H., et al. (eds.) Haptics: Science, Technology, Applications. EuroHaptics 2022. Lecture Notes in Computer Science, vol. 13235. Springer, Cham. https://doi.org/10.1007/978-3-031-06249-0_30. Print ISBN 978-3-031-06248-3; Online ISBN 978-3-031-06249-0.