Abstract
One crucial property of virtual reality (VR) is “self-projection”: the user perceives an avatar representing them in a virtual space as their own body, with a high level of reality. This is referred to as the “sense of embodiment (SoE)”. Using a head-mounted display (HMD), a user can visually perceive a three-dimensional (3D) virtual space generated by a computer. Moreover, if the user can touch objects in the virtual space and feel a haptic sense in their hands through a haptic device, the SoE will undoubtedly increase. However, since the workspace of the user’s hand with the haptic device is limited, a task performed in the virtual space differs from the same task performed in the real space. Therefore, in this paper, we evaluate the degree of agreement between the performance of a task in a virtual space and in real space through experiments consisting of the same task. As the haptic device for the virtual space, we used SPIDAR-GCC, a type of parallel-wire haptic device. In the real space, we asked seven research participants to move a tennis ball and a cola can placed on a desk to a prescribed position. For the experiments in the virtual space, we developed two 3D scenes in which a tennis ball or a cola can is placed on a desk. Then, using the HMD and SPIDAR-GCC, we asked the participants to move these objects to a prescribed position. We recorded these tasks on video and analyzed the footage. The analyses revealed significant differences in the manner in which the objects were moved.
1 Introduction
1.1 Background and Purpose
One crucial property of virtual reality (VR) is “self-projection”: the user perceives an avatar representing them in a virtual space as their own body, with a high level of reality. This can be rephrased as the “sense of embodiment (SoE)”. By using a head-mounted display (HMD), the user can visually perceive a three-dimensional (3D) virtual space generated by a computer. In addition, if the user can touch objects in the virtual space and feel a haptic sense in their own hands through a haptic device, the SoE will increase. Therefore, the feeling of touching an object in the virtual space must be reproduced realistically and consistently. However, since the workspace of the user’s hand with the haptic device is limited, a task performed in the virtual space differs from the same task performed in the real space. Therefore, in this paper, we examine the degree of coincidence of human work in virtual and real spaces that are constructed to be as identical as possible. The final goal of this research is to evaluate the degree of agreement between a task in the real and virtual spaces and to investigate the SoE. For the task in the virtual space, we used SPIDAR-GCC, a haptic device that can present a 6DOF force sense, together with an HMD.
1.2 Outline of Research
In the real space, the research participants performed the task of moving an object to a position 60 cm away; in the virtual space, they performed the same task using the haptic device SPIDAR-GCC and the HMD. Comparing the performances of the task in the two kinds of space, we analyzed and evaluated the differences in the distance and trajectory of the objects using the captured videos.
2 Proposed Device and Application
2.1 SPIDAR-GCC
Sato et al. [1] developed the space interface device for artificial reality (SPIDAR) to present a force sense using a parallel-wire scheme. Strings are tied to a force-presentation part called the end effector and are controlled by motors to generate tension. Owing to this tension, the user can feel a force sense via the end effector. In addition, the position and orientation of the end effector can be obtained from the lengths of the strings. In this paper, we employed the SPIDAR-GCC shown in Fig. 1, which can present a 6DOF force sense; a sphere grip serves as the end effector. Table 1 shows the specifications of SPIDAR-GCC, as described in the source file “DeviceSpec.cpp”, which defines the device specification of SPIDAR.
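The principle of recovering the end-effector position from the wire lengths can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the SPIDAR-GCC implementation: we use four hypothetical anchor points and a point-like end effector, and solve the trilateration problem by Gauss–Newton least squares. The real device uses more wires and also recovers orientation.

```python
import numpy as np

# Hypothetical anchor positions (metres) of four wire take-off points on the
# frame; the actual SPIDAR-GCC geometry differs.
ANCHORS = np.array([
    [0.0, 0.0, 0.0],
    [0.4, 0.0, 0.0],
    [0.0, 0.4, 0.0],
    [0.0, 0.0, 0.4],
])

def end_effector_position(lengths, iters=50):
    """Estimate the end-effector position p from measured wire lengths l_i
    by Gauss-Newton least squares on the residuals ||p - a_i|| - l_i."""
    p = np.array([0.2, 0.2, 0.2])  # initial guess inside the workspace
    for _ in range(iters):
        d = np.linalg.norm(p - ANCHORS, axis=1)   # current wire lengths
        r = d - lengths                           # residuals
        J = (p - ANCHORS) / d[:, None]            # Jacobian of ||p - a_i||
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p - step
    return p

# Sanity check: lengths measured to a known point should recover that point.
true_p = np.array([0.1, 0.15, 0.2])
lengths = np.linalg.norm(true_p - ANCHORS, axis=1)
est = end_effector_position(lengths)
print(est)  # close to true_p
```

With exact (noise-free) lengths and non-coplanar anchors, the solution is unique and the iteration converges quickly; with real encoder noise, the same least-squares formulation simply returns the best-fit position.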
2.2 FOVE
We used FOVE [2], an HMD developed by FOVE Inc., to display the virtual space generated by the computer. Figure 2 shows the FOVE, and Table 2 shows its specifications.
2.3 Developed Application
In this research, using SPIDAR-GCC and the HMD, participants performed a task of grabbing and moving objects in the virtual space. We developed the virtual space using Unity [3], one of the development environments for 3D programs. Figure 3 shows an example of a developed virtual space.
When the end effector of SPIDAR-GCC is grasped and moved by hand, the virtual hand, shown as a white sphere, moves with a distance multiplication factor of 20. While the user presses a button on the end effector with a finger, an object can be grabbed; when the button is released, the object is released. This makes it possible to grab, move, and release the object.
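The distance multiplication factor can be sketched as a simple linear mapping from the real end-effector displacement to the virtual hand. The function name and coordinate handling below are illustrative assumptions; only the factor of 20 comes from the paper.

```python
import numpy as np

SCALE = 20.0  # distance multiplication factor used in the paper

def virtual_hand_position(hand_origin, effector_origin, effector_pos):
    """Map a real end-effector displacement to the virtual hand:
    a 1 cm motion of the grip moves the white-sphere hand 20 cm."""
    delta = np.asarray(effector_pos) - np.asarray(effector_origin)
    return np.asarray(hand_origin) + SCALE * delta

# Moving the grip only 3 cm carries the virtual hand the full 60 cm
# of the task distance.
p = virtual_hand_position(np.zeros(3), [0.0, 0.0, 0.0], [0.03, 0.0, 0.0])
print(p)  # 0.6 m along the first axis
```

Under this mapping, the 60 cm virtual task corresponds to only a 3 cm hand motion, which is relevant to the distance overshoot discussed in Sect. 3.2.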
We chose a tennis ball and an unopened cola can. We chose them because both are well recognized in terms of size and weight, and we expected them to serve as cues for distance while the objects are being moved. In particular, the tennis ball is similar in shape and size to the end effector of SPIDAR-GCC. Figures 4 and 5 show the 3D models of the tennis ball and cola can used in the virtual space. The size and weight of the virtual tennis ball and cola can were set to be the same as those of their real counterparts; Table 3 shows their size and weight.
To analyze the task of grasping and moving the object from video, we set two cameras in the virtual space. One camera captures the view displayed on the HMD, while the other captures the workspace from the left side to record the trajectory of the moving object. Figures 6 and 7 show frames from the captured videos.
3 An Experiment of Motion Analysis
3.1 Experimental Method
Experiment in Real Space
In this experiment, seven participants took part. Figure 8 shows the real workspace. The procedure of the experiment is described in (1) to (4) below.
(1) Three cameras A, B, and C were installed for the following purposes:
Camera A: to record the eye movements of the research participant; placed obliquely in front and to the left of the participant.
Camera B: to record the movement of the object; placed on the left side of the workspace.
Camera C: to record the movements of the participant’s arms; placed obliquely behind and to the left of the participant.
(2) The research participant moves the object from the initial position, where it is placed, to another position 60 cm away. Tape markers indicated the initial and terminal positions. The experimenter placed the object at the initial position.
(3) Initially, the research participant sat on a chair with hands placed on the knees. On the cue “Start”, the participant grabbed the object with the right hand, moved it to the terminal position, and placed it there; this allowed the participant to learn the distance of 60 cm. As shown in Table 4, the participant alternated between moving the object linearly and freely, in blocks of five trials each. The experimenter defined the end of the task as the moment the participant placed the tennis ball on the desk and released it.
(4) After the research participant finished the task with the tennis ball, the experimenter replaced the tennis ball with a cola can, and the participant repeated step (3).
Experiment in Virtual Space
Figure 9 shows the experiment in the virtual space using SPIDAR-GCC and the HMD. In the same manner as in the real space, the research participant moved the tennis ball and then the cola can from the initial position to another position 60 cm away, following the manner of movement shown in Table 4. In this case, since we wanted the research participants to estimate the distance of 60 cm themselves, there were no markers at the initial and terminal positions. Initially, the research participant sat on a chair, wearing the HMD and grasping the end effector of SPIDAR-GCC. On the cue “Start”, the participant pressed the button on the end effector, grasped the object, and moved it to the terminal position. We regarded the end of the task as the instant when the participant placed the object on the desk and released it by releasing the button.
3.2 Experimental Results and Discussion
Figures 10 and 11 show the movement distances of the tennis ball and cola can in the real and virtual spaces, respectively. Figure 12 shows the average movement distance of the objects across the seven participants. In addition, Figs. 13 and 14 show the lifting heights of the objects, while Fig. 15 shows the average maximum lifting height across the seven participants.
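The two measures plotted in Figs. 10–15, movement distance and maximum lifting height, can be computed from the object positions tracked in the side-view video. The coordinate convention and the synthetic trajectory below are illustrative assumptions; the paper does not describe its tracking pipeline.

```python
import numpy as np

def movement_metrics(track):
    """track: per-frame (x, z) object positions from the side-view video,
    with x the horizontal travel direction and z the height above the desk,
    in metres. Returns (net horizontal distance, maximum lift height)."""
    track = np.asarray(track, dtype=float)
    distance = abs(track[-1, 0] - track[0, 0])          # start-to-end travel
    max_height = track[:, 1].max() - track[0, 1]        # peak lift above start
    return distance, max_height

# Synthetic example: the object travels 65 cm while being lifted to 5 cm.
t = np.linspace(0.0, 1.0, 51)
track = np.column_stack([0.65 * t, 0.05 * np.sin(np.pi * t)])
d, h = movement_metrics(track)
print(f"distance = {d:.2f} m, max height = {h:.3f} m")
# → distance = 0.65 m, max height = 0.050 m
```

Applying such a function per trial and averaging across participants yields the quantities summarized in Figs. 12 and 15.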
First, we discuss Figs. 10 and 11. Many of the research participants moved the object farther than 60 cm in the virtual space. We attribute this to the sense of the 60 cm distance learned in the real space combined with the distance multiplication factor of 20 in the virtual space.
Next, we discuss Figs. 13 and 14. All the research participants lifted the object higher in the virtual space than in the real space. One likely cause is that they viewed the workspace from an overhead perspective, which made it difficult to judge the lifting height of the object visually. The greater lifting height in the virtual space is therefore probably also influenced by the distance multiplication factor.
4 Conclusions and Future Work
In this paper, we analyzed the differences between tasks performed in the real and virtual spaces using recorded videos. In the virtual space, participants tended to move the object farther than 60 cm under the influence of the distance multiplication factor of the end effector of SPIDAR-GCC, which constitutes a significant difference between the tasks in the real and virtual spaces. In our research, we set the multiplication factor to 20, and its effect is evident in the difference in the object-moving task. Therefore, in future research, we shall perform experiments to adjust the distance multiplication factor so that the task can be performed without conflicting sensations, and we shall also explore the relationship between a higher SoE and visual/haptic information. Furthermore, space recognition in the virtual space appears to differ from that in the real space, so objects in the virtual space look different from their real counterparts; we shall also investigate this difference experimentally in subsequent work.
References
Sato, M., Hirata, Y., Kawarada, H.: Space interface device for artificial reality–SPIDAR. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. J74-D(7), 887–894 (1991)
Home - FOVE Eye Tracking Virtual Reality Headset. https://www.getfove.com/. Accessed 21 Feb 2019
Unity Technologies. https://unity3d.com/. Accessed 21 Feb 2019
Acknowledgments
We would like to thank all the research participants. We would also like to thank Enago (www.enago.jp) for the English-language review. This work was supported by JSPS KAKENHI Grant Number JP17H01782.
© 2019 Springer Nature Switzerland AG
Tasaka, Y., Ichimaru, H., Yamamoto, S., Sato, M., Yamaguchi, T., Harada, T. (2019). Analysis of Differences in the Manner to Move Objects in a Real and Virtual Space. In: Yamamoto, S., Mori, H. (eds) Human Interface and the Management of Information. Information in Intelligent Systems. HCII 2019. Lecture Notes in Computer Science(), vol 11570. Springer, Cham. https://doi.org/10.1007/978-3-030-22649-7_6
Print ISBN: 978-3-030-22648-0
Online ISBN: 978-3-030-22649-7