
A Lens-Based Extension of Raycasting for Accurate Selection in Dense 3D Environments

  • Conference paper
  • In: Human-Computer Interaction – INTERACT 2021 (INTERACT 2021)
  • Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12935)

Abstract

In mixed reality environments, the selection of distant 3D objects is commonly based on raycasting. To address the limitations of raycasting for selecting small targets in dense environments, we present RayLens, an extended raycasting technique. RayLens is a bimanual selection technique that combines raycasting with a virtual 2D magnification lens that can be remotely moved in 3D space with the non-dominant hand. We experimentally compared RayLens with a standard raycasting technique as well as with RaySlider, an extension of raycasting based on a target expansion mechanism whose design is akin to that of RayLens. RayLens is considerably more accurate and more than 1.3× faster than raycasting for selecting small targets. Furthermore, RayLens is more than 1.6× faster than RaySlider in dense environments. Qualitatively, RayLens is easy to learn and was the preferred technique, making it a good candidate for general public use.
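
To make the interaction described above concrete, the sketch below illustrates one way a bimanual lens-plus-ray selection loop could be implemented. It is a minimal, hedged illustration, not the authors' implementation: the names (MagnificationLens, pick_with_lens), the circular lens shape, the parameter values, and the hit-test logic are assumptions introduced here for clarity.

```python
import numpy as np

class MagnificationLens:
    """A circular lens placed in the scene, repositioned by the non-dominant hand."""
    def __init__(self, center, radius=0.05, magnification=3.0):
        self.center = np.asarray(center, dtype=float)  # lens centre in world space (metres)
        self.radius = radius                            # lens radius (metres)
        self.magnification = magnification              # scale factor applied inside the lens

    def move(self, hand_delta, gain=3.0):
        # Non-dominant-hand motion is amplified (e.g. the 1:3 control-display gain
        # mentioned in footnote 1) so the lens can reach distant parts of the scene.
        self.center = self.center + gain * np.asarray(hand_delta, dtype=float)

def pick_with_lens(ray_origin, ray_dir, targets, lens):
    """Return the first target hit by the ray, using magnified footprints inside the lens."""
    ray_origin = np.asarray(ray_origin, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    best, best_dist = None, float("inf")
    for t in targets:  # each target: {"position": (x, y, z), "radius": r}
        pos = np.asarray(t["position"], dtype=float)
        radius = t["radius"]
        if np.linalg.norm(pos - lens.center) <= lens.radius:
            # Inside the lens, both the offset from the lens centre and the target
            # size are magnified, enlarging the selectable footprint.
            pos = lens.center + lens.magnification * (pos - lens.center)
            radius = radius * lens.magnification
        to_target = pos - ray_origin
        along = float(np.dot(to_target, ray_dir))
        if along <= 0.0:
            continue  # target is behind the ray origin
        perp = np.linalg.norm(to_target - along * ray_dir)
        if perp <= radius and along < best_dist:
            best, best_dist = t, along
    return best
```

The design idea this sketch tries to capture is that magnifying both the visual and the motor space inside the lens enlarges the effective footprint of small targets, which is what allows a comparatively coarse ray to select them reliably in dense scenes.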


Notes

  1. With the 1:3 ratio, a 30 cm motion of the clicker allows users to move the lens from its initial position (30 cm in front of the scene) to the position of the farthest objects (90 cm from the initial position of the lens); a worked sketch of this mapping follows these notes.

  2. According to the survey [23], NASA-TLX and Raw TLX perform equally well.
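
As a quick numeric check of footnote 1, the snippet below applies a simple linear 1:3 control-display gain to the clicker motion; the function name and the assumption of a strictly linear mapping are illustrative, not details taken from the paper.

```python
def lens_travel_cm(clicker_motion_cm: float, gain: float = 3.0) -> float:
    """Distance travelled by the lens for a given clicker motion, with a 1:3 gain."""
    return gain * clicker_motion_cm

# Footnote 1: a 30 cm clicker motion covers the 90 cm from the lens's initial
# position (30 cm in front of the scene) to the farthest objects.
assert lens_travel_cm(30.0) == 90.0
```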

References

  1. Agarwal, B., Stuerzlinger, W.: Widgetlens: a system for adaptive content magnification of widgets. In: Proceedings of the 27th International BCS Human Computer Interaction Conference. BCS-HCI 2013, Swindon, GBR (2013)


  2. Bacim, F., Kopper, R., Bowman, D.A.: Design and evaluation of 3D selection techniques based on progressive refinement. Int. J. Hum.-Comput. Stud. 71(7–8), 785–802 (2013). https://doi.org/10.1016/j.ijhcs.2013.03.003


  3. Balakrishnan, R., Kurtenbach, G.: Exploring bimanual camera control and object manipulation in 3D graphics interfaces. In: Proceedings of CHI 1999, pp. 56–62. New York, NY, USA (1999). https://doi.org/10.1145/302979.302991

  4. Baloup, M., Pietrzak, T., Casiez, G.: RayCursor: a 3D pointing facilitation technique based on raycasting. In: Proceedings of CHI 2019, pp. 1–12. New York, NY, USA (2019). https://doi.org/10.1145/3290605.3300331

  5. Bane, R., Hollerer, T.: Interactive tools for virtual x-ray vision in mobile augmented reality. In: Proceedings of ISMAR 2004, pp. 231–239. USA (2004). https://doi.org/10.1109/ISMAR.2004.36

  6. Banerjee, A., Burstyn, J., Girouard, A., Vertegaal, R.: Pointable: an in-air pointing technique to manipulate out-of-reach targets on tabletops. In: Proceedings of ITS 2011, pp. 11–20. NY, USA (2011). https://doi.org/10.1145/2076354.2076357

  7. Bier, E.A., Stone, M.C., Pier, K., Buxton, W., DeRose, T.D.: Toolglass and magic lenses: the see-through interface. In: Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, pp. 73–80. SIGGRAPH 1993, New York, NY, USA (1993). https://doi.org/10.1145/166117.166126

  8. Blanch, R., Ortega, M.: Benchmarking pointing techniques with distractors: adding a density factor to fitts’ pointing paradigm. In: Proceedings of CHI 2011, pp. 1629–1638 (2011). https://doi.org/10.1145/1978942.1979180

  9. Brown, L.D., Hua, H.: Magic lenses for augmented virtual environments. IEEE Comput. Graph. Appl. 26(4), 64–73 (2006). https://doi.org/10.1109/MCG.2006.84

  10. Brown, L.D., Hua, H., Gao, C.: A widget framework for augmented interaction in Scape. In: Proceedings of UIST 2003, pp. 1–10. New York, NY, USA (2003). https://doi.org/10.1145/964696.964697

  11. Cashion, J., Wingrave, C., LaViola, J.J., Jr.: Dense and dynamic 3D selection for game-based virtual environments. IEEE TVCG 18(4), 634–642 (2012). https://doi.org/10.1109/TVCG.2012.40


  12. Chapuis, O., Dragicevic, P.: Effects of motor scale, visual scale, and quantization on small target acquisition difficulty. ACM Trans. Comput.-Hum. Interact. 18(3), 1–32 (2011). https://doi.org/10.1145/1993060.1993063

  13. Cockburn, A., Karlson, A., Bederson, B.B.: A review of overview+detail, zooming, and focus+context interfaces. ACM Comput. Surv. 41(1) (2009). https://doi.org/10.1145/1456650.1456652

  14. Dedual, N.J., Oda, O., Feiner, S.K.: Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display. In: Proceedings of ISMAR 2011, pp. 231–232. USA (2011). https://doi.org/10.1109/ISMAR.2011.6092391

  15. Fuhrmann, A., Gröller, E.: Real-time techniques for 3D flow visualization. In: Proceedings of VIS 1998, pp. 305–312. Washington, DC, USA (1998). https://doi.org/10.5555/288216.288296

  16. Gasteiger, R., Neugebauer, M., Beuing, O., Preim, B.: The FlowLens: a focus-and-context visualization approach for exploration of blood flow in cerebral aneurysms. IEEE TVCG, 2183–2192 (2011). https://doi.org/10.1109/TVCG.2011.243

  17. Grossman, T., Balakrishnan, R.: The bubble cursor: enhancing target acquisition by dynamic resizing of the cursor’s activation area. In: Proceedings of CHI 2005, pp. 281–290. NY, USA (2005). https://doi.org/10.1145/1054972.1055012

  18. Grossman, T., Balakrishnan, R.: The design and evaluation of selection techniques for 3D volumetric displays. In: Proceedings of UIST 2006, pp. 3–12. New York, NY, USA (2006). https://doi.org/10.1145/1166253.1166257

  19. Guiard, Y.: Asymmetric division of labor in human skilled bimanual action. J. Motor Behav. 19(4), 486–517 (1987). https://doi.org/10.1080/00222895.1987.10735426. PMID: 15136274

  20. Guillon, M., Leitner, F., Nigay, L.: Investigating visual feedforward for target expansion techniques. In: Proceedings of CHI 2015, pp. 2777–2786. New York, NY, USA (2015). https://doi.org/10.1145/2702123.2702375

  21. Gutwin, C.: Improving focus targeting in interactive fisheye views. In: Proceedings of CHI 2002, pp. 267–274. New York, NY, USA (2002). https://doi.org/10.1145/503376.503424

  22. de Haan, G., Koutek, M., Post, F.H.: Intenselect: using dynamic object rating for assisting 3D object selection. In: Proceedings of the 11th Eurographics Conference on Virtual Environments, pp. 201–209. EGVE 2005, Goslar, DEU (2005). https://doi.org/10.5555/2385984.2386013

  23. Hart, S.G.: NASA-task load index (NASA-TLX); 20 years later. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 50, pp. 904–908. Sage Publications, Los Angeles, CA (2006). https://doi.org/10.1177/154193120605000909

  24. Kabbash, P., Buxton, W., Sellen, A.: Two-handed input in a compound task. In: Proceedings of CHI 1994, pp. 417–423. New York, NY, USA (1994). https://doi.org/10.1145/191666.191808

  25. Käser, D.P., Agrawala, M., Pauly, M.: Fingerglass: efficient multiscale interaction on multitouch screens. In: Proceedings of CHI 2011, pp. 1601–1610. New York, NY, USA (2011). https://doi.org/10.1145/1978942.1979175

  26. Kluge, S., Gladisch, S., Freiherr von Lukas, U., Staadt, O., Tominski, C.: Virtual lenses as embodied tools for immersive analytics. In: GI VR/AR Workshop (2020). https://doi.org/10.18420/vrar2020_8

  27. Kopper, R., Bacim, F., Bowman, D.A.: Rapid and accurate 3D selection by progressive refinement. In: Proceedings of 3DUI 2011, pp. 67–74. USA (2011). https://doi.org/10.5555/2013881.2014213

  28. Laukkanen, J., Isokoski, P., Räihä, K.J.: The cone and the lazy bubble: two efficient alternatives between the point cursor and the bubble cursor. In: Proceedings of CHI 2008, pp. 309–312. New York, NY, USA (2008). https://doi.org/10.1145/1357054.1357107

  29. Lee, J.J., Park, J.M.: 3D mirrored object selection for occluded objects in virtual environments. IEEE Access 8, 200259–200274 (2020). https://doi.org/10.1109/ACCESS.2020.3035376


  30. Looser, J., Billinghurst, M., Cockburn, A.: Through the looking glass: the use of lenses as an interface tool for augmented reality interfaces. In: Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, pp. 204–211. GRAPHITE 2004, New York, NY, USA (2004). https://doi.org/10.1145/988834.988870

  31. Looser, J., Billinghurst, M., Grasset, R., Cockburn, A.: An evaluation of virtual lenses for object selection in augmented reality. In: Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia, pp. 203–210. GRAPHITE 2007, New York, NY, USA (2007). https://doi.org/10.1145/1321261.1321297

  32. Looser, J., Grasset, R., Billinghurst, M.: A 3D flexible and tangible magic lens in augmented reality. In: Proceedings of ISMAR 2007, pp. 1–4. USA (2007). https://doi.org/10.1109/ISMAR.2007.4538825

  33. Lu, Y., Yu, C., Shi, Y.: Investigating bubble mechanism for ray-casting to improve 3D target acquisition in virtual reality. In: Conference on VR and 3D User Interfaces, pp. 35–43. USA (2020). https://doi.org/10.1109/VR46266.2020.00021

  34. Mendez, E., Kalkofen, D., Schmalstieg, D.: Interactive context-driven visualization tools for augmented reality. In: Proceedings of ISMAR 2006, pp. 209–218 (2006). https://doi.org/10.1109/ISMAR.2006.297816

  35. Montano, R., Nguyen, C., Kazi, R., Subramanian, S., DiVerdi, S., Martinez Plasencia, D.: Slicing-volume: hybrid 3D/2D multi-target selection technique for dense virtual environments. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), pp. 53–62 (2020). https://doi.org/10.1109/VR46266.2020.1581198507712

  36. Mota, R.C.R., Rocha, A., Silva, J.D., Alim, U., Sharlin, E.: 3De interactive lenses for visualization in virtual environments. In: 2018 IEEE Scientific Visualization Conference (Scientific Visualization), pp. 21–25 (2018). https://doi.org/10.1109/SciVis.2018.8823618

  37. Mott, M.E., Wobbrock, J.O.: Beating the bubble: using kinematic triggering in the bubble lens for acquiring small, dense targets. In: Proceedings of CHI 2014, pp. 733–742. NY, USA (2014). https://doi.org/10.1145/2556288.2557410

  38. Olwal, A., Feiner, S.: The flexible pointer: An interaction technique for augmented and virtual reality (2012)


  39. Payne, A.R., Plimmer, B., McDaid, A., Luxton-Reilly, A., Davies, T.C.: Expansion cursor: a zoom lens that can be voluntarily activated by the user at every individual click. In: Proceedings of the 28th Australian Conference on Computer-Human Interaction, pp. 81–90. OzCHI 2016, New York, NY, USA (2016). https://doi.org/10.1145/3010915.3010942

  40. Pietriga, E., Appert, C.: Sigma lenses: focus-context transitions combining space, time and translucence. In: Proceedings of CHI 2008, pp. 1343–1352. New York, NY, USA (2008). https://doi.org/10.1145/1357054.1357264

  41. Pindat, C., Pietriga, E., Chapuis, O., Puech, C.: Drilling into complex 3D models with gimlenses. In: Proceedings of the 19th ACM Symposium on Virtual Reality Software and Technology, pp. 223–230. VRST 2013, New York, NY, USA (2013). https://doi.org/10.1145/2503713.2503714

  42. Plasson, C., Cunin, D., Laurillau, Y., Nigay, L.: Tabletop AR with HMD and tablet: a comparative study for 3D selection. In: Proceedings of ISS 2019, pp. 409–414 (2019). https://doi.org/10.1145/3343055.3360760

  43. Plasson, C., Cunin, D., Laurillau, Y., Nigay, L.: 3D tabletop AR: a comparison of mid-air, touch and touch+mid-air interaction. In: Proceedings of AVI 2020 (2020). https://doi.org/10.1145/3399715.3399836

  44. Ramos, G., Cockburn, A., Balakrishnan, R., Beaudouin-Lafon, M.: Pointing lenses: facilitating stylus input through visual- and motor-space magnification. In: Proceedings of CHI 2007, pp. 757–766. New York, NY, USA (2007). https://doi.org/10.1145/1240624.1240741

  45. Ramos Mota, R.C., Cartwright, S., Sharlin, E., Hamdi, H., Costa Sousa, M., Chen, Z.: Exploring immersive interfaces for well placement optimization in reservoir models. In: Proceedings of SUI 2016, pp. 121–130 (2016). https://doi.org/10.1145/2983310.2985762

  46. Reipschläger, P., Dachselt, R.: DesignAR: immersive 3D-modeling combining augmented reality with interactive displays. In: Proceedings of ISS 2019, pp. 29–41. New York, NY, USA (2019). https://doi.org/10.1145/3343055.3359718

  47. Ro, H., et al.: A dynamic depth-variable ray-casting interface for object manipulation in AR environments. In: 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2873–2878 (2017). https://doi.org/10.1109/SMC.2017.8123063

  48. Schmalstieg, D., Höllerer, T.: Augmented reality: principles and practice. In: 2017 IEEE Virtual Reality (VR), pp. 425–426 (2017). https://doi.org/10.1109/VR.2017.7892358

  49. Spindler, M., Dachselt, R.: PaperLens: advanced magic lens interaction above the tabletop. In: Proceedings of ITS 2009. New York, NY, USA (2009). https://doi.org/10.1145/1731903.1731948

  50. Spindler, M., Tominski, C., Schumann, H., Dachselt, R.: Tangible views for information visualization. In: Proceedings of ITS 2010, pp. 157–166. New York, NY, USA (2010). https://doi.org/10.1145/1936652.1936684

  51. Stoev, S., Schmalstieg, D., Straßer, W.: Two-handed through-the-lens techniques for navigation in virtual environments (2001). https://doi.org/10.2312/EGVE/EGVE01/051-060

  52. Stoev, S., Schmalstieg, D., Straßer, W.: The through-the-lens metaphor: taxonomy and application. In: Proceedings of IEEE Virtual Reality 2002, pp. 285–286 (2002). https://doi.org/10.1109/VR.2002.996541

  53. Tominski, C., Gladisch, S., Kister, U., Dachselt, R., Schumann, H.: Interactive lenses for visualization: an extended survey. Comput. Graph. Forum 36(6), 173–200 (2017). https://doi.org/10.1111/cgf.12871


  54. Tong, X., Li, C., Shen, H.W.: Glyphlens: view-dependent occlusion management in the interactive glyph visualization. IEEE TVCG 23(1), 891–900 (2017). https://doi.org/10.1109/TVCG.2016.2599049


  55. Traoré, M., Hurter, C., Telea, A.: Interactive obstruction-free lensing for volumetric data visualization. IEEE TVCG 25(1), 1029–1039 (2019). https://doi.org/10.1109/TVCG.2018.2864690


  56. Vanacken, L., Grossman, T., Coninx, K.: Exploring the effects of environment density and target visibility on object selection in 3D virtual environments. In: Proceedings of 3DUI 2007 (2007). https://doi.org/10.1109/3DUI.2007.340783

  57. Vanacken, L., Grossman, T., Coninx, K.: Multimodal selection techniques for dense and occluded 3D virtual environments. Int. J. Hum.-Comput. Stud. 67(3), 237–255 (2009). https://doi.org/10.1016/j.ijhcs.2008.09.001

  58. Vickers, D.L.: Sorcerer's Apprentice: Head-Mounted Display and Wand. Ph.D. thesis, AAI7310165 (1972)


  59. Viega, J., Conway, M.J., Williams, G., Pausch, R.: 3D magic lenses. In: Proceedings of UIST 1996, pp. 51–58. New York, NY, USA (1996). https://doi.org/10.1145/237091.237098

  60. Vogel, D., Baudisch, P.: Shift: a technique for operating pen-based interfaces using touch. In: Proceedings of CHI 2007, pp. 657–666. New York, NY, USA (2007). https://doi.org/10.1145/1240624.1240727

  61. Wobbrock, J.O., Findlater, L., Gergle, D., Higgins, J.J.: The aligned rank transform for nonparametric factorial analyses using only ANOVA procedures. In: Proceedings of CHI 2011, pp. 143–146 (2011). https://doi.org/10.1145/1978942.1978963



Acknowledgement

We gratefully acknowledge the support of the AP2 project ANR-15-CE23-0001.

Author information


Corresponding authors

Correspondence to Carole Plasson, Dominique Cunin, Yann Laurillau or Laurence Nigay.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 10012 KB)


Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper


Cite this paper

Plasson, C., Cunin, D., Laurillau, Y., Nigay, L. (2021). A Lens-Based Extension of Raycasting for Accurate Selection in Dense 3D Environments. In: Ardito, C., et al. Human-Computer Interaction – INTERACT 2021. INTERACT 2021. Lecture Notes in Computer Science, vol. 12935. Springer, Cham. https://doi.org/10.1007/978-3-030-85610-6_28

  • DOI: https://doi.org/10.1007/978-3-030-85610-6_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85609-0

  • Online ISBN: 978-3-030-85610-6

  • eBook Packages: Computer Science, Computer Science (R0)
