Wand: \(360^{\circ }\) video projection mapping using a \(360^{\circ }\) camera | Virtual Reality


  • Original Article

Abstract

In a surrounding projection-based display environment (e.g., a dome theater), viewers can enjoy a \(360^{\circ }\) video with a strong sense of immersion. Building such an immersive environment requires two sophisticated steps. First, to generate a single seamless screen, the multiple projectors constituting the surrounding display must be carefully registered to the surface. Second, the \(360^{\circ }\) video must be mapped to the projection area by considering the display surface geometry and a sweet spot (i.e., a reference viewing position) so that viewers perceive the correct perspective. In this study, Wand, a novel system that uses a consumer \(360^{\circ }\) spherical camera as a calibration device, is proposed to solve these two issues efficiently. Wand first establishes correspondences between the \(360^{\circ }\) camera and the projectors using structured light patterns, and then filters out outliers using heuristic criteria. Next, by assuming that the camera is positioned at the sweet spot, Wand solves the geometric registration of the projectors by formulating it as a simple 2D grid mesh parameterization with the correspondence constraints. Consequently, each projector mesh is registered directly in spherical coordinates, allowing each projector to easily render a perspective-correct view from a \(360^{\circ }\) video. We applied Wand to various environments of different dimensions and shapes. The results demonstrate that our method can be used to build seamless, immersive displays that provide correct perspectives at the sweet spot.
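The first step summarized above, establishing projector-camera correspondences with structured light, is commonly implemented with binary Gray-code stripe patterns. The following sketch illustrates that general technique only; it is not the authors' implementation, and the function names and the idealized assumption that the camera observes clean binary stripes are hypothetical:

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """Generate vertical Gray-code stripe patterns for one projector axis.

    Returns an (n_bits, width) array of 0/1 stripe values; each pattern
    would be displayed full-screen and photographed by the camera.
    """
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)  # binary-reflected Gray code per column
    # Extract bits, most significant first, one pattern per bit plane.
    bits = (gray[None, :] >> np.arange(n_bits - 1, -1, -1)[:, None]) & 1
    return bits

def decode_gray(captured_bits):
    """Decode per-pixel captured bits (n_bits, H, W) to projector columns."""
    n_bits = captured_bits.shape[0]
    # Pack the bit planes back into per-pixel Gray-code values.
    gray = np.zeros(captured_bits.shape[1:], dtype=np.int64)
    for b in range(n_bits):
        gray = (gray << 1) | captured_bits[b]
    # Convert Gray code back to plain binary column indices.
    binary = gray.copy()
    shift = 1
    while (gray >> shift).any():
        binary ^= gray >> shift
        shift += 1
    return binary
```

In practice a second set of horizontal stripes recovers the projector row as well, and robust decoding compares each pattern against its photographic inverse before the heuristic outlier filtering the abstract mentions.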


Figures 1–11 (images omitted)


Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.

Notes

  1. A sweet spot is a reference viewing position where viewers can experience the maximum sense of immersion. It is usually located in the center of a space, but it can be moved according to the creator’s intentions.
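Because each projector mesh vertex is ultimately registered in spherical coordinates relative to the sweet spot, sampling the \(360^{\circ }\) video reduces to an equirectangular lookup. A minimal sketch, assuming a standard equirectangular frame layout; the function name is illustrative and not from the paper:

```python
import numpy as np

def spherical_to_equirect(theta, phi, width, height):
    """Map a spherical direction seen from the sweet spot to pixel
    coordinates in an equirectangular 360-degree video frame.

    theta: azimuth in [-pi, pi); phi: polar angle in [0, pi],
    with phi = 0 at the zenith (top row of the frame).
    """
    u = (theta / (2 * np.pi) + 0.5) * width  # azimuth -> column
    v = (phi / np.pi) * height               # polar angle -> row
    return u, v
```

For example, the direction straight ahead of the viewer (theta = 0, phi = pi/2) lands at the horizontal and vertical center of the frame; per-vertex texture coordinates computed this way let each projector render its portion of the video directly.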


Funding

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1C1C1014153), and in part by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156360) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).

Author information


Corresponding author

Correspondence to Jungjin Lee.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Lee, J. Wand: \(360^{\circ }\) video projection mapping using a \(360^{\circ }\) camera. Virtual Reality 27, 2015–2027 (2023). https://doi.org/10.1007/s10055-023-00791-2

