Abstract
In a surrounding projection-based display environment (e.g., a dome theater), viewers can enjoy a \(360^{\circ }\) video with a strong sense of immersion. Building such an immersive environment requires two sophisticated steps. First, to form a single seamless screen, the multiple projectors constituting the surrounding display must be carefully registered to the display surface. Second, the \(360^{\circ }\) video must be mapped to the projection area in a way that accounts for the display surface geometry and a sweet spot (i.e., a reference viewing position) so that viewers perceive the correct perspective. In this study, Wand, a novel system that uses a consumer \(360^{\circ }\) spherical camera as a calibration device, is proposed to solve these two problems efficiently. Wand first establishes correspondences between the \(360^{\circ }\) camera and the projectors using structured light patterns, and then filters out outliers using heuristic criteria. Next, assuming the camera is positioned at the sweet spot, Wand solves the geometric registration of the projectors by formulating it as a simple 2D grid mesh parameterization subject to the correspondence constraints. As a result, each projector mesh is registered directly in spherical coordinates, allowing each projector to render a perspective-correct view from a \(360^{\circ }\) video. We applied Wand to various environments of different dimensions and shapes. The results demonstrate that our method successfully builds seamless, immersive displays and provides correct perspectives at the sweet spot.
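The two building blocks of the first step, decoding structured light patterns into projector-camera correspondences and interpreting each camera pixel as a viewing direction from the sweet spot, can be sketched as follows. This is a minimal illustration only: it assumes Gray-code patterns and an equirectangular camera image, and the function names and threshold scheme are illustrative, not taken from the paper.

```python
import numpy as np

def equirect_to_spherical(u, v, width, height):
    """Map an equirectangular pixel (u, v) to spherical angles.

    Assuming the 360-degree camera sits at the sweet spot, each camera
    pixel corresponds directly to a viewing direction: longitude theta
    in [-pi, pi) and latitude phi in [-pi/2, pi/2].
    """
    theta = (u / width) * 2.0 * np.pi - np.pi   # longitude
    phi = np.pi / 2.0 - (v / height) * np.pi    # latitude
    return theta, phi

def decode_gray_code(captures, thresholds):
    """Decode Gray-code captures into per-pixel projector coordinates.

    `captures` is a (num_bits, H, W) stack of camera images, one per
    pattern; `thresholds` is an (H, W) per-pixel threshold (e.g., the
    mean of all-white and all-black reference frames). Returns the
    decoded integer projector coordinate for every camera pixel.
    """
    bits = (captures > thresholds).astype(np.uint32)  # (num_bits, H, W)
    # Gray code -> binary: b[0] = g[0]; b[i] = b[i-1] XOR g[i]
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = np.bitwise_xor(binary[i - 1], bits[i])
    # Pack the binary bits (MSB first) into one integer per pixel.
    value = np.zeros(bits.shape[1:], dtype=np.uint32)
    for i in range(bits.shape[0]):
        value = (value << 1) | binary[i]
    return value
```

Each decoded (projector pixel, camera pixel) pair, with the camera pixel converted to \((\theta, \phi )\), would then serve as a constraint in the 2D grid mesh parameterization that registers the projector mesh in spherical coordinates.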











Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.
Notes
A sweet spot is a reference viewing position where viewers can experience the maximum sense of immersion. It is usually located in the center of a space, but it can be moved according to the creator’s intentions.
Funding
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1C1C1014153), and in part by the MSIT (Ministry of Science and ICT), Korea, under the Innovative Human Resource Development for Local Intellectualization support program (IITP-2023-RS-2022-00156360) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Lee, J. Wand: \(360^{\circ }\) video projection mapping using a \(360^{\circ }\) camera. Virtual Reality 27, 2015–2027 (2023). https://doi.org/10.1007/s10055-023-00791-2