Virtual mirror rendering with stationary RGB-D cameras and stored 3-D background
- PMID: 23782808
- DOI: 10.1109/TIP.2013.2268941
Abstract
Mirrors are indispensable objects in our lives. The capability of simulating a mirror on a computer display, augmented with virtual scenes and objects, opens the door to many interesting and useful applications, from fashion design to medical interventions. Realistic simulation of a mirror is challenging, as it requires accurate viewpoint tracking and rendering, wide-angle viewing of the environment, and real-time performance to provide immediate visual feedback. In this paper, we propose a virtual mirror rendering system using a network of commodity structured-light RGB-D cameras. The depth information provided by the RGB-D cameras can be used to track the viewpoint and render the scene from different perspectives. Missing and erroneous depth measurements are common problems with structured-light cameras. A novel depth denoising and completion algorithm is proposed in which the noise removal and interpolation procedures are guided by the foreground/background label at each pixel. The foreground/background label is estimated using a probabilistic graphical model that considers color, depth, background modeling, depth noise modeling, and spatial constraints. The wide viewing angle of the mirror system is realized by combining the dynamic scene, captured by the static camera network, with a 3-D background model created off-line from a color-depth sequence captured by a movable RGB-D camera. To ensure a real-time response, a scalable client-and-server architecture is used, with the 3-D point cloud processing, the viewpoint estimation, and the mirror image rendering all done on the client side. The mirror image and the viewpoint estimate are then sent to the server for final mirror view synthesis and viewpoint refinement. Experimental results are presented to show the accuracy and effectiveness of each component and of the entire system.
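To make the label-guided completion idea concrete: once each pixel carries a foreground/background label, missing depth can be interpolated using only same-label neighbors, so foreground and background depths are never mixed across object boundaries. The sketch below is a minimal, simplified stand-in for that step, not the paper's algorithm; it assumes the labels (which the paper infers with a probabilistic graphical model) are already available, and all function names and parameters are illustrative.

```python
import numpy as np

def label_guided_fill(depth, labels, ksize=5, iters=3):
    """Fill missing depth (zeros) from same-label neighbors only.

    depth:  (H, W) float array, 0 where the sensor returned no reading.
    labels: (H, W) int array, e.g. 0 = background, 1 = foreground,
            assumed available at every pixel (including missing ones).
    """
    h, w = depth.shape
    r = ksize // 2
    out = depth.copy()
    for _ in range(iters):
        filled = out.copy()
        ys, xs = np.where(out == 0)          # pixels still missing depth
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            patch = out[y0:y1, x0:x1]
            # Only valid neighbors that share this pixel's label vote.
            same = (labels[y0:y1, x0:x1] == labels[y, x]) & (patch > 0)
            if same.any():
                filled[y, x] = np.median(patch[same])
        out = filled                          # Jacobi-style iteration
    return out
```

Using the median of same-label neighbors keeps the fill robust to residual depth outliers; iterating lets valid depth propagate into larger holes one ring at a time.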
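The core rendering geometry can likewise be illustrated: a planar mirror view is synthesized by reflecting the tracked eye position across the mirror plane and projecting the fused point cloud (dynamic scene plus stored background) toward that virtual viewpoint. The NumPy sketch below shows that geometry only; it is not the paper's implementation, and it assumes the virtual camera's extrinsics R, t and intrinsics K are already known.

```python
import numpy as np

def reflect_viewpoint(eye, p0, n):
    """Reflect the tracked eye position across the mirror plane.

    p0 is any point on the mirror plane and n its normal; the reflected
    point is the virtual viewpoint from which the mirror image is drawn.
    """
    n = n / np.linalg.norm(n)
    return eye - 2.0 * np.dot(eye - p0, n) * n

def splat_points(points, colors, R, t, K, w, h):
    """Project a colored point cloud into a w-by-h image with a z-buffer.

    points: (N, 3) world coordinates of the fused cloud.
    colors: (N, 3) uint8 per-point colors.
    R, t:   extrinsics of the virtual camera at the reflected eye.
    K:      3x3 pinhole intrinsics of the virtual camera.
    """
    cam = points @ R.T + t               # world -> camera coordinates
    z = cam[:, 2]
    keep = z > 1e-6                      # discard points behind the camera
    cam, z, col = cam[keep], z[keep], colors[keep]
    uv = cam @ K.T                       # pinhole projection
    u = (uv[:, 0] / z).astype(int)
    v = (uv[:, 1] / z).astype(int)
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z, col = u[ok], v[ok], z[ok], col[ok]
    img = np.zeros((h, w, 3), np.uint8)
    zbuf = np.full((h, w), np.inf)
    for ui, vi, zi, ci in zip(u, v, z, col):
        if zi < zbuf[vi, ui]:            # keep the nearest point per pixel
            zbuf[vi, ui] = zi
            img[vi, ui] = ci
    # Depending on the chosen camera orientation, a horizontal flip
    # (img[:, ::-1]) may be needed to match the mirror's handedness.
    return img
```

In the paper's client-server split, this kind of per-client rendering runs locally, and only the rendered mirror image and viewpoint estimate are sent to the server for final view synthesis and refinement.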
Similar articles
- A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks. Sensors (Basel). 2018 Jan 15;18(1):235. doi: 10.3390/s18010235. PMID: 29342968. Free PMC article.
- Temporal and Spatial Denoising of Depth Maps. Sensors (Basel). 2015 Jul 29;15(8):18506-25. doi: 10.3390/s150818506. PMID: 26230696. Free PMC article.
- Depth-color fusion strategy for 3-D scene modeling with Kinect. IEEE Trans Cybern. 2013 Dec;43(6):1560-71. doi: 10.1109/TCYB.2013.2271112. PMID: 24273141.
- Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling. Sensors (Basel). 2016 Sep 27;16(10):1589. doi: 10.3390/s16101589. PMID: 27690028. Free PMC article.
- A proxy method for real-time 3-DOF haptic rendering of streaming point cloud data. IEEE Trans Haptics. 2013 Jul-Sep;6(3):257-67. doi: 10.1109/TOH.2013.20. PMID: 24808323.
Cited by
- Relative Pose Based Redundancy Removal: Collaborative RGB-D Data Transmission in Mobile Visual Sensor Networks. Sensors (Basel). 2018 Jul 26;18(8):2430. doi: 10.3390/s18082430. PMID: 30049979. Free PMC article.
- A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks. Sensors (Basel). 2018 Jan 15;18(1):235. doi: 10.3390/s18010235. PMID: 29342968. Free PMC article.
- A comparative study of registration methods for RGB-D video of static scenes. Sensors (Basel). 2014 May 15;14(5):8547-76. doi: 10.3390/s140508547. PMID: 24834909. Free PMC article.