
Content-Based Pseudoscopic View Detection

Journal of Signal Processing Systems

Abstract

Stereoscopic images are generated from a pair of images (i.e., left and right images). To produce 3-D perception from the left and right images, it must be guaranteed that each image is perceived by the corresponding eye only. However, depth perception becomes distorted when the left and right eye views are interchanged, which is known as the pseudoscopic problem. In this paper, we propose a novel method for detecting the pseudoscopic view using disparity comparison in stereo images. Our approach originates from the idea that the disparities in a scene fall into three classes (zero, positive, and negative disparity) and that the foreground is usually located in front of the background. The proposed pseudoscopic view detection system consists of three sequential stages: 1) foreground/background segmentation, 2) feature point extraction, and 3) disparity comparison. We first segment the given image into two layers (i.e., foreground and background). The feature points in each layer are then extracted and matched to estimate the disparity characteristics of that layer. Finally, the presence of a pseudoscopic view is determined by using the disparity calibration model (DCM) presented in this paper and comparing the sign and magnitude of the average disparity of the selected matching points in each layer. Experimental results on various stereoscopic video sequences show that the proposed method is a useful and efficient approach for detecting pseudoscopic views in stereo images.
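
The layer-wise disparity comparison described above can be illustrated with a short sketch. The snippet below is a minimal illustration of the idea only, not the authors' implementation: it assumes OpenCV is available, that a binary foreground mask has already been produced by some prior segmentation step, that ORB keypoints stand in for the paper's feature extraction, and that disparity is measured as x_left - x_right so that nearer points yield larger values.

```python
# Minimal sketch of the layer-wise disparity comparison described in the
# abstract, NOT the authors' implementation.  Assumptions: OpenCV is
# available, a binary foreground mask comes from a prior segmentation step,
# ORB keypoints stand in for the paper's feature extraction, and disparity
# is defined as x_left - x_right so nearer points yield larger values.
import cv2
import numpy as np

def mean_layer_disparity(left_gray, right_gray, layer_mask):
    """Average horizontal disparity of matched keypoints whose left-image
    location falls inside layer_mask (nonzero = layer)."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(left_gray, None)
    kp_r, des_r = orb.detectAndCompute(right_gray, None)
    if des_l is None or des_r is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    disparities = []
    for m in matcher.match(des_l, des_r):
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        # keep roughly epipolar-consistent matches that lie in the layer
        if abs(yl - yr) < 2.0 and layer_mask[int(yl), int(xl)] > 0:
            disparities.append(xl - xr)
    return float(np.mean(disparities)) if disparities else None

def is_pseudoscopic(left_gray, right_gray, fg_mask):
    """Return True when the pair looks swapped: the foreground layer should
    show larger average disparity than the background layer; the opposite
    ordering suggests the left and right views are interchanged."""
    bg_mask = cv2.bitwise_not(fg_mask)
    d_fg = mean_layer_disparity(left_gray, right_gray, fg_mask)
    d_bg = mean_layer_disparity(left_gray, right_gray, bg_mask)
    if d_fg is None or d_bg is None:
        return None  # too few reliable matches to decide
    return d_fg < d_bg
```

If the views are interchanged, the same computation reports the foreground as lying behind the background, which is exactly the inverted ordering this rule flags; the paper's full method additionally applies its disparity calibration model before the comparison.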




Acknowledgements

This research was supported by the MKE (Ministry of Knowledge Economy), Korea, under the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2011-(C1090-1111-0003)).

Author information

Correspondence to Changick Kim.


About this article

Cite this article

Lee, J., Jung, C., Kim, C. et al. Content-Based Pseudoscopic View Detection. J Sign Process Syst 68, 261–271 (2012). https://doi.org/10.1007/s11265-011-0608-8

