
Image-based translucency transfer through correlation analysis over multi-scale spatial color distribution

  • Original Article
  • Published in The Visual Computer 35, 811–822 (2019)

Abstract

This paper introduces an image-based material transfer framework that requires only a single input image and a single reference image, as in ordinary color transfer methods. In contrast to previous material transfer methods, we focus on transferring the appearances of translucent objects. Such transfer poses two challenging problems. First, the appearance of a translucent material is characterized not only by its colors but also by their spatial distribution; traditional color transfer methods can hardly handle translucency because they consider only the colors of the objects. Second, temporal coherence of the transferred results cannot be maintained by these traditional methods, nor by recent neural style transfer methods, as we demonstrate in this paper. To address these problems, we propose a novel image-based material transfer method based on an analysis of spatial color distribution. We focus on “subbands,” which represent multi-scale image structures, and find that the correlation between color distribution and subbands is a key feature for reproducing the appearance of translucent materials. Our method relies on standard principal component analysis, which harmonizes the correlations of the input and reference images to reproduce the translucent appearance. Because it considers the spatial color distribution of the input and reference images, our method can be applied naturally to video sequences in a frame-by-frame manner without any additional pre- or post-processing. Through experimental analyses, we demonstrate that the proposed method applies to a broad variety of translucent materials and that the resulting appearances are perceptually similar to those of the reference images.
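The abstract summarizes the approach only at a high level, so the following is a minimal, illustrative sketch of the general idea rather than the authors' implementation. It assumes a simple difference-of-Gaussians subband decomposition of the luminance channel and plain second-order statistics matching (whiten-then-recolor via PCA of the covariance matrices) over joint color-plus-subband feature vectors; every function name, parameter, and scale choice below is an assumption made for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def subband_features(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Per-pixel features for a float RGB image in [0, 1] with shape (H, W, 3):
    the color channels plus difference-of-Gaussians "subband" responses of the
    luminance channel at several spatial scales (an assumed decomposition)."""
    lum = img.mean(axis=2)
    bands, prev = [], lum
    for sigma in sigmas:
        blurred = gaussian_filter(lum, sigma=sigma)
        bands.append(prev - blurred)  # band-pass structure at this scale
        prev = blurred
    feats = np.concatenate([img] + [b[..., None] for b in bands], axis=2)
    return feats.reshape(-1, feats.shape[2])  # (num_pixels, num_features)


def match_statistics(src, ref, eps=1e-8):
    """Whiten the source feature distribution and re-color it with the reference
    statistics, so the covariances (and hence the color/subband correlations)
    of the two images are matched."""
    mu_s, mu_r = src.mean(axis=0), ref.mean(axis=0)
    cov_s = np.cov((src - mu_s).T) + eps * np.eye(src.shape[1])
    cov_r = np.cov((ref - mu_r).T) + eps * np.eye(ref.shape[1])
    evals_s, evecs_s = np.linalg.eigh(cov_s)  # PCA of source covariance
    evals_r, evecs_r = np.linalg.eigh(cov_r)  # PCA of reference covariance
    whiten = evecs_s @ np.diag(1.0 / np.sqrt(evals_s)) @ evecs_s.T
    recolor = evecs_r @ np.diag(np.sqrt(evals_r)) @ evecs_r.T
    return (src - mu_s) @ whiten @ recolor + mu_r


def transfer(src_img, ref_img):
    """Transfer the reference appearance to the source image (one frame)."""
    h, w, _ = src_img.shape
    out = match_statistics(subband_features(src_img), subband_features(ref_img))
    return np.clip(out[:, :3].reshape(h, w, 3), 0.0, 1.0)  # keep color channels


# Example usage (hypothetical file names), with both images as float RGB in [0, 1]:
#   src = plt.imread("input.png")[..., :3].astype(np.float64)
#   ref = plt.imread("reference.png")[..., :3].astype(np.float64)
#   result = transfer(src, ref)
```

Because this mapping depends only on the global statistics of the two images, the same transfer call can be applied to each video frame independently, which mirrors the frame-by-frame usage described in the abstract; how the actual method constructs its subbands and harmonizes the correlations is detailed in the full paper.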



Acknowledgements

We are grateful to Makoto Okabe, Shin’ya Nishida, Tomoo Yokoyama, and the members of dwango CG research for their generous support and insightful comments. We also thank the anonymous reviewers for their constructive comments.

Funding

This work was supported by JSPS KAKENHI (Grant Number JP15H05924) and by a Grant-in-Aid for JSPS Fellows (Grant Number 16J02280).

Author information


Corresponding author

Correspondence to Hideki Todo.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (zip 34029 KB)


About this article


Cite this article

Todo, H., Yatagawa, T., Sawayama, M. et al. Image-based translucency transfer through correlation analysis over multi-scale spatial color distribution. Vis Comput 35, 811–822 (2019). https://doi.org/10.1007/s00371-019-01676-9

