
Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction

  • Conference paper
  • In: Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Multi-view 3D surface reconstruction using neural implicit representations has made notable progress by modeling geometry and view-dependent radiance within a unified framework. However, the effectiveness of these methods in reconstructing objects with specular or complex surfaces is typically limited by the directional parameterization used in their view-dependent radiance networks. The viewing direction and the reflection direction are the two most commonly used directional parameterizations, and each has its own limitations: conditioning on the viewing direction usually fails to correctly decouple the geometry and appearance of objects with highly specular surfaces, while conditioning on the reflection direction tends to yield overly smooth reconstructions of concave or complex structures. In this paper, we analyze their failure cases in detail and propose a novel hybrid directional parameterization that addresses both limitations in a unified form. Extensive experiments demonstrate that the proposed hybrid directional parameterization consistently delivers satisfactory results when reconstructing objects with a wide variety of materials, geometries, and appearances, whereas other directional parameterizations struggle on certain objects. Moreover, the proposed hybrid directional parameterization is nearly parameter-free and can be effortlessly applied to any existing neural surface reconstruction method.
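For concreteness, the two standard parameterizations can be stated in a few lines: the radiance network is conditioned either on the viewing direction itself or on its reflection about the surface normal, ω_r = 2(n · ω_o)n − ω_o, the form used by Ref-NeRF [28]. The PyTorch sketch below illustrates both, plus a simple interpolation between them; the blending weight `w` is a hypothetical placeholder for illustration only and is not the hybrid rule proposed in the paper.

```python
import torch
import torch.nn.functional as F

def reflection_direction(w_o: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
    """Reflect the outgoing (surface-to-camera) direction w_o about the
    surface normal n:  w_r = 2 (n . w_o) n - w_o  (as in Ref-NeRF [28])."""
    w_o = F.normalize(w_o, dim=-1)
    n = F.normalize(n, dim=-1)
    return 2.0 * (n * w_o).sum(-1, keepdim=True) * n - w_o

def hybrid_direction(w_o: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
    """Hypothetical hybrid input: blend the viewing and reflection directions.
    The weight below (cosine between w_o and n) is an illustrative assumption,
    not the paper's actual formulation."""
    w_o = F.normalize(w_o, dim=-1)
    n = F.normalize(n, dim=-1)
    w_r = reflection_direction(w_o, n)
    w = (n * w_o).sum(-1, keepdim=True).clamp(0.0, 1.0)  # assumed blend weight
    return F.normalize(w * w_r + (1.0 - w) * w_o, dim=-1)

# Toy usage: condition a radiance MLP on the chosen directional input.
if __name__ == "__main__":
    pts_feat = torch.randn(1024, 64)          # per-point geometry features
    normals = torch.randn(1024, 3)            # surface normals from the SDF
    view_dirs = torch.randn(1024, 3)          # surface-to-camera directions
    d = hybrid_direction(view_dirs, normals)  # swap in w_o or w_r to compare
    radiance_net = torch.nn.Sequential(
        torch.nn.Linear(64 + 3, 128), torch.nn.ReLU(), torch.nn.Linear(128, 3)
    )
    rgb = torch.sigmoid(radiance_net(torch.cat([pts_feat, d], dim=-1)))
```

In a NeuS- or VolSDF-style pipeline [29, 35], `d` would simply replace the viewing-direction input of the color network, which is what makes a directional parameterization easy to swap in.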

Z. Jiang and T. Xu contributed equally.

Work done while Zijie Jiang interned at Preferred Networks, Inc.


Notes

  1. If the capacity of the radiance network is sufficiently high, even for a fixed geometry such as a sphere, the image reconstruction loss can converge to zero [40].

References

  1. Aanæs, H., Jensen, R.R., Vogiatzis, G., Tola, E., Dahl, A.B.: Large-scale data for multiple-view stereopsis. IJCV 120, 153–168 (2016)

  2. Cai, B., Huang, J., Jia, R., Lv, C., Fu, H.: NeuDA: neural deformable anchor for high-fidelity implicit surface reconstruction. In: CVPR (2023)

  3. Chen, Z., Zhang, H.: Learning implicit fields for generative shape modeling. In: CVPR (2019)

  4. Community, B.O.: Blender - A 3D Modelling and Rendering Package. Blender Foundation, Stichting Blender Foundation, Amsterdam (2018)

  5. Darmon, F., Bascle, B., Devaux, J., Monasse, P., Aubry, M.: Improving neural implicit surfaces geometry with patch warping. In: CVPR (2022)

  6. Fu, Q., Xu, Q., Ong, Y.S., Tao, W.: Geo-Neus: geometry-consistent neural implicit surfaces learning for multi-view reconstruction. In: NeurIPS (2022)

  7. Furukawa, Y., et al.: Multi-view stereo: a tutorial. Found. Trends® Comput. Graph. Vision 9(1–2), 1–148 (2015)

  8. Ge, W., Hu, T., Zhao, H., Liu, S., Chen, Y.C.: Ref-NeuS: ambiguity-reduced neural implicit surface learning for multi-view reconstruction with reflection. In: ICCV (2023)

  9. Hedman, P., Srinivasan, P.P., Mildenhall, B., Barron, J.T., Debevec, P.: Baking neural radiance fields for real-time view synthesis. In: CVPR (2021)

  10. Kato, H., et al.: Differentiable rendering: a survey. arXiv preprint arXiv:2006.12057 (2020)

  11. Li, T.M., Aittala, M., Durand, F., Lehtinen, J.: Differentiable Monte Carlo ray tracing through edge sampling. ACM Trans. Graph. 37(6), 1–11 (2018)

  12. Li, Z., et al.: Neuralangelo: high-fidelity neural surface reconstruction. In: CVPR (2023)

  13. Liang, R., Chen, H., Li, C., Chen, F., Panneer, S., Vijaykumar, N.: ENVIDR: implicit differentiable renderer with neural environment lighting. arXiv preprint arXiv:2303.13022 (2023)

  14. Liu, Y., et al.: NeRO: neural geometry and BRDF reconstruction of reflective objects from multiview images. ACM Trans. Graph. (2023)

  15. Long, X., Lin, C., Wang, P., Komura, T., Wang, W.: SparseNeuS: fast generalizable neural surface reconstruction from sparse views. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds.) ECCV 2022. LNCS, vol. 13692, pp. 210–227. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19824-3_13

  16. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy networks: learning 3D reconstruction in function space. In: CVPR (2019)

  17. Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: NeRF: representing scenes as neural radiance fields for view synthesis. In: ECCV (2020)

  18. Müller, T., Evans, A., Schied, C., Keller, A.: Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 41(4), 1–15 (2022)

  19. Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.: Occupancy flow: 4D reconstruction by learning particle dynamics. In: ICCV (2019)

  20. Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.: Differentiable volumetric rendering: learning implicit 3D representations without 3D supervision. In: CVPR (2020)

  21. Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.: Differentiable volumetric rendering: learning implicit 3D representations without 3D supervision. In: CVPR (2020)

  22. Oechsle, M., Peng, S., Geiger, A.: UNISURF: unifying neural implicit surfaces and radiance fields for multi-view reconstruction. In: ICCV (2021)

  23. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: learning continuous signed distance functions for shape representation. In: CVPR (2019)

  24. Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., Geiger, A.: Convolutional occupancy networks. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12348, pp. 523–540. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58580-8_31

  25. Ravi, N., et al.: Accelerating 3D deep learning with PyTorch3D. arXiv preprint arXiv:2007.08501 (2020)

  26. Srinivasan, P.P., Deng, B., Zhang, X., Tancik, M., Mildenhall, B., Barron, J.T.: NeRV: neural reflectance and visibility fields for relighting and view synthesis. In: CVPR (2021)

  27. Tulsiani, S., Zhou, T., Efros, A.A., Malik, J.: Multi-view supervision for single-view reconstruction via differentiable ray consistency. In: CVPR (2017)

  28. Verbin, D., Hedman, P., Mildenhall, B., Zickler, T., Barron, J.T., Srinivasan, P.P.: Ref-NeRF: structured view-dependent appearance for neural radiance fields. In: CVPR (2022)

  29. Wang, P., Liu, L., Liu, Y., Theobalt, C., Komura, T., Wang, W.: NeuS: learning neural implicit surfaces by volume rendering for multi-view reconstruction. In: NeurIPS (2021)

  30. Wang, Y., Han, Q., Habermann, M., Daniilidis, K., Theobalt, C., Liu, L.: NeuS2: fast learning of neural implicit surfaces for multi-view reconstruction. In: ICCV (2023)

  31. Wu, T., et al.: Voxurf: voxel-based efficient and accurate neural surface reconstruction. In: ICLR (2023)

  32. Xie, Y., et al.: Neural fields in visual computing and beyond. Comput. Graph. Forum 41(2), 641–676 (2022)

  33. Yan, X., Yang, J., Yumer, E., Guo, Y., Lee, H.: Perspective transformer nets: learning single-view 3D object reconstruction without 3D supervision. In: NeurIPS (2016)

  34. Yao, Y., et al.: NeILF: neural incident light field for material and lighting estimation. In: ECCV (2022)

  35. Yariv, L., Gu, J., Kasten, Y., Lipman, Y.: Volume rendering of neural implicit surfaces. In: NeurIPS (2021)

  36. Yariv, L., et al.: Multiview neural surface reconstruction by disentangling geometry and appearance. In: NeurIPS (2020)

  37. Yu, A., Ye, V., Tancik, M., Kanazawa, A.: pixelNeRF: neural radiance fields from one or few images. In: CVPR (2021)

  38. Yu, Z., Peng, S., Niemeyer, M., Sattler, T., Geiger, A.: MonoSDF: exploring monocular geometric cues for neural implicit surface reconstruction. In: NeurIPS (2022)

  39. Zhang, J., et al.: NeILF++: inter-reflectable light fields for geometry and material estimation (2023)

  40. Zhang, K., Riegler, G., Snavely, N., Koltun, V.: NeRF++: analyzing and improving neural radiance fields. arXiv preprint arXiv:2010.07492 (2020)

  41. Zhang, X., Srinivasan, P.P., Deng, B., Debevec, P., Freeman, W.T., Barron, J.T.: NeRFactor: neural factorization of shape and reflectance under an unknown illumination. ACM Trans. Graph. 40(6), 1–18 (2021)


Author information


Corresponding author

Correspondence to Zijie Jiang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 4918 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Jiang, Z., Xu, T., Kato, H. (2025). Rethinking Directional Parameterization in Neural Implicit Surface Reconstruction. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15132. Springer, Cham. https://doi.org/10.1007/978-3-031-72904-1_8


  • DOI: https://doi.org/10.1007/978-3-031-72904-1_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-72903-4

  • Online ISBN: 978-3-031-72904-1

  • eBook Packages: Computer Science, Computer Science (R0)
