
MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation

  • Conference paper
  • First Online:
Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

3D Gaussian Splatting delivers excellent quality and speed in novel view synthesis, but the huge file size of the 3D Gaussians poses challenges for transmission and storage. Existing works design compact models to replace the large volume of Gaussians and their attributes and rely on intensive training to distill the information; the considerable training time this requires is a major hurdle for practical deployment. To this end, we propose MesonGS, a codec for post-training compression of 3D Gaussians. First, we introduce a measurement criterion that considers both view-dependent and view-independent factors to assess the impact of each Gaussian point on the rendering output, enabling the removal of insignificant points. Next, we lower the entropy of the attributes through two transformations that complement the subsequent entropy coding and improve the file compression rate: we replace rotation quaternions with Euler angles and then apply a region-adaptive hierarchical transform to the key attributes. Finally, we adopt finer-grained quantization to avoid excessive information loss, and we devise a carefully designed fine-tuning scheme to restore quality. Extensive experiments demonstrate that MesonGS significantly reduces the size of 3D Gaussians while preserving competitive quality.
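To make the pipeline above concrete, the sketch below illustrates two of the attribute transformations the abstract mentions: replacing rotation quaternions with Euler angles (three stored channels instead of four) and uniform per-channel quantization ahead of a generic entropy coder. This is a minimal, assumption-laden illustration, not the authors' released implementation; the function names (`quaternion_to_euler`, `quantize`), the ZYX Euler convention, and the use of zlib as a stand-in entropy coder are choices made here purely for exposition.

```python
# Illustrative sketch only (assumed names and conventions, not the paper's code):
# quaternion-to-Euler substitution and per-channel quantization before a
# generic entropy coder, two steps described in the abstract.
import zlib
import numpy as np

def quaternion_to_euler(q: np.ndarray) -> np.ndarray:
    """Convert unit quaternions (w, x, y, z) of shape (N, 4) to Euler angles
    (roll, pitch, yaw) of shape (N, 3), ZYX convention -- one fewer channel
    to store per Gaussian."""
    w, x, y, z = q[:, 0], q[:, 1], q[:, 2], q[:, 3]
    roll = np.arctan2(2.0 * (w * x + y * z), 1.0 - 2.0 * (x * x + y * y))
    pitch = np.arcsin(np.clip(2.0 * (w * y - z * x), -1.0, 1.0))
    yaw = np.arctan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))
    return np.stack([roll, pitch, yaw], axis=-1)

def quantize(values: np.ndarray, n_bits: int = 8):
    """Uniformly quantize each column (channel) to n_bits integer levels so the
    codes compress well under a generic entropy coder such as zlib."""
    lo, hi = values.min(axis=0), values.max(axis=0)
    scale = np.maximum(hi - lo, 1e-12) / (2 ** n_bits - 1)
    dtype = np.uint8 if n_bits <= 8 else np.uint16
    return np.round((values - lo) / scale).astype(dtype), lo, scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q = rng.normal(size=(100_000, 4))
    q /= np.linalg.norm(q, axis=1, keepdims=True)    # unit quaternions
    euler = quaternion_to_euler(q)                   # (N, 3) instead of (N, 4)
    codes, lo, scale = quantize(euler, n_bits=8)     # keep (lo, scale) to decode
    packed = zlib.compress(codes.tobytes())          # stand-in entropy coder
    print(f"raw quaternions: {q.astype(np.float32).nbytes} B, "
          f"quantized + zlib Euler angles: {len(packed)} B")
```

The importance-based pruning and the region-adaptive hierarchical transform described in the abstract are omitted here; they precede the quantization and entropy-coding stages shown above.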



Acknowledgments

This work is supported in part by the National Key Research and Development Project of China (Grant No. 2023YFF0905502) and the Shenzhen Science and Technology Program (Grant No. JCYJ20220818101014030). We thank the anonymous reviewers for their valuable advice and JiangXingAI for sponsoring this research.

Author information


Corresponding author

Correspondence to Zhi Wang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 454 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Xie, S. et al. (2025). MesonGS: Post-training Compression of 3D Gaussians via Efficient Attribute Transformation. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15091. Springer, Cham. https://doi.org/10.1007/978-3-031-73414-4_25


  • DOI: https://doi.org/10.1007/978-3-031-73414-4_25

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73413-7

  • Online ISBN: 978-3-031-73414-4

  • eBook Packages: Computer Science, Computer Science (R0)
