Abstract
Point clouds, the most prevalent representation of 3D data, are inherently disordered, unstructured, and discrete. Feature extraction from point clouds is challenging: objects with similar styles may be misclassified, and uncertain backgrounds or noise can significantly degrade the performance of traditional classification models. To address these challenges, we introduce StyleContrast, a novel contrastive learning algorithm for style fusion. Our approach fuses the styles of point clouds belonging to the same category across different domain datasets at the feature level, thereby serving as a form of data augmentation. By aligning each point cloud with its style-fused counterpart in the feature space, StyleContrast enables the feature extractor to learn style-independent invariant features. Moreover, our method incorporates a category-centric contrastive loss to separate similar objects from different categories. Experimental results demonstrate that StyleContrast achieves superior classification accuracy on ModelNet40, ShapeNetPart, and ScanObjectNN, surpassing existing methods. Ablation experiments further validate the effectiveness of our approach for point cloud feature analysis.
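The alignment step described above can be sketched with a generic InfoNCE-style contrastive loss. This is an illustrative formulation only, not the authors' implementation: the embeddings `z_a` (original point clouds) and `z_b` (their style-fused counterparts) are assumed to come from the paper's feature extractor and style-fusion module, and the temperature value is an arbitrary choice.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss between two feature views.

    z_a, z_b: (N, D) arrays of features for N point clouds and their
    style-fused counterparts. Row i of z_a and row i of z_b form a
    positive pair; all other rows in the batch act as negatives.
    """
    # L2-normalise so dot products become cosine similarities.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Softmax cross-entropy with the diagonal as the positive targets.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimising this loss pulls each point cloud toward its style-fused version and pushes it away from the other samples in the batch, which is what lets the extractor discard style while retaining category-discriminative structure.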
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, R., Own, C.-M. (2023). Enhanced Point Cloud Interpretation via Style Fusion and Contrastive Learning in Advanced 3D Data Analysis. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-44206-3
Online ISBN: 978-3-031-44207-0
eBook Packages: Computer Science, Computer Science (R0)