
GRAN Is Superior to GraphRNN: Node Orderings, Kernel- and Graph Embeddings-Based Metrics for Graph Generators

  • Conference paper
  • First Online:
Machine Learning, Optimization, and Data Science (LOD 2023)

Abstract

A wide variety of generative models for graphs have been proposed and are applied in drug discovery, road network modeling, neural architecture search, and program synthesis. Graph generation poses theoretical challenges, such as the many isomorphic representations of the same graph, which make it difficult to evaluate how well a generative model performs. Which model should be chosen for a given application domain?
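To make the isomorphic-representation problem concrete, the following sketch (our own illustration, not taken from the paper) builds two adjacency matrices of the same small graph under two different node orderings; the matrices differ even though they describe the same graph.

```python
# Illustrative sketch (not from the paper): the same graph admits many
# adjacency-matrix representations, one per node ordering, which is part of
# what makes evaluating (and training) graph generators difficult.
import networkx as nx
import numpy as np

g = nx.cycle_graph(5)  # C5; every node ordering describes the same graph

# Two orderings of the same node set, e.g. breadth-first vs. depth-first.
bfs_order = list(nx.bfs_tree(g, source=0).nodes())
dfs_order = list(nx.dfs_tree(g, source=0).nodes())

A_bfs = nx.to_numpy_array(g, nodelist=bfs_order)
A_dfs = nx.to_numpy_array(g, nodelist=dfs_order)

# Different matrices, identical graph: a sequential generator must either fix
# an ordering (e.g. a BFS ordering as in GraphRNN) or cope with all n! of them.
print(np.array_equal(A_bfs, A_dfs))                  # False for this graph
print(nx.is_isomorphic(nx.from_numpy_array(A_bfs),
                       nx.from_numpy_array(A_dfs)))  # True
```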

We extensively study kernel-based metrics on distributions of graph invariants, as well as manifold-based and kernel-based metrics in graph embedding space; in embedding space, manifold-based metrics outperform kernel-based ones. We use these metrics to compare GraphRNN and GRAN, two well-known generative models for graphs, and to unveil the influence of node orderings. The comparison shows the superiority of GRAN over GraphRNN; furthermore, our proposed adaptation of GraphRNN with a depth-first search node ordering is effective for small graphs.
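As a concrete, hedged example of a kernel-based metric on a graph-invariant distribution, the sketch below computes a squared maximum mean discrepancy (MMD) with a Gaussian kernel over node-degree histograms between a reference set and a generated set of graphs. This metric family follows the common GraphRNN/GRAN evaluation setup, but the kernel, bandwidth, and degree cutoff here are illustrative choices, not the paper's exact configuration.

```python
# Minimal MMD-over-degree-histograms sketch (assumed kernel and parameters).
import numpy as np
import networkx as nx

def degree_histogram(graph: nx.Graph, max_degree: int = 50) -> np.ndarray:
    """Normalized degree histogram, clipped at an assumed cutoff max_degree."""
    hist = np.zeros(max_degree + 1)
    for _, deg in graph.degree():
        hist[min(deg, max_degree)] += 1
    return hist / max(hist.sum(), 1)

def gaussian_kernel(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Gaussian (RBF) kernel between two histograms."""
    return float(np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2)))

def mmd_squared(ref: list, gen: list, sigma: float = 1.0) -> float:
    """Squared MMD between two graph sets via their degree histograms."""
    a = [degree_histogram(g) for g in ref]
    b = [degree_histogram(g) for g in gen]
    k_aa = np.mean([gaussian_kernel(x, y, sigma) for x in a for y in a])
    k_bb = np.mean([gaussian_kernel(x, y, sigma) for x in b for y in b])
    k_ab = np.mean([gaussian_kernel(x, y, sigma) for x in a for y in b])
    return k_aa + k_bb - 2 * k_ab

# Usage: reference graphs vs. samples from a hypothetical generator.
reference = [nx.barabasi_albert_graph(50, 2, seed=i) for i in range(20)]
generated = [nx.erdos_renyi_graph(50, 0.08, seed=i) for i in range(20)]
print("MMD^2 (degree):", mmd_squared(reference, generated))
```

A smaller MMD indicates that the generated degree distributions are closer to the reference ones; analogous metrics can be computed over other invariants such as clustering coefficients or orbit counts.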

A guideline on good practices regarding dataset selection and node feature initialization is provided. Our work is accompanied by open-source code and reproducible experiments.
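Regarding node feature initialization for non-attributed graphs, the sketch below shows three common options (constant, degree one-hot, and random features) that could be fed to a graph-embedding model. The function name, feature dimension, and the specific choices are illustrative assumptions rather than the paper's prescribed practice.

```python
# Illustrative node feature initializations for non-attributed graphs
# (hypothetical helper, not the authors' code).
import numpy as np
import networkx as nx

def init_node_features(graph: nx.Graph, mode: str = "degree", dim: int = 16) -> np.ndarray:
    n = graph.number_of_nodes()
    if mode == "constant":
        # Uninformative constant features: the model sees structure only.
        return np.ones((n, 1))
    if mode == "degree":
        # One-hot encoding of the node degree, clipped to dim - 1.
        feats = np.zeros((n, dim))
        for i, (_, deg) in enumerate(graph.degree()):
            feats[i, min(deg, dim - 1)] = 1.0
        return feats
    if mode == "random":
        # Random features can increase expressiveness at the cost of variance.
        return np.random.default_rng(0).normal(size=(n, dim))
    raise ValueError(f"unknown mode: {mode}")

g = nx.watts_strogatz_graph(30, 4, 0.1, seed=0)
print(init_node_features(g, "degree").shape)  # (30, 16)
```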




Author information


Corresponding author

Correspondence to Julian Stier.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Touat, O., Stier, J., Portier, PE., Granitzer, M. (2024). GRAN Is Superior to GraphRNN: Node Orderings, Kernel- and Graph Embeddings-Based Metrics for Graph Generators. In: Nicosia, G., Ojha, V., La Malfa, E., La Malfa, G., Pardalos, P.M., Umeton, R. (eds) Machine Learning, Optimization, and Data Science. LOD 2023. Lecture Notes in Computer Science, vol 14505. Springer, Cham. https://doi.org/10.1007/978-3-031-53969-5_32

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-53969-5_32

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-53968-8

  • Online ISBN: 978-3-031-53969-5

  • eBook Packages: Computer Science, Computer Science (R0)
