Distributed constrained combinatorial optimization leveraging hypergraph neural networks

A preprint version of the article is available at arXiv.

Abstract

Scalable addressing of high-dimensional constrained combinatorial optimization problems is a challenge that arises in several science and engineering disciplines. Recent work introduced novel applications of graph neural networks for solving quadratic-cost combinatorial optimization problems. However, effective utilization of models such as graph neural networks to address general problems with higher-order constraints is an unresolved challenge. This paper presents a framework, HypOp, that advances the state of the art for solving combinatorial optimization problems in several aspects: (1) it generalizes the prior results to higher-order constrained problems with arbitrary cost functions by leveraging hypergraph neural networks; (2) it enables scalability to larger problems by introducing a new distributed and parallel training architecture; (3) it demonstrates generalizability across different problem formulations by transferring knowledge within the same hypergraph; (4) it substantially boosts the solution accuracy compared with the prior art by suggesting a fine-tuning step using simulated annealing; and (5) it shows remarkable progress on numerous benchmark examples, including hypergraph MaxCut, satisfiability and resource allocation problems, with notable run-time improvements using a combination of fine-tuning and distributed training techniques. We showcase the application of HypOp in scientific discovery by solving a hypergraph MaxCut problem on a National Drug Code drug-substance hypergraph. Through extensive experimentation on various optimization problems, HypOp demonstrates superiority over existing unsupervised-learning-based solvers and generic optimization methods.
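
To make the pipeline concrete, below is a minimal sketch, assuming a toy hypergraph, of the two-stage idea the abstract describes for hypergraph MaxCut: unsupervised minimization of a differentiable relaxation of the number of uncut hyperedges, followed by simulated-annealing fine-tuning of the rounded solution. It is illustrative only; HypOp produces the soft assignments with a hypergraph neural network (plain per-node logits stand in for it here), and the authors' actual implementation is available at ref. 32.

```python
# Minimal sketch (not the authors' implementation) of relaxation training
# plus simulated-annealing fine-tuning for hypergraph MaxCut.
import math
import random

import torch

hyperedges = [[0, 1, 2], [1, 3], [2, 3, 4], [0, 4]]  # toy hypergraph (assumed)
n_nodes = 5

def cut_size(x):  # hyperedges whose member nodes are not all on one side
    return sum(1 for e in hyperedges if len({x[v] for v in e}) > 1)

# Stage 1: train soft assignments p_v in [0, 1]. Plain logits stand in for
# the hypergraph neural network that HypOp would train here.
torch.manual_seed(0)
logits = torch.randn(n_nodes, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(500):
    p = torch.sigmoid(logits)
    # P(edge uncut) = P(all members on side 1) + P(all members on side 0);
    # minimizing this maximizes the expected cut.
    loss = sum(p[e].prod() + (1 - p[e]).prod() for e in hyperedges)
    opt.zero_grad(); loss.backward(); opt.step()

x = [int(v > 0) for v in logits]  # round to a hard assignment

# Stage 2: crude simulated-annealing fine-tuning via single-node flips.
rng = random.Random(0)
cur = cut_size(x)
for k in range(2000):
    T = max(0.995 ** k, 1e-9)  # geometric cooling schedule
    v = rng.randrange(n_nodes)
    x[v] = 1 - x[v]  # propose flipping one node
    new = cut_size(x)
    if new >= cur or rng.random() < math.exp((new - cur) / T):
        cur = new        # accept the flip
    else:
        x[v] = 1 - x[v]  # revert it
print(f"cut {cur} of {len(hyperedges)} hyperedges")
```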

Fig. 1: HypOp methods.
Fig. 2: HypOp overview.
Fig. 3: HypOp versus SA and Adam.
Fig. 4: HypOp versus PI-GNN.

Data availability

In this paper, we used the publicly available American Physical Society (ref. 26), NDC (ref. 29), Gset (ref. 27) and SATLIB (ref. 31) datasets, together with synthetic hypergraphs and graphs. The procedures used to generate the synthetic hypergraphs and graphs are described in the paper. Some examples of the synthetic hypergraphs are provided with the code at ref. 32.
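
As a concrete illustration of the kind of synthetic instances described, here is a hypothetical generator of random hypergraphs; the node count, hyperedge count and hyperedge-size range are assumptions for illustration, not the paper's exact generation procedure.

```python
# Hypothetical random-hypergraph generator; all parameters are illustrative
# assumptions, not the paper's exact settings.
import random

def random_hypergraph(n_nodes, n_edges, min_size=2, max_size=5, seed=0):
    rng = random.Random(seed)
    return [
        sorted(rng.sample(range(n_nodes), rng.randint(min_size, max_size)))
        for _ in range(n_edges)
    ]

print(random_hypergraph(n_nodes=10, n_edges=4))
```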

Code availability

The code has been made publicly available at ref. 32. We used Python v.3.8 together with the following packages: torch v.2.1.1, tqdm v.4.66.1, h5py v.3.10.0, matplotlib v.3.8.2, networkx v.3.2.1, numpy v.1.21.6, pandas v.2.0.3, scipy v.1.11.4 and sklearn v.0.0. We used PyCharm v.2023.1.2 and Visual Studio Code v.1.83.1 software.

References

  1. Wang, H. et al. Scientific discovery in the age of artificial intelligence. Nature 620, 47–60 (2023).

  2. Schuetz, M. J. A., Brubaker, J. K. & Katzgraber, H. G. Combinatorial optimization with physics-inspired graph neural networks. Nat. Mach. Intell. 4, 367–377 (2022).

  3. Cappart, Q. et al. Combinatorial optimization and reasoning with graph neural networks. J. Mach. Learn. Res. 24, 1–61 (2023).

  4. Khalil, E., Le Bodic, P., Song, L., Nemhauser, G. & Dilkina, B. Learning to branch in mixed integer programming. In Proc. 30th AAAI Conference on Artificial Intelligence 724–731 (AAAI, 2016).

  5. Bai, Y. et al. SimGNN: a neural network approach to fast graph similarity computation. In Proc. 12th ACM International Conference on Web Search and Data Mining 384–392 (ACM, 2019).

  6. Gasse, M., Chételat, D., Ferroni, N., Charlin, L. & Lodi, A. Exact combinatorial optimization with graph convolutional neural networks. In Proc. Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 15580–15592 (NeurIPS, 2019).

  7. Nair, V. et al. Solving mixed integer programs using neural networks. Preprint at https://arxiv.org/abs/2012.13349 (2020).

  8. Li, Z., Chen, Q. & Koltun, V. Combinatorial optimization with graph convolutional networks and guided tree search. In Proc. Advances in Neural Information Processing Systems 31 (eds Bengio, S. et al.) 537–546 (NeurIPS, 2018).

  9. Karalias, N. & Loukas, A. Erdős goes neural: an unsupervised learning framework for combinatorial optimization on graphs. In Proc. Advances in Neural Information Processing Systems 33 (eds Larochelle, H. et al.) 6659–6672 (NeurIPS, 2020).

  10. Toenshoff, J., Ritzert, M., Wolf, H. & Grohe, M. Graph neural networks for maximum constraint satisfaction. Front. Artif. Intell. 3, 580607 (2021).

  11. Mirhoseini, A. et al. A graph placement methodology for fast chip design. Nature 594, 207–212 (2021).

  12. Yolcu, E. & Póczos, B. Learning local search heuristics for Boolean satisfiability. In Proc. Advances in Neural Information Processing Systems 32 (eds Wallach, H. et al.) 7992–8003 (NeurIPS, 2019).

  13. Ma, Q., Ge, S., He, D., Thaker, D. & Drori, I. Combinatorial optimization by graph pointer networks and hierarchical reinforcement learning. Preprint at https://arxiv.org/abs/1911.04936 (2019).

  14. Kool, W., Van Hoof, H. & Welling, M. Attention, learn to solve routing problems! In International Conference on Learning Representations (ICLR, 2018).

  15. Asghari, M., Fathollahi-Fard, A. M., Mirzapour Al-E-Hashem, S. M. J. & Dulebenets, M. A. Transformation and linearization techniques in optimization: a state-of-the-art survey. Mathematics 10, 283 (2022).

  16. Feng, S. et al. Hypergraph models of biological networks to identify genes critical to pathogenic viral response. BMC Bioinformatics 22, 1–21 (2021).

  17. Murgas, K. A., Saucan, E. & Sandhu, R. Hypergraph geometry reflects higher-order dynamics in protein interaction networks. Sci. Rep. 12, 20879 (2022).

  18. Zhu, J., Zhu, J., Ghosh, S., Wu, W. & Yuan, J. Social influence maximization in hypergraph in social networks. IEEE Trans. Netw. Sci. Eng. 6, 801–811 (2018).

  19. Xia, L., Zheng, P., Huang, X. & Liu, C. A novel hypergraph convolution network-based approach for predicting the material removal rate in chemical mechanical planarization. J. Intell. Manuf. 33, 2295–2306 (2022).

  20. Wen, Y., Gao, Y., Liu, S., Cheng, Q. & Ji, R. Hyperspectral image classification with hypergraph modelling. In Proc. 4th International Conference on Internet Multimedia Computing and Service 34–37 (ACM, 2012).

  21. Feng, Y., You, H., Zhang, Z., Ji, R. & Gao, Y. Hypergraph neural networks. In Proc. 33rd AAAI Conference on Artificial Intelligence 3558–3565 (AAAI, 2019).

  22. Angelini, M. C. & Ricci-Tersenghi, F. Modern graph neural networks do worse than classical greedy algorithms in solving combinatorial optimization problems like maximum independent set. Nat. Mach. Intell. 5, 29–31 (2023).

  23. Kirkpatrick, S., Gelatt Jr, C. D. & Vecchi, M. P. Optimization by simulated annealing. Science 220, 671–680 (1983).

  24. Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. Preprint at https://arxiv.org/abs/1412.6980 (2014).

  25. Benlic, U. & Hao, J.-K. Breakout local search for the max-cut problem. Eng. Appl. Artif. Intell. 26, 1162–1173 (2013).

  26. American Physical Society. APS dataset on Physical Review Journals. https://journals.aps.org/datasets (n.d.).

  27. Ye, Y. The Gset dataset. https://web.stanford.edu/~yyye/yyye/Gset (Stanford, 2003).

  28. Hu, W. et al. Open graph benchmark: datasets for machine learning on graphs. In Proc. Advances in Neural Information Processing Systems 33 (eds Larochelle, H. et al.) 22118–22133 (NeurIPS, 2020).

  29. NDC-substances dataset. Cornell https://www.cs.cornell.edu/~arb/data/NDC-substances/ (2018).

  30. Benson, A. R., Abebe, R., Schaub, M. T., Jadbabaie, A. & Kleinberg, J. Simplicial closure and higher-order link prediction. Proc. Natl Acad. Sci. USA 115, E11221–E11230 (2018).

  31. Hoos, H. H. & Stützle, T. SATLIB: an online resource for research on SAT. In SAT2000: Highlights of Satisfiability Research in the Year 2000 283–292 (IOS Press, 2000).

  32. Heydaribeni, N., Zhan, X., Zhang, R., Eliassi-Rad, T. & Koushanfar, F. Source code for ‘Distributed constrained combinatorial optimization leveraging hypergraph neural networks’. Code Ocean https://doi.org/10.24433/CO.4804643.v1 (2024).

Acknowledgements

We acknowledge the support of the MURI programme of the Army Research Office under award no. W911NF-21-1-0322 and the National Science Foundation AI Institute for Learning-Enabled Optimization at Scale under award no. 2112665.

Author information

Contributions

All authors participated in developing the ideas implemented in the article, with N.H. taking the lead. The code was developed by X.Z., N.H. and R.Z. Experiment design and execution were carried out by N.H. and R.Z. The paper was initially drafted by N.H. and was later revised by F.K. F.K. and T.E.-R. supervised the work and reviewed the paper.

Corresponding author

Correspondence to Nasimeh Heydaribeni.

Ethics declarations

Competing interests

N.H., R.Z., T.E.-R. and F.K. are listed as inventors on a patent application (serial number 63/641,601) on distributed constrained combinatorial optimization leveraging HyperGNNs. X.Z. declares no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Petar Veličković and Haoyu Wang for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 HypOp vs. Bipartite GNN.

Comparison of HypOp with the bipartite GNN baseline for the hypergraph MaxCut problem on synthetic random hypergraphs. For almost the same performance (a), HypOp has a markedly shorter run time than the bipartite GNN baseline (b). HypOp performance is presented as the average over 10 sets of experiments, with the error region showing the standard deviation of the results.
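
For context, a bipartite baseline of this kind typically feeds the hypergraph to an ordinary GNN via the standard star expansion; the sketch below shows that encoding under the assumption that the baseline uses it (the paper's exact construction is not reproduced here).

```python
# Star expansion (assumed encoding for the bipartite baseline): one side
# holds the original nodes, the other one auxiliary node per hyperedge.
import networkx as nx

hyperedges = [[0, 1, 2], [1, 3], [2, 3, 4], [0, 4]]  # toy hypergraph
B = nx.Graph()
for j, e in enumerate(hyperedges):
    for v in e:
        B.add_edge(("node", v), ("edge", j))

print(B.number_of_nodes(), B.number_of_edges())  # 9 nodes, 10 incidence edges
```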

Extended Data Fig. 2 Transfer Learning.

Transfer learning using HypOp from MaxCut to the MIS problem on random regular graphs with d = 3. For almost the same performance (a), transfer learning produces results in almost no time compared with vanilla training (b).

Extended Data Fig. 3 Transfer Learning.

Transfer learning using HypOp from hypergraph MaxCut to hypergraph MinCut on synthetic random hypergraphs. Compared with vanilla training, transfer learning obtains similar or better results (a) in considerably less time (b). Note that in the context of the hypergraph MinCut problem, smaller cut sizes are favored.
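
A self-contained sketch of why this transfer saves time, with plain per-node logits standing in for HypOp's trained HyperGNN: parameters trained on the relaxed hypergraph MaxCut objective warm-start a much shorter fine-tuning run on the MinCut objective over the same toy hypergraph.

```python
# Hedged sketch of transfer learning between objectives on one hypergraph;
# logits stand in for the HyperGNN that HypOp would transfer.
import torch

hyperedges = [[0, 1, 2], [1, 3], [2, 3, 4], [0, 4]]  # toy hypergraph

def uncut(p):  # summed probability that each hyperedge is uncut
    return sum(p[e].prod() + (1 - p[e]).prod() for e in hyperedges)

def train(logits, loss_fn, steps):
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(steps):
        loss = loss_fn(torch.sigmoid(logits))
        opt.zero_grad(); loss.backward(); opt.step()
    return logits

torch.manual_seed(0)
# Vanilla training on the relaxed MaxCut objective (minimize uncut edges).
src = train(torch.randn(5, requires_grad=True), uncut, steps=500)
# Transfer: clone the learned parameters, then fine-tune briefly on MinCut
# (maximize uncut edges) instead of training from scratch.
tgt = train(src.detach().clone().requires_grad_(), lambda p: -uncut(p), steps=50)
```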

Supplementary information

Supplementary Information

Supplementary Discussion, Figs 1–6 and Tables 1–3.

Reporting Summary

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Heydaribeni, N., Zhan, X., Zhang, R. et al. Distributed constrained combinatorial optimization leveraging hypergraph neural networks. Nat Mach Intell 6, 664–672 (2024). https://doi.org/10.1038/s42256-024-00833-7
