Learning Deterministic Surrogates for Robust Convex QCQPs

  • Conference paper
  • In: Integration of Constraint Programming, Artificial Intelligence, and Operations Research (CPAIOR 2024)

Abstract

Decision-focused learning is a promising development for contextual optimisation. It enables us to train prediction models that reflect the conditional sensitivity and uncertainty structure of the problem. However, there have been limited attempts to extend this paradigm to robust optimisation. We propose a double implicit layer model for training prediction models with respect to robust decision loss in uncertain convex quadratically constrained quadratic programs (QCQPs). The first layer solves a deterministic version of the problem; the second layer evaluates the worst-case realisation over an uncertainty set centred on the observation, given the decisions obtained from the first layer. This enables us to learn model parameterisations that lead to robust decisions while solving only a simpler deterministic problem at test time. Additionally, instead of having to solve a robust counterpart, we solve two smaller and potentially easier problems during training. The second layer (the worst-case problem) can be seen as a regularisation approach for predict-and-optimise: it fits the model to a neighbourhood of problems rather than to a single point observation. We motivate a reformulation of the worst-case problem for a class of uncertainty sets that would otherwise lead to trust-region problems. Both layers are typically strictly convex in this problem setting and thus have meaningful gradients almost everywhere. We demonstrate an application of this model in simulated experiments; the method is an effective regularisation tool for decision-focused learning for uncertain convex QCQPs.
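To make the two-layer training loop concrete, below is a minimal sketch under simplifying assumptions: the deterministic problem is reduced to a toy QCQP (linear objective, a single ball constraint), and the uncertainty set is a Euclidean ball of radius rho around the observed cost vector, for which the worst case of c^T x has the closed form c_obs^T x + rho * ||x||_2. The deterministic layer is built with the cvxpylayers library; all names, dimensions, and the linear prediction model are illustrative choices, not details taken from the paper.

```python
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, rho = 5, 0.1  # decision dimension and uncertainty-set radius (illustrative)

# Layer 1: deterministic surrogate, min c^T x subject to ||x||^2 <= 1,
# with the predicted cost vector entering as a differentiable parameter.
x = cp.Variable(n)
c_hat = cp.Parameter(n)
surrogate = cp.Problem(cp.Minimize(c_hat @ x), [cp.sum_squares(x) <= 1])
det_layer = CvxpyLayer(surrogate, parameters=[c_hat], variables=[x])

# A simple prediction model mapping context features to cost vectors.
model = torch.nn.Linear(3, n)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

features = torch.randn(32, 3)  # simulated contexts
c_obs = torch.randn(32, n)     # observed cost realisations

for _ in range(100):
    c_pred = model(features)
    x_star, = det_layer(c_pred)  # decisions from the deterministic layer

    # Layer 2: worst case of c^T x over the ball ||c - c_obs||_2 <= rho.
    # For a fixed decision this maximisation has the closed form
    # c_obs^T x + rho * ||x||_2, so no second solver call is needed here;
    # in the general QCQP setting of the paper the worst-case problem is
    # itself an optimisation layer.
    robust_loss = (c_obs * x_star).sum(dim=1) + rho * x_star.norm(dim=1)

    opt.zero_grad()
    robust_loss.mean().backward()  # gradients flow back through the QCQP layer
    opt.step()
```

At test time only the deterministic layer (or an ordinary deterministic solver) is needed: the trained model produces cost predictions whose deterministic solutions are hedged against the uncertainty seen in training, which is the point of learning a deterministic surrogate.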


Notes

  1. Preliminary code available at: https://github.com/EgoPer/Deterministic-Surrogates-for-Uncertain-Convex-QCQP.


Author information

Correspondence to Egon Peršak.


Ethics declarations

Disclosure of Interests

The authors have no competing interests to declare that are relevant to the content of this article.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Peršak, E., Anjos, M.F. (2024). Learning Deterministic Surrogates for Robust Convex QCQPs. In: Dilkina, B. (eds) Integration of Constraint Programming, Artificial Intelligence, and Operations Research. CPAIOR 2024. Lecture Notes in Computer Science, vol 14743. Springer, Cham. https://doi.org/10.1007/978-3-031-60599-4_8

  • DOI: https://doi.org/10.1007/978-3-031-60599-4_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-60601-4

  • Online ISBN: 978-3-031-60599-4

  • eBook Packages: Computer Science (R0)
