Abstract
Decision-focused learning is a promising development for contextual optimisation. It enables us to train prediction models that reflect the conditional sensitivity and uncertainty structure of the problem. However, there have been limited attempts to extend this paradigm to robust optimisation. We propose a double implicit layer model for training prediction models with respect to robust decision loss in uncertain convex quadratically constrained quadratic programs (QCQPs). The first layer solves a deterministic version of the problem; the second layer evaluates the worst-case realisation, over an uncertainty set centred on the observation, given the decisions obtained from the first layer. This enables us to learn model parameterisations that lead to robust decisions while only solving a simpler deterministic problem at test time. Additionally, instead of having to solve a robust counterpart, we solve two smaller and potentially easier problems during training. The second layer (the worst-case problem) can be seen as a regularisation approach for predict-and-optimise: it fits to a neighbourhood of problems instead of a single point observation. We motivate a reformulation of the worst-case problem for a class of uncertainty sets that would otherwise lead to trust-region problems. Both layers are typically strictly convex in this problem setting and thus have meaningful gradients almost everywhere. We demonstrate an application of this model in simulated experiments. The method is an effective regularisation tool for decision-focused learning for uncertain convex QCQPs.
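The two-layer structure can be illustrated with differentiable convex optimisation layers. The following is a minimal sketch under stated assumptions, not the authors' implementation: it uses cvxpylayers, a toy quadratic programme (rather than a full QCQP) whose linear cost vector is the uncertain parameter, an l2-ball uncertainty set of assumed radius rho around the observed cost, a linear prediction model, and random placeholder problem data.

```python
# Sketch of the double implicit layer idea (illustrative assumptions throughout).
import numpy as np
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n, m, d = 5, 3, 4      # decision dim, number of linear constraints, context dim
rho = 0.1              # assumed radius of the l2-ball uncertainty set
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))
b = np.ones(m)

# Layer 1: deterministic surrogate, solved with the *predicted* cost c_hat.
x = cp.Variable(n)
c_hat = cp.Parameter(n)
det_prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(x) + c_hat @ x),
                      [A @ x <= b])
decision_layer = CvxpyLayer(det_prob, parameters=[c_hat], variables=[x])

# Layer 2: worst-case cost in a ball around the *observed* cost, given the
# decision x_star returned by layer 1 (x_star enters as a parameter).
c = cp.Variable(n)
x_star_par = cp.Parameter(n)
c_obs_par = cp.Parameter(n)
wc_prob = cp.Problem(cp.Maximize(x_star_par @ c),
                     [cp.norm(c - c_obs_par, 2) <= rho])
worst_case_layer = CvxpyLayer(wc_prob, parameters=[x_star_par, c_obs_par],
                              variables=[c])

# Prediction model mapping context features to the uncertain cost vector.
model = torch.nn.Linear(d, n)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
context = torch.randn(32, d)
c_observed = torch.randn(32, n)

for _ in range(50):
    c_pred = model(context)
    x_star, = decision_layer(c_pred)              # differentiable argmin
    c_wc, = worst_case_layer(x_star, c_observed)  # differentiable worst case
    # Robust decision loss: the deterministic objective evaluated at the
    # worst-case realisation rather than at the point observation.
    loss = (0.5 * (x_star ** 2).sum(dim=1) + (c_wc * x_star).sum(dim=1)).mean()
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
```

In this toy instance the inner maximisation is linear in c and has the closed form c_obs + rho * x_star / ||x_star||_2; the explicit second layer stands in for the general QCQP setting of the paper, where the worst-case problem is itself a non-trivial convex programme.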
Notes
1. Preliminary code available at: https://github.com/EgoPer/Deterministic-Surrogates-for-Uncertain-Convex-QCQP.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.