{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T07:24:33Z","timestamp":1740122673844,"version":"3.37.3"},"reference-count":35,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2021,8,19]],"date-time":"2021-08-19T00:00:00Z","timestamp":1629331200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,8,19]],"date-time":"2021-08-19T00:00:00Z","timestamp":1629331200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000781","name":"European Research Council","doi-asserted-by":"publisher","award":["725594-time-data"],"id":[{"id":"10.13039\/501100000781","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001711","name":"Swiss National Science Foundation","doi-asserted-by":"crossref","award":["200021_178865\/1"],"id":[{"id":"10.13039\/501100001711","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/100007297","name":"Office of Naval Research Global","doi-asserted-by":"publisher","award":["N62909-17-1-211"],"id":[{"id":"10.13039\/100007297","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003475","name":"Hasler Stiftung","doi-asserted-by":"crossref","award":["16066"],"id":[{"id":"10.13039\/501100003475","id-type":"DOI","asserted-by":"crossref"}]},{"name":"EPFL Lausanne"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Comput Optim Appl"],"published-print":{"date-parts":[[2021,11]]},"abstract":"Abstract<\/jats:title>We propose a variance reduced algorithm for solving monotone variational inequalities. Without assuming strong monotonicity, cocoercivity, or boundedness of the domain, we prove almost sure convergence of the iterates generated by the algorithm to a solution. In the monotone case, the ergodic average converges with the optimal O<\/jats:italic>(1\/k<\/jats:italic>) rate of convergence. When strong monotonicity is assumed, the algorithm converges linearly, without requiring the knowledge of strong monotonicity constant. We finalize with extensions and applications of our results to monotone inclusions, a class of non-monotone variational inequalities and Bregman projections.<\/jats:p>","DOI":"10.1007\/s10589-021-00305-3","type":"journal-article","created":{"date-parts":[[2021,8,19]],"date-time":"2021-08-19T06:03:30Z","timestamp":1629353010000},"page":"321-346","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Forward-reflected-backward method with variance reduction"],"prefix":"10.1007","volume":"80","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2911-7048","authenticated-orcid":false,"given":"Ahmet","family":"Alacaoglu","sequence":"first","affiliation":[]},{"given":"Yura","family":"Malitsky","sequence":"additional","affiliation":[]},{"given":"Volkan","family":"Cevher","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,8,19]]},"reference":[{"key":"305_CR1","unstructured":"Alacaoglu, A., Fercoq, O., Cevher, V. On the convergence of stochastic primal-dual hybrid gradient. arXiv preprint arXiv:1911.00799, (2019)"},{"key":"305_CR2","unstructured":"Balamurugan, P., and Bach. F. Stochastic variance reduction methods for saddle-point problems. In Advances in Neural Information Processing Systems, pages 1416\u20131424, (2016)"},{"issue":"2","key":"305_CR3","doi-asserted-by":"publisher","first-page":"163","DOI":"10.1007\/s10107-011-0472-0","volume":"129","author":"DP Bertsekas","year":"2011","unstructured":"Bertsekas, D.P.: Incremental proximal methods for large scale convex optimization. Math. Prog. 129(2), 163 (2011)","journal-title":"Math. Prog."},{"key":"305_CR4","unstructured":"Bo\u0163 R.\u00a0I., Mertikopoulos, P., Staudigl, M., and Vuong, P. T. Forward-backward-forward methods with variance reduction for stochastic variational inequalities. arXiv preprint arXiv:1902.03355, (2019)"},{"key":"305_CR5","unstructured":"Carmon, Y., Jin, Y., Sidford, A., and Tian. K. Variance reduction for matrix games. In Advances in Neural Information Processing Systems, pages 11377\u201311388, (2019)"},{"issue":"1","key":"305_CR6","doi-asserted-by":"publisher","first-page":"120","DOI":"10.1007\/s10851-010-0251-1","volume":"40","author":"A Chambolle","year":"2011","unstructured":"Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imag. Vis. 40(1), 120\u2013145 (2011)","journal-title":"J. Math. Imag. Vis."},{"key":"305_CR7","unstructured":"Chavdarova, T., Gidel G., Fleuret., F and \u00a0Lacoste-Julien., S. Reducing noise in GAN training with variance reduced extragradient. In Advances in Neural Information Processing Systems, pages 391\u2013401, (2019)"},{"issue":"2","key":"305_CR8","doi-asserted-by":"publisher","first-page":"1221","DOI":"10.1137\/140971233","volume":"25","author":"PL Combettes","year":"2015","unstructured":"Combettes, P.L., Pesquet, J.-C.: Stochastic quasi-Fej\u00e9r block-coordinate fixed point iterations with random sweeping. SIAM J. Optim. 25(2), 1221\u20131248 (2015)","journal-title":"SIAM J. Optim."},{"key":"305_CR9","unstructured":"Cui., S and Shanbhag., UV. On the analysis of variance-reduced and randomized projection variants of single projection schemes for monotone stochastic variational inequality problems. arXiv preprint arXiv:1904.11076, (2019)"},{"issue":"2","key":"305_CR10","doi-asserted-by":"publisher","first-page":"277","DOI":"10.1007\/s10589-014-9673-9","volume":"60","author":"CD Dang","year":"2015","unstructured":"Dang, C.D., Lan, G.: On the convergence properties of non-euclidean extragradient methods for variational inequalities with generalized monotone operators. Comput. Opt. Appl. 60(2), 277\u2013310 (2015)","journal-title":"Comput. Opt. Appl."},{"key":"305_CR11","unstructured":"Daskalakis. C., and Panageas. I. The limit points of (optimistic) gradient descent in min-max optimization. In Advances in Neural Information Processing Systems, pages 9236\u20139246, (2018)"},{"key":"305_CR12","unstructured":"Daskalakis, C., Ilyas, A., Syrgkanis, V., and Zeng, H. Training GANs with optimism. In International Conference on Learning Representations, (2018)"},{"key":"305_CR13","unstructured":"Defazio, A., Bach, F., and Lacoste-Julien, S. Saga: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, pages 1646\u20131654, (2014)"},{"issue":"4","key":"305_CR14","doi-asserted-by":"publisher","first-page":"1015","DOI":"10.1137\/09076934X","volume":"3","author":"E Esser","year":"2010","unstructured":"Esser, E., Zhang, X., Chan, T.F.: A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J. Imag. Sci. 3(4), 1015\u20131046 (2010)","journal-title":"SIAM J. Imag. Sci."},{"key":"305_CR15","unstructured":"Hamedani, E. Y., and Aybat, N. S. A primal-dual algorithm for general convex-concave saddle point problems. arXiv:1803.01401, (2018)"},{"key":"305_CR16","unstructured":"Hofmann, T., Lucchi, A., Lacoste-Julien, S., and McWilliams, B. Variance reduced stochastic gradient descent with neighbors. In Advances in Neural Information Processing Systems, pages 2305\u20132313, (2015)"},{"issue":"2","key":"305_CR17","doi-asserted-by":"publisher","first-page":"686","DOI":"10.1137\/15M1031953","volume":"27","author":"AN Iusem","year":"2017","unstructured":"Iusem, A.N., Jofr\u00e9, A., Oliveira, R.I., Thompson, P.: Extragradient method with variance reduction for stochastic variational inequalities. SIAM J. Opt. 27(2), 686\u2013724 (2017)","journal-title":"SIAM J. Opt."},{"key":"305_CR18","unstructured":"Johnson, R., and Zhang. T. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315\u2013323, (2013)"},{"issue":"1","key":"305_CR19","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1287\/10-SSY011","volume":"1","author":"A Juditsky","year":"2011","unstructured":"Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst. 1(1), 17\u201358 (2011)","journal-title":"Stoch. Syst."},{"key":"305_CR20","first-page":"747","volume":"12","author":"G Korpelevich","year":"1976","unstructured":"Korpelevich, G.: The extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 12, 747\u2013756 (1976)","journal-title":"Ekon. Mat. Metody"},{"key":"305_CR21","unstructured":"Kovalev, D., Horvath, S., and Richtarik. P. Don\u2019t jump through hoops and remove those loops: SVRG and Katyusha are better without the outer loop. In Proceedings of the 31st International Conference on Algorithmic Learning Theory, pages 451\u2013467, (2020)"},{"key":"305_CR22","doi-asserted-by":"publisher","first-page":"383","DOI":"10.1007\/s10107-019-01416-w","volume":"184","author":"Y Malitsky","year":"2019","unstructured":"Malitsky, Y.: Golden ratio algorithms for variational inequalities. Math. Prog. 184, 383\u2013410 (2019)","journal-title":"Math. Prog."},{"issue":"2","key":"305_CR23","doi-asserted-by":"publisher","first-page":"1451","DOI":"10.1137\/18M1207260","volume":"30","author":"Y Malitsky","year":"2020","unstructured":"Malitsky, Y., Tam, M.K.: A forward-backward splitting method for monotone inclusions without cocoercivity. SIAM J. Optim. 30(2), 1451\u20131472 (2020)","journal-title":"SIAM J. Optim."},{"key":"305_CR24","unstructured":"Mertikopoulos, P., Lecouat, B., Zenati, H., Foo, C. S., Chandrasekhar, V., and Piliouras. G. Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile. In International Conference on Learning Representations, (2019)"},{"key":"305_CR25","unstructured":"Mishchenko, K., Kovalev, D., Shulgin, E., Richt\u00e1rik, P., and Malitsky. Y.: Revisiting stochastic extragradient. In The 23rd International Conference on Artificial Intelligence and Statistics, pages 4573\u20134582. PMLR, (2020)"},{"key":"305_CR26","unstructured":"Mokhtari, A., Ozdaglar, A., and Pattathil. S.: A unified analysis of extra-gradient and optimistic gradient methods for saddle point problems: Proximal point approach. In International Conference on Artificial Intelligence and Statistics, pages 1497\u20131507. PMLR, (2020)"},{"issue":"1","key":"305_CR27","doi-asserted-by":"publisher","first-page":"229","DOI":"10.1137\/S1052623403425629","volume":"15","author":"A Nemirovski","year":"2004","unstructured":"Nemirovski, A.: Prox-method with rate of convergence $$O(1\/t)$$ for variational inequalities with lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM J. Opt. 15(1), 229\u2013251 (2004)","journal-title":"SIAM J. Opt."},{"issue":"4","key":"305_CR28","doi-asserted-by":"publisher","first-page":"1574","DOI":"10.1137\/070704277","volume":"19","author":"A Nemirovski","year":"2009","unstructured":"Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Opt. 19(4), 1574\u20131609 (2009)","journal-title":"SIAM J. Opt."},{"issue":"2\u20133","key":"305_CR29","doi-asserted-by":"publisher","first-page":"319","DOI":"10.1007\/s10107-006-0034-z","volume":"109","author":"Y Nesterov","year":"2007","unstructured":"Nesterov, Y.: Dual extrapolation and its applications to solving variational inequalities and related problems. Math. Prog. 109(2\u20133), 319\u2013344 (2007)","journal-title":"Math. Prog."},{"issue":"5","key":"305_CR30","first-page":"845","volume":"28","author":"LD Popov","year":"1980","unstructured":"Popov, L.D.: A modification of the Arrow-Hurwicz method for search of saddle points. Math. Notes Acad. Sci. USSR 28(5), 845\u2013848 (1980)","journal-title":"Math. Notes Acad. Sci. USSR"},{"key":"305_CR31","unstructured":"Rakhlin, S., and Sridharan. K, Optimization, learning, and games with predictable sequences. In Advances in Neural Information Processing Systems, pages 3066\u20133074, (2013)"},{"key":"305_CR32","doi-asserted-by":"crossref","unstructured":"Robbins, H., and Siegmund. D.: A convergence theorem for non negative almost supermartingales and some applications. In Optimizing methods in statistics, pages 233\u2013257. Elsevier, (1971)","DOI":"10.1016\/B978-0-12-604550-5.50015-8"},{"key":"305_CR33","unstructured":"Shalev-Shwartz, S., and Zhang. T: Stochastic dual coordinate ascent methods for regularized loss minimization. J. Mach. Learn. Res., 14(Feb):567\u2013599, (2013)"},{"issue":"2","key":"305_CR34","doi-asserted-by":"publisher","first-page":"431","DOI":"10.1137\/S0363012998338806","volume":"38","author":"P Tseng","year":"2000","unstructured":"Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Opt. 38(2), 431\u2013446 (2000)","journal-title":"SIAM J. Control Opt."},{"key":"305_CR35","unstructured":"Tseng. P.: On accelerated proximal gradient methods for convex-concave optimization. submitted to SIAM J. Opt., 1, (2008)"}],"container-title":["Computational Optimization and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10589-021-00305-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10589-021-00305-3\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10589-021-00305-3.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,9,23]],"date-time":"2021-09-23T12:46:23Z","timestamp":1632401183000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10589-021-00305-3"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,19]]},"references-count":35,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2021,11]]}},"alternative-id":["305"],"URL":"https:\/\/doi.org\/10.1007\/s10589-021-00305-3","relation":{},"ISSN":["0926-6003","1573-2894"],"issn-type":[{"type":"print","value":"0926-6003"},{"type":"electronic","value":"1573-2894"}],"subject":[],"published":{"date-parts":[[2021,8,19]]},"assertion":[{"value":"27 November 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 June 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 August 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The author declared that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Data availability"}}]}}