Abstract
Conjugate gradient methods are widely used for solving nonlinear optimization problems. In many practical problems, only approximate values of the objective function and its gradient are available, so it is necessary to consider optimization algorithms that work with inexact function evaluations and inexact gradients. In this paper, we propose an inexact nonlinear conjugate gradient (INCG) method to solve such problems. Under some mild conditions, the global convergence of INCG is proved. Moreover, when the objective function is strongly convex, we establish the linear convergence of INCG. Numerical results demonstrate that, compared to state-of-the-art algorithms, INCG is an effective method.
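To illustrate the setting the abstract describes, the following is a minimal sketch of a generic nonlinear conjugate gradient iteration (here with the Fletcher-Reeves coefficient and an Armijo backtracking line search) driven by inexact first-order information: every function and gradient evaluation is perturbed by bounded noise. This is only an illustrative sketch of the problem setting, not the INCG update rule proposed in the paper; the names `inexact_ncg`, `noise`, and the noise model are assumptions of this sketch.

```python
import numpy as np

def inexact_ncg(f, grad, x0, noise=1e-8, tol=1e-6, max_iter=500, seed=0):
    """Generic Fletcher-Reeves nonlinear CG using inexact first-order
    information. Illustrative sketch only; not the paper's INCG method."""
    rng = np.random.default_rng(seed)
    # Inexact oracles: exact value plus a bounded perturbation of size `noise`.
    f_in = lambda x: f(x) + noise * rng.uniform(-1.0, 1.0)
    g_in = lambda x: grad(x) + noise * rng.uniform(-1.0, 1.0, size=x.shape)

    x = np.asarray(x0, dtype=float)
    g = g_in(x)
    d = -g
    for _ in range(max_iter):
        # Accuracy below the noise level cannot be certified.
        if np.linalg.norm(g) <= tol + noise:
            break
        # Armijo backtracking line search on the inexact function values.
        t, fx = 1.0, f_in(x)
        while f_in(x + t * d) > fx + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = g_in(x_new)
        beta = g_new.dot(g_new) / g.dot(g)  # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        x, g = x_new, g_new
        if g.dot(d) >= 0:  # restart if d is no longer a descent direction
            d = -g
    return x
```

With exact oracles this reduces to the classical Fletcher-Reeves method; the `tol + noise` stopping rule reflects that, under inexact evaluations, convergence can only be driven down to the accuracy of the oracle.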
Acknowledgements
This work was supported by the National Natural Science Foundation of China under grant NSFC-11971118. The authors are grateful to the associate editor and the two anonymous referees for their valuable comments and suggestions, which have significantly improved the presentation of the paper.
Communicated by Emanuele Galligani.
Zhao, T., Yang, W.H. A Nonlinear Conjugate Gradient Method Using Inexact First-Order Information. J Optim Theory Appl 198, 502–530 (2023). https://doi.org/10.1007/s10957-023-02243-y
Keywords
- Nonlinear conjugate gradient methods
- Inexact first-order information
- Inexact gradient method
- Global convergence