
An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity

  • Full Length Paper
  • Series A
Mathematical Programming

Abstract

An adaptive regularization algorithm using inexact function and derivative evaluations is proposed for the solution of composite nonsmooth nonconvex optimization problems. It is shown that this algorithm needs at most \(O(|\log (\epsilon )|\,\epsilon ^{-2})\) evaluations of the problem’s functions and their derivatives to find an \(\epsilon \)-approximate first-order stationary point. This complexity bound therefore generalizes that provided by Bellavia et al. (Theoretical study of an adaptive cubic regularization method with dynamic inexact Hessian information. arXiv:1808.06239, 2018) for inexact methods for smooth nonconvex problems, and is within a factor \(|\log (\epsilon )|\) of the optimal bound known for smooth and nonsmooth nonconvex minimization with exact evaluations. A practically more restrictive variant of the algorithm with worst-case complexity \(O(|\log (\epsilon )|+\epsilon ^{-2})\) is also presented.
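The paper's algorithm operates in the composite nonsmooth setting with higher-order models and dynamically adjusted evaluation accuracy. As a rough illustration only, the sketch below applies the same accept/reject and regularization-update pattern to a smooth objective with a simple quadratic model, requesting a fixed oracle accuracy tied to the target tolerance \(\epsilon \). All names and constants here are illustrative choices, not taken from the paper.

```python
import math

def adaptive_regularization_inexact(f, grad, x0, eps=1e-4, sigma=1.0,
                                    eta=0.1, gamma=2.0, max_iter=1000):
    """Quadratic-regularization loop with inexact oracles.

    f(x, tol) and grad(x, tol) may be in error by at most tol; here the
    requested accuracy is a fixed fraction of the target tolerance eps."""
    x = list(x0)
    tol = 0.1 * eps                       # accuracy requested from the oracles
    for k in range(max_iter):
        g = grad(x, tol)
        gnorm = math.sqrt(sum(gi * gi for gi in g))
        if gnorm <= eps:                  # approximate first-order stationarity
            return x, k
        step = [-gi / sigma for gi in g]  # minimizer of g^T s + (sigma/2)||s||^2
        predicted = gnorm ** 2 / (2.0 * sigma)   # model decrease along the step
        trial = [xi + si for xi, si in zip(x, step)]
        rho = (f(x, tol) - f(trial, tol)) / predicted
        if rho >= eta:                    # sufficient decrease: accept, relax sigma
            x, sigma = trial, max(sigma / gamma, 1e-8)
        else:                             # reject the step, increase regularization
            sigma *= gamma
    return x, max_iter

# Toy usage: a quadratic with a deterministic, tol-bounded oracle perturbation.
noisy_f = lambda x, tol: sum(xi * xi for xi in x) + 0.5 * tol
noisy_g = lambda x, tol: [2.0 * xi + 0.5 * tol for xi in x]
x_star, iters = adaptive_regularization_inexact(noisy_f, noisy_g, [3.0, 4.0])
```

The key point the sketch shares with the analyzed method is that the oracle error is kept small relative to the stationarity target, so that accept/reject decisions based on the ratio `rho` remain reliable.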


Notes

  1. For adaptive regularization with Lipschitz model and dynamic accuracy.

References

  1. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2, 183–202 (2009)

  2. Bellavia, S., Gurioli, G., Morini, B.: Theoretical study of an adaptive cubic regularization method with dynamic inexact Hessian information. arXiv:1808.06239 (2018)

  3. Bellavia, S., Gurioli, G., Morini, B., Toint, P.L.: Adaptive regularization algorithms with inexact evaluations for nonconvex optimization. SIAM J. Optim. 29(4), 2881–2915 (2019)

  4. Bergou, E., Diouane, Y., Kungurtsev, V., Royer, C.W.: A subsampling line-search method with second-order results. arXiv:1810.07211 (2018)

  5. Birgin, E.G., Gardenghi, J.L., Martínez, J.M., Santos, S.A., Toint, P.L.: Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Math. Program. Ser. A 163(1), 359–368 (2017)

  6. Blanchet, J., Cartis, C., Menickelly, M., Scheinberg, K.: Convergence rate analysis of a stochastic trust region method via supermartingales. INFORMS J. Optim. 1(2), 92–119 (2019)

  7. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  8. Carter, R.G.: Numerical experience with a class of algorithms for nonlinear optimization using inexact function and gradient information. SIAM J. Sci. Stat. Comput. 14(2), 368–388 (1993)

  9. Cartis, C., Gould, N.I.M., Toint, P.L.: On the evaluation complexity of composite function minimization with applications to nonconvex nonlinear programming. SIAM J. Optim. 21(4), 1721–1739 (2011)

  10. Cartis, C., Gould, N.I.M., Toint, P.L.: Sharp worst-case evaluation complexity bounds for arbitrary-order nonconvex optimization with inexpensive constraints. SIAM J. Optim. (to appear) (2020)

  11. Cartis, C., Scheinberg, K.: Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Math. Program. Ser. A 159(2), 337–375 (2018)

  12. Chen, X., Jiang, B., Lin, T., Zhang, S.: On adaptive cubic regularization Newton’s methods for convex optimization via random sampling. arXiv:1802.05426 (2018)

  13. Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2000)

  14. Davis, D., Drusvyatskiy, D.: Stochastic model-based minimization of weakly convex functions. SIAM J. Optim. 29(1), 207–239 (2019)

  15. Donoho, D.L.: Compressed sensing. IEEE Trans. Inform. Theory 52(4), 1289–1306 (2006)

  16. Duchi, J., Ruan, F.: Stochastic methods for composite and weakly convex optimization problems. SIAM J. Optim. 28(4), 3229–3259 (2018)

  17. Gratton, S., Sartenaer, A., Toint, P.L.: Recursive trust-region methods for multiscale nonlinear optimization. SIAM J. Optim. 19(1), 414–444 (2008)

  18. Gratton, S., Toint, P.L.: A note on solving nonlinear optimization problems in variable precision. Optim. Methods Softw. (2020, to appear)

  19. Hansen, P.C.: Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. SIAM, Philadelphia (1998)

  20. Hare, W., Sagastizábal, C., Solodov, M.: A proximal bundle method for nonsmooth nonconvex functions with inexact information. Comput. Optim. Appl. 63, 1–28 (2016)

  21. Kouri, D.P., Heinkenschloss, M., Ridzal, D., van Bloemen Waanders, B.G.: Inexact objective function evaluations in a trust-region algorithm for PDE-constrained optimization under uncertainty. SIAM J. Sci. Comput. 36(6), A3011–A3029 (2014)

  22. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)

  23. Lewis, A.S., Wright, S.J.: A proximal method for composite minimization. Math. Program. Ser. A 158, 501–546 (2016)

  24. Liu, L., Liu, X., Hsieh, C.-J., Tao, D.: Stochastic second-order methods for non-convex optimization with inexact Hessian and gradient. arXiv:1809.09853 (2018)

  25. Nesterov, Y.: Introductory Lectures on Convex Optimization. Applied Optimization. Kluwer Academic Publishers, Dordrecht (2004)

  26. Reddi, S., Sra, S., Póczos, B., Smola, A.: Proximal stochastic methods for nonsmooth nonconvex finite-sum optimization. In: Lee, D.D., Sugiyama, M., Luxburg, U.V., Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems, vol. 29, pp. 1145–1153. Curran Associates, Inc., Red Hook (2016)

  27. Roosta-Khorasani, F., Mahoney, M.W.: Sub-sampled Newton methods. Math. Program. Ser. A 174(1–2), 293–326 (2019)

  28. Tibshirani, R.: Regression shrinkage and selection via the LASSO. J. R. Stat. Soc. B 58(1), 267–288 (1996)

  29. Wang, N., Choi, J., Brand, D., Chen, C.-Y., Gopalakrishnan, K.: Training deep neural networks with 8-bit floating point numbers. In: 32nd Conference on Neural Information Processing Systems (2018)

  30. Xu, P., Roosta-Khorasani, F., Mahoney, M.W.: Newton-type methods for non-convex optimization under inexact Hessian information. arXiv:1708.07164v3 (2017)

  31. Yuan, Y.: Conditions for convergence of trust region algorithms for nonsmooth optimization. Math. Program. 31(2), 220–228 (1985)

Acknowledgements

The third author is grateful to the ENSEEIHT (INP, Toulouse) for providing a friendly research environment for the duration of this research project.

Funding

Funding was provided by Agence Nationale de Recherche (FR) (Grant No. ANR-11-LABX-0040-CIMI).

Author information

Corresponding author

Correspondence to Ph. L. Toint.

Additional information


Ph. L. Toint: Partially supported by ANR-11-LABX-0040-CIMI within the program ANR-11-IDEX-0002-02.

About this article

Cite this article

Gratton, S., Simon, E. & Toint, P.L. An algorithm for the minimization of nonsmooth nonconvex functions using inexact evaluations and its worst-case complexity. Math. Program. 187, 1–24 (2021). https://doi.org/10.1007/s10107-020-01466-5

