
A globally convergent Levenberg–Marquardt method for equality-constrained optimization


Abstract

It is well known that the Levenberg–Marquardt method is a good choice for solving nonlinear equations, especially in the case of singular or nonisolated solutions. We first present numerical experiments on local convergence, showing that this method, designed for “generic” equations, also works very well when applied to the specific case of the Lagrange optimality system, i.e., to the equation given by the first-order optimality conditions for equality-constrained optimization. In particular, it appears to outperform not only the basic Newton method applied to such systems, but also its modifications equipped with dual stabilization mechanisms, intended specifically for tackling problems with nonunique Lagrange multipliers. The usual globalizations of the Levenberg–Marquardt method are based on linesearch for the squared Euclidean residual of the equation being solved. In the case of the Lagrange optimality system, this residual does not involve the objective function of the underlying optimization problem (only its derivative); in particular, the resulting globalization scheme has no preference for converging to minima rather than maxima, or to any other stationary point. We therefore develop a special globalization of the Levenberg–Marquardt method applied to the Lagrange optimality system, based on linesearch for a smooth exact penalty function of the optimization problem, which does involve the objective function itself. The algorithm is shown to have appropriate global convergence properties, while preserving the fast local convergence rate under weak assumptions.
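To make the setting concrete, the sketch below (not the algorithm proposed in the paper, only a minimal illustration with assumed problem data) applies a basic Levenberg–Marquardt iteration to the Lagrange optimality system Φ(x, λ) = (∇f(x) + h′(x)ᵀλ, h(x)) = 0 of the problem of minimizing f(x) subject to h(x) = 0, globalized by the usual Armijo linesearch on the squared residual ‖Φ‖² that the abstract refers to. The test problem (minimizing ½‖x‖² on the unit circle, which has nonisolated solutions) and all parameter values are hypothetical choices made only for this example.

```python
import numpy as np

# Minimal illustrative sketch (not the paper's method): basic Levenberg-Marquardt
# applied to the Lagrange optimality system
#   Phi(x, lam) = ( grad f(x) + h'(x)^T lam , h(x) ) = 0,
# globalized by Armijo linesearch on the squared residual ||Phi||^2.
# Hypothetical test problem: minimize f(x) = 0.5 * ||x||^2 subject to
# h(x) = x1^2 + x2^2 - 1 = 0; its solutions (x on the unit circle, lam = -1/2)
# are nonisolated.

def grad_f(x):
    return x                                        # gradient of 0.5 * ||x||^2

def h(x):
    return np.array([x[0] ** 2 + x[1] ** 2 - 1.0])  # constraint value

def h_jac(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])     # constraint Jacobian h'(x)

def phi(x, lam):
    # residual of the Lagrange optimality system
    return np.concatenate([grad_f(x) + h_jac(x).T @ lam, h(x)])

def phi_jac(x, lam):
    # Jacobian of Phi: Hessian of the Lagrangian plus the constraint Jacobian blocks
    n = x.size
    H = np.eye(n) + 2.0 * lam[0] * np.eye(n)        # Hess f + lam * Hess h
    J = h_jac(x)
    m = J.shape[0]
    return np.block([[H, J.T], [J, np.zeros((m, m))]])

def lm_solve(x, lam, sigma=1.0, tol=1e-10, max_iter=100):
    for _ in range(max_iter):
        r = phi(x, lam)
        if r @ r < tol:
            break
        J = phi_jac(x, lam)
        # Levenberg-Marquardt direction with regularization sigma * ||Phi||^2
        d = np.linalg.solve(J.T @ J + sigma * (r @ r) * np.eye(J.shape[1]), -J.T @ r)
        g = J.T @ r                                  # gradient of 0.5 * ||Phi||^2
        # Armijo linesearch on the squared-residual merit function
        t = 1.0
        while True:
            z = np.concatenate([x, lam]) + t * d
            x_new, lam_new = z[:x.size], z[x.size:]
            r_new = phi(x_new, lam_new)
            if 0.5 * (r_new @ r_new) <= 0.5 * (r @ r) + 1e-4 * t * (g @ d) or t < 1e-12:
                break
            t *= 0.5
        x, lam = x_new, lam_new
    return x, lam

x_star, lam_star = lm_solve(np.array([2.0, 0.5]), np.array([0.0]))
print("x* =", x_star, " lambda* =", lam_star)       # expect ||x*|| = 1, lambda* = -0.5
```

The paper's contribution replaces the ‖Φ‖²-based linesearch above with one on a smooth exact penalty function that involves the objective f; the sketch only reproduces the standard residual-based globalization for orientation.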



Acknowledgements

The authors thank the two anonymous referees for their constructive comments. The research of the first author was supported by the Russian Science Foundation Grant 17-11-01168 (Sect. 5). The second author was supported in part by CNPq Grant 303724/2015-3 and by FAPERJ Grant 203.052/2016. The third author was supported by the Volkswagen Foundation and by the Russian Foundation for Basic Research Grant 17-01-00125.

Author information

Corresponding author

Correspondence to M. V. Solodov.

About this article


Cite this article

Izmailov, A.F., Solodov, M.V. & Uskov, E.I. A globally convergent Levenberg–Marquardt method for equality-constrained optimization. Comput Optim Appl 72, 215–239 (2019). https://doi.org/10.1007/s10589-018-0038-7

