Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function

  • Conference paper
Advances in Neural Networks – ISNN 2007 (ISNN 2007)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 4493)

Abstract

In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, it is shown that each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network is asymptotically stable at those local minima satisfying some mild conditions. The latter property is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by many existing neural networks in the literature and can be regarded as an extension or improvement of them. A simulation example is discussed to illustrate the results.
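The paper's exact network model is not given on this page, but the idea it describes can be sketched as a standard Lagrange-type gradient dynamics on an augmented Lagrangian. The example below is a hypothetical illustration (the problem, the constraint, and the parameter values are all assumptions, not taken from the paper): for min f(x) subject to g(x) ≤ 0, the augmented Lagrangian L_c(x, λ) = f(x) + (max(0, λ + c·g(x))² − λ²)/(2c) drives a primal descent / dual ascent flow whose equilibria are KKT points, integrated here by forward Euler.

```python
# Hypothetical sketch (not the paper's exact model): a Lagrange-type network
# for  min f(x)  s.t.  g(x) <= 0  using the augmented Lagrangian
#   L_c(x, lam) = f(x) + (max(0, lam + c*g(x))**2 - lam**2) / (2c),
# with dynamics
#   dx/dt   = -dL_c/dx   = -(f'(x) + max(0, lam + c*g(x)) * g'(x))
#   dlam/dt =  dL_c/dlam = (max(0, lam + c*g(x)) - lam) / c

def fp(x):
    # f(x) = x^4 - 3x^2 + x is nonconvex (two local minima); this is f'(x)
    return 4 * x**3 - 6 * x + 1

def g(x):
    # constraint g(x) = 1.5 - x <= 0, i.e. x >= 1.5 (active at the optimum)
    return 1.5 - x

gp = -1.0               # g'(x), constant for this linear constraint

c, dt = 2.0, 0.01       # penalty (control) parameter and Euler step size
x, lam = 2.0, 0.0       # initial network state and multiplier
for _ in range(40000):  # simulate t in [0, 400]
    mu = max(0.0, lam + c * g(x))     # shifted multiplier estimate
    x += dt * (-(fp(x) + mu * gp))    # primal (state) dynamics
    lam += dt * (mu - lam) / c        # dual (multiplier) dynamics

# The trajectory should settle near the KKT point x* = 1.5, lam* = f'(1.5) = 5.5
print(round(x, 3), round(lam, 3))
```

At an equilibrium the updates vanish, which forces f'(x) = μ = λ and g(x) = 0 (for λ > 0), i.e. exactly the KKT conditions of the constrained problem; the penalty term c plays the role of the control parameter that locally convexifies the Lagrangian.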



Editor information

Derong Liu, Shumin Fei, Zengguang Hou, Huaguang Zhang, Changyin Sun


Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Hu, X., Wang, J. (2007). Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4493. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72395-0_25


  • DOI: https://doi.org/10.1007/978-3-540-72395-0_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72394-3

  • Online ISBN: 978-3-540-72395-0

  • eBook Packages: Computer Science, Computer Science (R0)
