Abstract
In this paper, a recurrent neural network based on an augmented Lagrangian function is proposed for seeking local minima of nonconvex optimization problems with inequality constraints. First, each equilibrium point of the neural network corresponds to a Karush-Kuhn-Tucker (KKT) point of the problem. Second, by appropriately choosing a control parameter, the neural network is made asymptotically stable at those local minima that satisfy some mild conditions. The latter property is ensured by the convexification capability of the augmented Lagrangian function. The proposed scheme is inspired by several existing neural networks in the literature and can be regarded as an extension or improvement of them. A simulation example is discussed to illustrate the results.
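To make the idea concrete, below is a minimal Python sketch of one common gradient-flow realization of an augmented-Lagrangian network for min f(x) subject to g(x) <= 0. The augmented Lagrangian form (the standard Bertsekas/Rockafellar one for inequality constraints), the dynamics, and all names here are illustrative assumptions, not the exact network analyzed in the paper.

import numpy as np

def simulate(f_grad, g, g_jac, x0, lam0, c=10.0, dt=1e-3, steps=20000):
    # Forward-Euler integration of the assumed dynamics
    #   dx/dt   = -( grad f(x) + J_g(x)^T * max(0, lam + c*g(x)) )
    #   dlam/dt =  max(0, lam + c*g(x)) - lam
    # i.e., descent in x and (scaled) ascent in lam on the assumed
    # augmented Lagrangian
    #   L_c(x, lam) = f(x)
    #       + (1/(2c)) * sum_j( max(0, lam_j + c*g_j(x))**2 - lam_j**2 ).
    x = np.asarray(x0, dtype=float)
    lam = np.asarray(lam0, dtype=float)
    for _ in range(steps):
        mult = np.maximum(0.0, lam + c * g(x))   # shifted multiplier estimate
        dx = -(f_grad(x) + g_jac(x).T @ mult)    # descent on the primal variables
        dlam = mult - lam                        # ascent on the multipliers
        x = x + dt * dx
        lam = lam + dt * dlam
    return x, lam

# Toy nonconvex problem:  minimize x1^2 - x2^2  subject to  x2^2 - 1 <= 0;
# its local minima (0, +/-1) satisfy the KKT conditions with multiplier 1.
x_star, lam_star = simulate(
    f_grad=lambda x: np.array([2.0 * x[0], -2.0 * x[1]]),
    g=lambda x: np.array([x[1] ** 2 - 1.0]),
    g_jac=lambda x: np.array([[0.0, 2.0 * x[1]]]),
    x0=[0.5, 0.8],
    lam0=[0.0],
)
print(x_star, lam_star)   # expected to approach (0, 1) with multiplier near 1

At an equilibrium of these dynamics the KKT conditions hold, and with c large enough the max(0, .) term locally convexifies the Lagrangian around a regular local minimum, which is the convexification mechanism the abstract refers to.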
Copyright information
© 2007 Springer Berlin Heidelberg
About this paper
Cite this paper
Hu, X., Wang, J. (2007). Convergence of a Recurrent Neural Network for Nonconvex Optimization Based on an Augmented Lagrangian Function. In: Liu, D., Fei, S., Hou, Z., Zhang, H., Sun, C. (eds) Advances in Neural Networks – ISNN 2007. ISNN 2007. Lecture Notes in Computer Science, vol 4493. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72395-0_25
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72394-3
Online ISBN: 978-3-540-72395-0