Abstract
In recent years, several k-winners-take-all (kWTA) neural networks have been developed based on a quadratic programming formulation. In particular, a continuous-time kWTA network with a single state variable and its discrete-time counterpart were developed recently. These kWTA networks have proven global-convergence properties and simple architectures. Starting with problem formulations, this paper reviews related existing kWTA networks and extends the existing kWTA networks with piecewise-linear activation functions to ones with high-gain activation functions. The paper then presents experimental results for the continuous-time and discrete-time kWTA networks with infinity-gain activation functions. The results show that the kWTA networks are parametrically robust and scalable in terms of problem size and convergence rate.
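The idea behind a single-state-variable kWTA network with an infinity-gain (hard-limiting) activation can be sketched as follows. Each output is x_i = 1 if the input u_i exceeds a shared threshold y and 0 otherwise, and the single state y is adjusted until exactly k outputs are active. This is a minimal illustrative sketch of that general mechanism, not the authors' exact model; the function name `kwta`, the starting point, and the step size `eta` are assumptions chosen for the example.

```python
import numpy as np

def kwta(u, k, eta=0.01, max_iter=10000):
    """Sketch of a discrete-time, single-state-variable kWTA iteration.

    With an infinity-gain activation, x_i = 1 if u_i > y else 0.
    The single state variable y is driven by the constraint that
    exactly k outputs should win:  y <- y + eta * (sum_i x_i - k).
    """
    u = np.asarray(u, dtype=float)
    y = u.mean()                      # illustrative initial threshold
    x = (u > y).astype(float)
    for _ in range(max_iter):
        x = (u > y).astype(float)     # hard-limiting activation
        err = x.sum() - k             # surplus/deficit of winners
        if err == 0:
            break
        y += eta * err                # raise y if too many winners
    return x, y

# Select the 2 largest of 5 inputs: winners are 0.9 and 0.7.
x, y = kwta([0.3, 0.9, 0.1, 0.7, 0.5], k=2)
```

In this sketch the iteration stops once the threshold y separates the k-th and (k+1)-th largest inputs; with a small enough step size relative to the gap between them, the hard-limiting update settles rather than oscillates.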
The work described in this paper was supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project no. CUHK417608E).
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Wang, J., Guo, Z. (2010). Parametric Sensitivity and Scalability of k-Winners-Take-All Networks with a Single State Variable and Infinity-Gain Activation Functions. In: Zhang, L., Lu, BL., Kwok, J. (eds) Advances in Neural Networks - ISNN 2010. ISNN 2010. Lecture Notes in Computer Science, vol 6063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13278-0_11
Print ISBN: 978-3-642-13277-3
Online ISBN: 978-3-642-13278-0