Abstract
Today, artificial neural networks are applied in various fields, including engineering, data analysis, and robotics. While they represent a successful tool for a variety of relevant applications, from a mathematical point of view they are still far from conclusive. In particular, the training process can become trapped in a local minimum of the error function and thus fail to find the best possible configuration of weights (the local minimum problem). In this paper, we focus on this issue and suggest a simple but effective post-learning strategy that allows the search for an improved set of weights at a relatively small extra computational cost. Specifically, we introduce a novel technique, based on an analogy with quantum effects occurring in nature, as a way to mitigate (and sometimes overcome) this problem. Several numerical experiments are presented to validate the approach.
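To make the idea concrete, the following is a minimal Python sketch of a post-learning randomization loop of the kind the abstract describes: after conventional training, the trained weights are repeatedly perturbed with random noise and a perturbed configuration is kept only if it lowers the loss. The function and parameter names (post_learning_randomization, sigma, n_trials) and the greedy acceptance rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def post_learning_randomization(weights, loss, sigma=0.01, n_trials=1000, rng=None):
    """Sketch of a post-learning strategy: perturb trained weights with
    random 'kicks' and accept a candidate only if it improves the loss,
    allowing escapes from the local minimum reached by training."""
    rng = np.random.default_rng() if rng is None else rng
    best_w = np.asarray(weights, dtype=float)
    best_loss = loss(best_w)
    for _ in range(n_trials):
        # Random perturbation around the current best configuration.
        candidate = best_w + rng.normal(0.0, sigma, size=best_w.shape)
        candidate_loss = loss(candidate)
        if candidate_loss < best_loss:  # greedy acceptance of improvements
            best_w, best_loss = candidate, candidate_loss
    return best_w, best_loss
```

Because each trial only requires one extra evaluation of the loss, the additional computational cost stays small relative to retraining from scratch.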
Notes
The reader should note that in this paper we use the words quantum, random, and noise interchangeably, with the same intended meaning. The same applies to the terms classical and deterministic.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by V. Loia.
This work has been supported by the EC project AComIn (FP7-REGPOT-2012-2013-1) and by the Bulgarian Science Fund under Grant DFNI I02/20.
Cite this article
Kapanova, K.G., Dimov, I. & Sellier, J.M. On randomization of neural networks as a form of post-learning strategy. Soft Comput 21, 2385–2393 (2017). https://doi.org/10.1007/s00500-015-1949-1