
On randomization of neural networks as a form of post-learning strategy

  • Methodologies and Application
  • Published in Soft Computing

Abstract

Today, artificial neural networks are applied in many fields, such as engineering, data analysis, and robotics. While they represent a successful tool for a variety of relevant applications, from a mathematical point of view they are still far from conclusive. In particular, they suffer from the inability of the training process to guarantee the best possible configuration of weights, since the search may become trapped in a local minimum of the error surface (the local minimum problem). In this paper, we focus on this issue and suggest a simple but effective post-learning strategy that allows the search for an improved set of weights at a relatively small extra computational cost. To this end, we introduce a novel technique based on an analogy with quantum effects occurring in nature as a way to mitigate (and sometimes overcome) this problem. Several numerical experiments are presented to validate the approach.
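The abstract describes the strategy only at a high level. As a rough illustration of what a noise-based post-learning step could look like, here is a minimal sketch, assuming the step amounts to perturbing the weights of an already-trained network with zero-mean Gaussian noise and keeping a perturbed configuration only when it lowers the error; the names (post_learning_search, forward), the Gaussian noise model, and the greedy acceptance rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def mse(weights, forward, X, y):
    """Mean squared error of the network's predictions for weight vector `weights`."""
    return np.mean((forward(weights, X) - y) ** 2)

def post_learning_search(weights, forward, X, y, sigma=0.01, trials=1000, rng=None):
    """Hypothetical noise-based post-learning step (illustrative assumption).

    Starting from the weights found by ordinary training, repeatedly add
    zero-mean Gaussian noise and keep a perturbed configuration only if it
    lowers the error, i.e. a cheap random search around the local minimum.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_w = np.asarray(weights, dtype=float).copy()
    best_err = mse(best_w, forward, X, y)
    for _ in range(trials):
        candidate = best_w + rng.normal(0.0, sigma, size=best_w.shape)
        err = mse(candidate, forward, X, y)
        if err < best_err:  # greedy acceptance: keep only improvements
            best_w, best_err = candidate, err
    return best_w, best_err
```

Here forward(weights, X) stands for the trained network's forward pass parameterized by a flat weight vector; in practice the error would typically be measured on a held-out validation set, so that accepted perturbations are genuine improvements rather than overfitting to the training data.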


Notes

  1. The reader should note that in this paper we use the words quantum, random, and noise interchangeably, with the same meaning in mind. The same applies to the terms classical and deterministic.


Author information

Corresponding author

Correspondence to K. G. Kapanova.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by V. Loia.

This work has been supported by the EC project AComIn (FP7-REGPOT-2012-2013-1) and by the Bulgarian Science Fund under Grant DFNI I02/20.

About this article


Cite this article

Kapanova, K.G., Dimov, I. & Sellier, J.M. On randomization of neural networks as a form of post-learning strategy. Soft Comput 21, 2385–2393 (2017). https://doi.org/10.1007/s00500-015-1949-1

