Modelling and Evaluating Restricted ESNs

  • Conference paper
  • Unconventional Computation and Natural Computation (UCNC 2023)

Abstract

We investigate various methods of combining Echo State Networks (ESNs), including a method that we dub restricted ESNs. We provide a notation for describing restricted ESNs, and use it to benchmark a standard ESN against restricted ones. We investigate two methods of keeping the weight matrix density consistent when comparing a restricted ESN to a standard one, which we call “overall consistency” and “patch consistency”. We benchmark restricted ESNs on NARMA10 and the sunspot prediction benchmark, and find that restricted ESNs perform similarly to standard ones. We present some application scenarios in which restricted ESNs may offer advantages over standard ESNs.

Notes

  1. https://machinelearningmastery.com/time-series-datasets-for-machine-learning/.

  2. The grid search is modified: the range of f is split into 10 to give the initial step; the range between the optimal value found and its neighbour is then split into 10 again for the secondary step (sketched below).
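
A minimal sketch of this two-stage search, assuming a hypothetical evaluate(f) that builds and trains a restricted ESN with ratio f and returns its benchmark error; the note does not say which neighbour brackets the finer pass, so taking the lower-error one is our assumption:

```python
import numpy as np

def two_stage_grid_search(evaluate, f_lo, f_hi):
    """Coarse-then-fine grid search over f.

    evaluate(f) is assumed (hypothetical) to return the benchmark
    error, lower is better, of a restricted ESN built with ratio f.
    """
    # Initial step: split the range of f into 10 grid points.
    coarse = np.linspace(f_lo, f_hi, 10)
    errors = [evaluate(f) for f in coarse]
    best = int(np.argmin(errors))

    # Bracket the optimum between the best point and a neighbour
    # (assumption: the lower-error neighbour; endpoints have only one).
    if best == 0:
        neighbour = 1
    elif best == len(coarse) - 1:
        neighbour = best - 1
    else:
        neighbour = best - 1 if errors[best - 1] < errors[best + 1] else best + 1
    lo, hi = sorted((coarse[best], coarse[neighbour]))

    # Secondary step: split the bracketing interval into 10 points.
    fine = np.linspace(lo, hi, 10)
    fine_errors = [evaluate(f) for f in fine]
    return fine[int(np.argmin(fine_errors))]
```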

Acknowledgement

This work was made possible by PhD studentship funding from the Computer Science Department of the University of York.

Author information

Corresponding author: Susan Stepney.

Appendices

A Calculating \(D_W\) for the Overall-Consistent Case

Given an ESN with N nodes and an average density \(0 \le D \le 1\), we wish to restrict that ESN to have n subreservoirs of equal size; we assume n divides N. We set the density within the subreservoirs, \(D_W\), to be greater than the density outside the subreservoirs by a factor of f, that is, \(D_W = f D_B\).

In a restricted ESN with n subreservoirs, each of size N/n, the edge matrix \({\textbf {W}}\) has n diagonal regions of size \(({N}/{n})^2\) with density \(D_W\) (the subreservoirs), and a further \(n^2-n\) off-diagonal regions, also of size \(({N}/{n})^2\), with density \(D_B\).

Hence the average density D of such a restricted ESN is:

$$\begin{aligned} D = \frac{n D_W + (n^2 - n) D_B}{n^2} \end{aligned}$$
(6)

Substituting \(D_W = f D_B\), and rearranging to get an expression for \(D_B\) in terms of D, we get:

$$\begin{aligned} D_B = \frac{Dn}{f + n - 1} \end{aligned}$$
(7)

Once \(D_B\) is known, we also have \(D_W\) from \(D_W = f D_B\).
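
To make the construction concrete, here is a minimal numpy sketch that applies Eqs. 6 and 7: it derives \(D_B\) and \(D_W\) from the target average density D and the ratio f, then fills each block of \({\textbf {W}}\) probabilistically at the appropriate density. The uniform weight distribution on \([-1, 1]\) and the function name are illustrative assumptions, not the paper's exact generation procedure.

```python
import numpy as np

def restricted_weight_matrix(N, n, D, f, seed=0):
    """N x N weight matrix with n equal subreservoirs: the n diagonal
    blocks have density D_W = f * D_B, the off-diagonal blocks have
    density D_B, and the average density is D (Eqs. 6 and 7)."""
    assert N % n == 0, "n must divide N"
    rng = np.random.default_rng(seed)
    D_B = D * n / (f + n - 1)          # Eq. 7
    D_W = f * D_B                      # by definition of f
    s = N // n                         # subreservoir size
    W = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            density = D_W if i == j else D_B
            mask = rng.random((s, s)) < density   # probabilistic density
            W[i*s:(i+1)*s, j*s:(j+1)*s] = rng.uniform(-1, 1, (s, s)) * mask
    return W

W = restricted_weight_matrix(N=100, n=4, D=0.1, f=2.0)
print(np.count_nonzero(W) / W.size)    # close to 0.1 on average
```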

B Optimising f

In order to find the best possible restricted ESN within our constraints, we optimise over the parameter f. First, however, we must bound the search space.

In the restricted ESN, we want \(D_B\) to be strictly less than \(D_W\) (the connections between subreservoirs are sparser than the subreservoirs themselves); therefore, \(f > 1\).

To find an upper bound, we assume that every subreservoir is connected to every other subreservoir, that is, every connection weight matrix \({\textbf {B}}_{i,j}\) has at least one entry. This requires \(D_B \ge (n/N)^2\). (In the experiments, the weight matrices are generated probabilistically, so close to this density limit some pairs of subreservoirs may in fact have no connecting edge.)

Rearranging Eq. 7 gives:

$$\begin{aligned} f = \frac{D n}{D_B} - n + 1 \end{aligned}$$
(8)

The lower limit on \(D_B\) gives an upper limit on f:

$$\begin{aligned} f \le \frac{N^2 D}{n} - n + 1 \end{aligned}$$
(9)

We also have an upper limit on the derived density, \(D_W \le 1\) (equality implies there are no zero elements in the relevant weight matrix). Substituting Eq. 7 into \(D_W = f D_B\) gives:

$$\begin{aligned} \frac{fDn}{f + n - 1} = D_W \le 1 \end{aligned}$$
(10)

Rearranging gives another upper limit on f (this step assumes \(Dn > 1\); when \(Dn \le 1\), \(D_W = fDn/(f+n-1) < Dn \le 1\) for all f, so this constraint never binds):

$$\begin{aligned} f \le \frac{n-1}{Dn-1} \end{aligned}$$
(11)

Hence we have the upper and lower bounds on f:

$$\begin{aligned} 1 < f \le \min \left( \frac{N^2 D}{n} - n + 1, \frac{n-1}{Dn-1} \right) \end{aligned}$$
(12)
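
As a worked check of these bounds, a small helper evaluating Eq. 12 (the \(Dn > 1\) guard implements the caveat noted after Eq. 11; the function name is ours):

```python
def f_upper_bound(N, n, D):
    """Upper bound on f from Eq. 12; the lower bound is always f > 1."""
    # Eq. 9: every between-subreservoir block can hold at least one edge.
    connectivity_bound = N**2 * D / n - n + 1
    # Eq. 11: D_W <= 1, which only binds when D * n > 1.
    if D * n > 1:
        return min(connectivity_bound, (n - 1) / (D * n - 1))
    return connectivity_bound

print(f_upper_bound(N=100, n=4, D=0.1))   # 247.0
```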

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Wringe, C., Stepney, S., Trefzer, M.A. (2023). Modelling and Evaluating Restricted ESNs. In: Genova, D., Kari, J. (eds) Unconventional Computation and Natural Computation. UCNC 2023. Lecture Notes in Computer Science, vol 14003. Springer, Cham. https://doi.org/10.1007/978-3-031-34034-5_13

  • DOI: https://doi.org/10.1007/978-3-031-34034-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-34033-8

  • Online ISBN: 978-3-031-34034-5
