Abstract
We refine the analysis of binary-state neural networks with \(\alpha \) extra analog neurons (\(\alpha \)ANNs). For rational weights, it is known that online 1ANNs accept context-sensitive languages, including examples of non-context-free languages, while offline 3ANNs are Turing complete. We now prove that the deterministic context-free language \(L = \{0^n 1^n \mid n \geq 1\}\), containing the words of n zeros followed by n ones, cannot be recognized offline by any 1ANN with real weights. Hence, offline 1ANNs are not Turing complete. On the other hand, we show that any deterministic language can be accepted by a 2ANN with rational weights. Thus, two extra analog units can count to any number, which is not the case for one analog neuron.
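To illustrate the counting distinction at the heart of the result, the language \(\{0^n 1^n \mid n \geq 1\}\) can be decided with a single unbounded integer counter, which is exactly the resource that, by the theorem above, one extra analog neuron cannot provide offline. The following is a minimal sketch of such a one-counter recognizer; the function name `accepts` is ours and does not appear in the paper:

```python
def accepts(word: str) -> bool:
    """Decide membership in {0^n 1^n : n >= 1} with one counter."""
    count = 0
    seen_one = False
    for symbol in word:
        if symbol == "0":
            if seen_one:          # a 0 after a 1 breaks the 0^n 1^n shape
                return False
            count += 1            # count the leading zeros
        elif symbol == "1":
            seen_one = True
            count -= 1            # match each 1 against a counted 0
            if count < 0:         # more ones than zeros so far
                return False
        else:
            return False          # alphabet is {0, 1}
    # accept iff at least one 1 was read and every 0 was matched
    return seen_one and count == 0
```

The counter never needs to be compared to anything but zero, so the language is recognized by a deterministic one-counter automaton; the paper's 2ANN construction shows that two analog units with rational weights suffice to realize such counting.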
This research was carried out with institutional support RVO: 67985807 and partially supported by Czech Science Foundation grant No. 19-05704S.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Šíma, J. (2019). Counting with Analog Neurons. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation. Lecture Notes in Computer Science, vol. 11727. Springer, Cham. https://doi.org/10.1007/978-3-030-30487-4_31
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30486-7
Online ISBN: 978-3-030-30487-4
eBook Packages: Computer Science (R0)