Abstract
Recurrent neural networks are powerful learning machines for processing sequences. A recent extension of these machines can also process general data structures such as trees and graphs, opening the door to a number of interesting, previously unexplored applications.
In this paper, we show that when learning is restricted to purely symbolic data structures, such as trees, the continuous representation developed during learning can also be given a symbolic interpretation. In particular, we show that a proper quantization of the neuron activation trajectory makes it possible to induce tree automata. We present preliminary experiments on small-size problems that are nonetheless very promising, especially considering that the methodology is very robust to accidental or malicious corruption of the learning set.
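As a rough illustration of the extraction idea sketched above, the following is a minimal Python sketch, not the authors' implementation: it folds a toy recursive network bottom-up over labelled trees, quantizes the resulting state vectors with plain k-means, and reads off a transition table in the style of a bottom-up tree automaton. The toy weights, tree encoding, number of clusters, and all names (`fold`, `quantize`, `state_of`) are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code): inducing a tree-automaton
# transition table by quantizing the activation space of a recursive
# network. The "trained" network here is random and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, N_LABELS, ARITY = 4, 2, 2
W_label = rng.normal(size=(STATE_DIM, N_LABELS))
W_child = rng.normal(size=(STATE_DIM, ARITY * STATE_DIM))

def fold(tree):
    """Bottom-up pass; returns (root state, list of (label, child_states, state))."""
    label, children = tree
    child_states, trace = [], []
    for c in children:
        s, t = fold(c)
        child_states.append(s)
        trace.extend(t)
    # Missing children are padded with a frontier (zero) state.
    padded = child_states + [np.zeros(STATE_DIM)] * (ARITY - len(child_states))
    x = np.zeros(N_LABELS)
    x[label] = 1.0
    state = np.tanh(W_label @ x + W_child @ np.concatenate(padded))
    trace.append((label, padded, state))
    return state, trace

def quantize(vectors, k, iters=50):
    """Plain k-means; each centroid index plays the role of one automaton state."""
    V = np.array(vectors)
    centroids = V[rng.choice(len(V), k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((V[:, None] - centroids) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (assign == j).any():
                centroids[j] = V[assign == j].mean(axis=0)
    return centroids

# Trees as (label, [children]); collect activations from a few samples.
trees = [(0, [(1, []), (0, [])]),
         (1, [(0, [(1, []), (1, [])]), (1, [])])]
trace = [t for tree in trees for t in fold(tree)[1]]

centroids = quantize([s for (_, _, s) in trace], k=3)
state_of = lambda v: int(np.argmin(((centroids - v) ** 2).sum(-1)))

# Induce transitions: (label, quantized child states) -> quantized state.
# The zero frontier state is mapped to whichever cluster lies nearest it.
delta = {}
for label, children, state in trace:
    delta[(label, tuple(state_of(c) for c in children))] = state_of(state)
print(delta)
```

With an actually trained network one would also check the induced table for determinism and validate it against held-out trees; here the table is only meant to show the mechanics of the quantization step.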
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Frasconi, P., Gori, M., Maggini, M., Martinelli, E., Soda, G. (1997). Inductive inference of tree automata by recursive neural networks. In: Lenzerini, M. (ed.) AI*IA 97: Advances in Artificial Intelligence. AI*IA 1997. Lecture Notes in Computer Science, vol 1321. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63576-9_94
DOI: https://doi.org/10.1007/3-540-63576-9_94
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-63576-5
Online ISBN: 978-3-540-69601-8