
Extracting Finite Structure from Infinite Language

Conference paper in: Research and Development in Intelligent Systems XXI (SGAI 2004)

Abstract

This paper presents a novel connectionist memory-rule based model capable of learning the finite-state properties of an input language from a set of positive examples. The model is based upon an unsupervised recurrent self-organizing map [1] with laterally interconnected neurons. A derivation of functional-equivalence theory [2] is used that allows the model to exploit similarities between the future context of previously memorized sequences and the future context of the current input sequence. This bottom-up learning algorithm binds functionally-related neurons together to form states. Results show that the model is able to learn the Reber grammar [3] perfectly from a randomly generated training set and to generalize to sequences beyond the length of those found in the training set.
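For concreteness, the Reber grammar cited above is a small finite-state language that serves as a standard benchmark for grammar-induction models. The Python sketch below is illustrative only and not taken from the paper: it generates positive examples from the usual Reber state machine and checks candidate strings against it. The transition table follows the standard published diagram; the paper's own training-set construction may differ in detail.

import random

# The standard Reber grammar as a finite-state machine.
# Each state maps an emitted symbol to its successor state;
# every sentence starts with 'B' and ends with 'E'.
REBER = {
    0: {'B': 1},
    1: {'T': 2, 'P': 3},
    2: {'S': 2, 'X': 4},
    3: {'T': 3, 'V': 5},
    4: {'X': 3, 'S': 6},
    5: {'P': 4, 'V': 6},
    6: {'E': 7},          # state 7 is the accepting state
}

def generate(rng=random):
    """Random walk through the machine; returns one positive example."""
    state, out = 0, []
    while state != 7:
        symbol, state = rng.choice(sorted(REBER[state].items()))
        out.append(symbol)
    return ''.join(out)

def accepts(string):
    """True iff the string is a sentence of the Reber grammar."""
    state = 0
    for symbol in string:
        if symbol not in REBER.get(state, {}):
            return False
        state = REBER[state][symbol]
    return state == 7

if __name__ == '__main__':
    examples = {generate() for _ in range(50)}
    print(sorted(examples, key=len)[:5])   # e.g. ['BPVVE', 'BTXSE', ...]
    assert all(accepts(s) for s in examples)
    assert not accepts('BTSE')             # 'E' is illegal from the S-loop state

The "future context" idea in the abstract corresponds to Nerode-style equivalence: two prefixes (and hence the neurons that memorize them) belong to the same state exactly when they admit the same set of legal continuations, which is what makes a finite-state description recoverable from positive examples alone.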


References

  1. McQueen T, Hopgood AA, Tepper JA, Allen TJ. A Recurrent Self-Organizing Map for Temporal Sequence Processing. In: Proceedings of the 4th International Conference on Recent Advances in Soft Computing (RASC2002), Nottingham, 2002

  2. Hopcroft JE, Ullman JD. Introduction to Automata Theory, Languages and Computation, vol 1. Addison-Wesley, 1979

  3. Cleeremans A, Servan-Schreiber D, McClelland J. Finite State Automata and Simple Recurrent Networks. Neural Computation 1989; 1:372–381

  4. Collier R. An historical overview of natural language processing systems that learn. Artificial Intelligence Review 1994; 8(1)

  5. Chomsky N. Aspects of the Theory of Syntax. MIT Press, 1965

  6. Gold EM. Language Identification in the Limit. Information and Control 1967; 10:447–474

  7. Horning JJ. A Study of Grammatical Inference. PhD thesis, Stanford University, California, 1969

  8. Elman JL. Finding Structure in Time. Cognitive Science 1990; 14:179–211

  9. Omlin C. Understanding and Explaining DRN Behaviour. In: Kolen J, Kremer S (eds) A Field Guide to Dynamical Recurrent Networks. IEEE Press, New York, 2001, pp 207–227

  10. Kolen J. Fool’s Gold: Extracting Finite State Machines From Recurrent Network Dynamics. In: Cowan J, Tesauro G, Alspector J (eds) Advances in Neural Information Processing Systems 6. Morgan Kaufmann, San Francisco CA, 1994, pp 501–508

  11. Kohonen T. Self-Organizing Maps, vol 1. Springer-Verlag, Germany, 1995

  12. Barreto G, Araújo A. Time in Self-Organizing Maps: An Overview of Models. International Journal of Computer Research: Special Edition on Neural Networks: Past, Present and Future 2001; 10(2):139–179

  13. Pinker S. Words and Rules. Phoenix, London, 2000

  14. Marcus GF. Children’s Overregularization and Its Implications for Cognition. In: Broeder P, Murre J (eds) Models of Language Acquisition: Inductive and Deductive Approaches. Oxford University Press, Oxford, 2000, pp 154–176

  15. Cohen NJ, Squire LR. Preserved learning and retention of pattern-analyzing skill in amnesia: Dissociation of knowing how and knowing that. Science 1980; 210:207–210

  16. Voegtlin T. Recursive Self-Organizing Maps. Neural Networks 2002; 15(8–9):979–991

  17. Hopgood AA. Intelligent Systems for Engineers and Scientists, 2nd edition. CRC Press, Florida, 2001, pp 195–222

  18. Sharkey N, Sharkey A, Jackson S. Are SRNs Sufficient for Modelling Language Acquisition? In: Broeder P, Murre J (eds) Models of Language Acquisition: Inductive and Deductive Approaches. Oxford University Press, Oxford, 2000, pp 33–54

  19. Lawrence S, Giles C, Fong S. Natural Language Grammatical Inference with Recurrent Neural Networks. IEEE Transactions on Knowledge and Data Engineering 2000; 12(1):126–140



Copyright information

© 2005 Springer-Verlag London Limited

About this paper

Cite this paper

McQueen, T., Hopgood, A.A., Allen, T.J., Tepper, J.A. (2005). Extracting Finite Structure from Infinite Language. In: Bramer, M., Coenen, F., Allen, T. (eds) Research and Development in Intelligent Systems XXI. SGAI 2004. Springer, London. https://doi.org/10.1007/1-84628-102-4_1


  • DOI: https://doi.org/10.1007/1-84628-102-4_1

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-85233-907-4

  • Online ISBN: 978-1-84628-102-0

  • eBook Packages: Computer Science, Computer Science (R0)
