On the Role of Update Constraints and Text-Types in Iterative Learning

  • Conference paper
Algorithmic Learning Theory (ALT 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8776)

Abstract

The present work investigates the relationship between iterative learning and other learning criteria such as decisiveness, caution, reliability, non-U-shapedness, monotonicity, strong monotonicity and conservativeness. Building on the result of Case and Moelius that iterative learners can be made non-U-shaped, we show that they can also be made cautious and decisive. Furthermore, we obtain various special results with respect to one-one texts, fat texts and one-one hypothesis spaces.
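
As context for the abstract, here is a minimal, illustrative sketch of the iterative-learning protocol it refers to: an iterative learner computes its next conjecture from its current conjecture and the newest text element alone, never from the full history of data. The toy learner and all names below are assumptions for illustration, not constructions from the paper.

```python
# Illustrative sketch of iterative learning from positive data.
# All names here are hypothetical; this is not the paper's construction.
from typing import Callable, Iterable, List, Optional

Hypothesis = Optional[frozenset]   # toy hypothesis: a finite set of natural numbers
Datum = Optional[int]              # a text element; None stands for the pause symbol '#'
Learner = Callable[[Hypothesis, Datum], Hypothesis]

def toy_iterative_learner(h: Hypothesis, x: Datum) -> Hypothesis:
    """Conjecture the set of all elements seen so far (succeeds only on finite languages)."""
    if x is None:                  # pause symbol: keep the current conjecture
        return h
    return (h or frozenset()) | {x}

def run_on_text(learner: Learner, text: Iterable[Datum]) -> List[Hypothesis]:
    """Feed a text to an iterative learner and record the conjecture sequence."""
    h: Hypothesis = None
    conjectures = []
    for x in text:
        h = learner(h, x)          # the update depends only on (h, x)
        conjectures.append(h)
    return conjectures

# A fat text for {1, 2, 3} would repeat every element infinitely often;
# here we feed a finite prefix with repetitions and a pause.
print(run_on_text(toy_iterative_learner, [1, None, 2, 1, 3, 2, 3])[-1])  # frozenset({1, 2, 3})
```

Update constraints such as cautiousness or decisiveness then restrict how this sequence of conjectures may evolve; a decisive learner, for instance, never returns to a semantically abandoned conjecture.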

References

  1. Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45, 117–135 (1980)

  2. Blum, L., Blum, M.: Toward a mathematical theory of inductive inference. Information and Control 28, 125–155 (1975)

  3. Baliga, G., Case, J., Merkle, W., Stephan, F., Wiehagen, R.: When unlearning helps. Information and Computation 206, 694–709 (2008)

  4. Case, J.: Periodicity in generations of automata. Mathematical Systems Theory 8, 15–32 (1974)

  5. Case, J.: Infinitary self-reference in learning theory. Journal of Experimental and Theoretical Artificial Intelligence 6, 3–16 (1994)

  6. Case, J.: The power of vacillation in language learning. SIAM Journal on Computing 28, 1941–1969 (1999)

  7. Case, J., Kötzing, T.: Strongly non-U-shaped learning results by general techniques. In: Proceedings of COLT (Conference on Computational Learning Theory), pp. 181–193 (2010)

  8. Case, J., Lynes, C.: Machine inductive inference and language identification. In: Proceedings of ICALP (International Colloquium on Automata, Languages and Programming), pp. 107–115 (1982)

  9. Case, J., Moelius III, S.E.: Optimal language learning. In: Freund, Y., Györfi, L., Turán, G., Zeugmann, T. (eds.) ALT 2008. LNCS (LNAI), vol. 5254, pp. 419–433. Springer, Heidelberg (2008)

  10. Case, J., Moelius, S.E.: U-shaped, iterative, and iterative-with-counter learning. Machine Learning 72, 63–88 (2008)

  11. Fulk, M.: Prudence and other conditions on formal language learning. Information and Computation 85, 1–11 (1990)

  12. Gold, E.M.: Language identification in the limit. Information and Control 10, 447–474 (1967)

  13. Grieser, G., Lange, S.: Incremental learning of approximations from positive data. Information Processing Letters 89, 37–42 (2004)

  14. Jantke, K.-P.: Monotonic and non-monotonic inductive inference of functions and patterns. In: Dix, J., Schmitt, P.H., Jantke, K.P. (eds.) NIL 1990. LNCS, vol. 543, pp. 161–177. Springer, Heidelberg (1991)

  15. Jain, S., Moelius, S.E., Zilles, S.: Learning without coding. Theoretical Computer Science 473, 124–148 (2013)

  16. Jain, S., Osherson, D., Royer, J., Sharma, A.: Systems that Learn: An Introduction to Learning Theory, 2nd edn. MIT Press, Cambridge (1999)

  17. Kötzing, T.: Abstraction and Complexity in Computational Learning in the Limit. PhD thesis, University of Delaware (2009), http://pqdtopen.proquest.com/#viewpdf?dispub=3373055

  18. Kötzing, T.: A Solution to Wiehagen’s Thesis. In: Symposium on Theoretical Aspects of Computer Science (STACS 2014), pp. 494–505 (2014)

  19. Lange, S., Grieser, G.: On the power of incremental learning. Theoretical Computer Science 288, 277–307 (2002)

  20. Lange, S., Grieser, G.: Variants of iterative learning. Theoretical Computer Science 292, 359–376 (2003)

  21. Lange, S., Zeugmann, T.: Monotonic versus non-monotonic language learning. In: Brewka, G., Jantke, K.P., Schmitt, P.H. (eds.) NIL 1991. LNCS, vol. 659, pp. 254–269. Springer, Heidelberg (1993)

  22. Lange, S., Zeugmann, T.: Incremental learning from positive data. Journal of Computer and System Sciences 53, 88–103 (1996)

  23. Lange, S., Zeugmann, T., Zilles, S.: Learning indexed families of recursive languages from positive data: a survey. Theoretical Computer Science 397, 194–232 (2008)

  24. Osherson, D., Stob, M., Weinstein, S.: Learning strategies. Information and Control 53, 32–51 (1982)

  25. Osherson, D., Stob, M., Weinstein, S.: Systems that Learn: An Introduction to Learning Theory for Cognitive and Computer Scientists. MIT Press, Cambridge (1986)

  26. Osherson, D., Weinstein, S.: Criteria of language learning. Information and Control 52, 123–138 (1982)

  27. Royer, J., Case, J.: Subrecursive Programming Systems: Complexity and Succinctness. Research monograph in Progress in Theoretical Computer Science. Birkhäuser, Basel (1994)

  28. Rogers, H.: Theory of Recursive Functions and Effective Computability. McGraw Hill, New York (1967); Reprinted by MIT Press, Cambridge (1987)

  29. Wiehagen, R.: A thesis in inductive inference. In: Dix, J., Schmitt, P.H., Jantke, K.P. (eds.) NIL 1990. LNCS, vol. 543, pp. 184–207. Springer, Heidelberg (1991)

Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Jain, S., Kötzing, T., Ma, J., Stephan, F. (2014). On the Role of Update Constraints and Text-Types in Iterative Learning. In: Auer, P., Clark, A., Zeugmann, T., Zilles, S. (eds) Algorithmic Learning Theory. ALT 2014. Lecture Notes in Computer Science (LNAI), vol. 8776. Springer, Cham. https://doi.org/10.1007/978-3-319-11662-4_5

  • DOI: https://doi.org/10.1007/978-3-319-11662-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11661-7

  • Online ISBN: 978-3-319-11662-4

  • eBook Packages: Computer Science, Computer Science (R0)
