Abstract
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty, and interpret ‘desirable’ as a reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist in which the more mind changes the learner is willing to accept, the less data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
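To make the mind-change notion concrete, the following is a minimal sketch of a Gold-style learner in the identification-in-the-limit setting. The class of languages (L_n = {0, …, n}) and the max-based learner are illustrative assumptions, not the paper's formal machinery: the learner stores only its current hypothesis, and each revision after the first conjecture counts as one mind change.

```python
# Toy illustration (not the paper's formal model): identifying the
# class L_n = {0, 1, ..., n} from a text (stream of positive examples).
# The learner remembers only its current hypothesis, one integer, and
# every upward revision after the first conjecture is one mind change.

def learn(text):
    """Process a text; return the final hypothesis n (denoting L_n)
    and the number of mind changes made along the way."""
    hypothesis = None
    mind_changes = 0
    for x in text:
        if hypothesis is None or x > hypothesis:
            if hypothesis is not None:
                mind_changes += 1  # revising a previous conjecture
            hypothesis = x
    return hypothesis, mind_changes

# A text for L_3 = {0, 1, 2, 3}: the learner first conjectures L_1,
# then revises to L_3 once.
print(learn([1, 0, 3, 2, 3]))  # → (3, 1)
```

This learner is iterative in the sense discussed in the abstract: it memorizes no past data beyond its current conjecture, at the cost of one mind change per revision.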
References
Ambainis, A., Jain, S., Sharma, A.: Ordinal Mind Change Complexity of Language Identification. In: Ben-David, S. (ed.) EuroCOLT 1997. LNCS, vol. 1208, pp. 301–315. Springer, Heidelberg (1997)
Angluin, D.: Inductive inference of formal languages from positive data. Information and Control 45(2), 117–135 (1980)
Case, J., Jain, S., Sharma, A.: Vacillatory Learning of Nearly Minimal Size Grammars. Journal of Computer and System Sciences 49, 189–207 (1994)
Case, J., Jain, S., Lange, S., Zeugmann, T.: Incremental Concept Learning for Bounded Data Mining. Information and Computation 152(1), 74–110 (1999)
Freivalds, R., Kinber, E.B., Wiehagen, R.: On the Power of Inductive Inference from Good Examples. Theoretical Computer Science 110(1), 131–144 (1993)
Freivalds, R., Kinber, E.B., Smith, C.H.: On the Impact of Forgetting on Learning Machines. In: Pitt, L. (ed.) Proc. of the 6th Annual Conf. on Computational Learning Theory, pp. 165–174. ACM Press, New York (1993)
Freivalds, R., Smith, C.H.: On the role of procrastination for machine learning. Information and Computation 107(2), 237–271 (1993)
Gold, E.M.: Language identification in the limit. Information and Control 10(5), 447–474 (1967)
Jain, S., Lange, S., Nessel, J.: On the learnability of recursively enumerable languages from good examples. Theoretical Computer Science 261(1), 3–29 (2001)
Jain, S., Osherson, D.N., Royer, J.S., Sharma, A.: Systems that learn: An Introduction to Learning Theory, 2nd edn. The MIT Press, Cambridge (1999)
Jain, S., Sharma, A.: On monotonic strategies for learning r.e. languages. In: Arikawa, S., Jantke, K.P. (eds.) AII 1994 and ALT 1994. LNCS (LNAI), vol. 872, pp. 349–364. Springer, Heidelberg (1994)
Jain, S., Sharma, A.: Mind change complexity of learning logic programs. Theoretical Computer Science 284(1), 143–160 (2002)
Kinber, E.B., Stephan, F.: Language learning from texts: mind changes, limited memory and monotonicity. Information and Computation 123(2), 224–241 (1995)
Lange, S., Grieser, G.: On the power of incremental learning. Theoretical Computer Science 288(2), 277–307 (2002)
Lange, S., Nessel, J., Wiehagen, R.: Learning Recursive Languages from Good Examples. Annals of Mathematics and Artificial Intelligence 23(1-2), 27–52 (1998)
Lange, S., Zeugmann, T., Kapur, S.: Monotonic and Dual Monotonic Language Learning. Theoretical Computer Science 155(2), 365–410 (1996)
Martin, E., Sharma, A., Stephan, F.: Logic, Learning, and Topology in a Common Framework. In: Cesa-Bianchi, N., Numao, M., Reischuk, R. (eds.) ALT 2002. LNCS (LNAI), vol. 2533, pp. 248–262. Springer, Heidelberg (2002)
Martin, E., Sharma, A., Stephan, F.: On Ordinal VC-Dimension and Some Notions of Complexity. In: Gavaldà, R., Jantke, K.P., Takimoto, E. (eds.) ALT 2003. LNCS (LNAI), vol. 2842, pp. 54–68. Springer, Heidelberg (2003)
Osherson, D., Stob, M., Weinstein, S.: Systems that learn. MIT Press, Cambridge (1986)
Sharma, A., Stephan, F., Ventsov, Y.: Generalized Notions of Mind Change Complexity. In: Freund, Y., Schapire, R. (eds.) Proc. of the 10th Annual Conf. on Computational Learning Theory, pp. 96–108. ACM Press, New York (1997)
Zeugmann, T., Lange, S., Kapur, S.: Characterizations of Monotonic and Dual Monotonic Language Learning. Information and Computation 120(2), 155–173 (1995)
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Martin, E., Sharma, A., Stephan, F. (2004). On the Data Consumption Benefits of Accepting Increased Uncertainty. In: Ben-David, S., Case, J., Maruoka, A. (eds.) Algorithmic Learning Theory. ALT 2004. Lecture Notes in Computer Science, vol. 3244. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30215-5_8
DOI: https://doi.org/10.1007/978-3-540-30215-5_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-23356-5
Online ISBN: 978-3-540-30215-5