Abstract
Blum and Blum (1975) showed that a class \( \mathcal{B} \) of suitable recursive approximations to the halting problem is reliably EX-learnable. We continue these investigations by showing that \( \mathcal{B} \) is neither in NUM nor robustly EX-learnable. Since the definition of the class \( \mathcal{B} \) is quite natural and does not contain any self-referential coding, \( \mathcal{B} \) serves as an example that the notion of robustness for learning is much more restrictive than intended.
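For orientation only, and not as the paper's actual definition of \( \mathcal{B} \), recursive approximations to the halting problem \( K \) typically arise from time-bounded membership tests: for any total recursive bound \( t \), the 0-1 valued function
\[
  f_t(x) =
  \begin{cases}
    1 & \text{if } \varphi_x(x) \text{ halts within } t(x) \text{ steps,}\\
    0 & \text{otherwise,}
  \end{cases}
\]
is recursive and agrees with the characteristic function of \( K \) on every \( x \notin K \) as well as on every \( x \in K \) whose computation \( \varphi_x(x) \) halts within \( t(x) \) steps.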
Moreover, we study variants of this problem obtained by approximating an arbitrary given recursively enumerable set A instead of the halting problem K. All of the corresponding function classes \( \mathcal{U}(A) \) are still EX-inferable, but they may fail to be reliably EX-learnable, for example if A is non-high and hypersimple. Additionally, we prove that \( \mathcal{U}(A) \) is neither in NUM nor robustly EX-learnable provided that A is part of a recursively inseparable pair, A is simple but not hypersimple, or A is neither recursive nor high. These results provide further evidence that there is still some need to find an adequate notion of “naturally learnable function classes.”
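Analogously, and again only as an illustrative sketch rather than the paper's definition of \( \mathcal{U}(A) \), approximations to a recursively enumerable set A can be built from a fixed recursive enumeration \( A = \bigcup_{s} A_s \): for any total recursive bound \( t \), the function
\[
  g_t(x) =
  \begin{cases}
    1 & \text{if } x \in A_{t(x)},\\
    0 & \text{otherwise,}
  \end{cases}
\]
is recursive and differs from the characteristic function of A at most on those \( x \in A \) that are enumerated only after stage \( t(x) \).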
Supported by the Deutsche Forschungsgemeinschaft (DFG) under Research Grant no. Am 60/9-2.
Supported by the Grant-in-Aid for Scientific Research in Fundamental Areas from the Japanese Ministry of Education, Science, Sports, and Culture under grant no. 10558047. Part of this work was done while visiting the Laboratoire d’Informatique Algorithmique: Fondements et Applications, Université Paris 7. This author is greatly indebted to Maurice Nivat for providing financial support and inspiring working conditions.
References
A. Ambainis and R. Freivalds. Transformations that preserve learnability. In Proceedings of the 7th International Workshop on Algorithmic Learning Theory (ALT’96) (S. Arikawa and A. Sharma, Eds.), Lecture Notes in Artificial Intelligence Vol. 1160, pages 299–311, Springer-Verlag, Berlin, 1996.
D. Angluin. Inductive inference of formal languages from positive data. Information and Control, 45:117–135, 1980.
D. Angluin and C.H. Smith. A survey of inductive inference: Theory and methods. Computing Surveys, 15:237–289, 1983.
J. Bārzdiņš. Prognostication of automata and functions. In Information Processing 71 (C.P. Freiman, Ed.), Vol. 1, pages 81–84. North-Holland, Amsterdam, 1971.
M. Blum. A machine-independent theory of the complexity of recursive functions. Journal of the Association for Computing Machinery, 14:322–336, 1967.
L. Blum and M. Blum. Towards a mathematical theory of inductive inference. Information and Control, 28:125–155, 1975.
J. Case, S. Jain, M. Ott, A. Sharma and F. Stephan. Robust learning aided by context. In Proceedings of the 11th Annual Conference on Computational Learning Theory (COLT’98), pages 44–55, ACM Press, New York, 1998.
J. Case and C.H. Smith. Comparison of identification criteria for machine inductive inference. Theoretical Computer Science, 25:193–220, 1983.
R. Freivalds. Inductive inference of recursive functions: Qualitative theory. In Baltic Computer Science (J. Bārzdiņš and D. Bjørner, Eds.), Lecture Notes in Computer Science Vol. 502, pages 77–110. Springer-Verlag, Berlin, 1991.
M. Fulk. Robust separations in inductive inference. In Proceedings of the 31st Annual Symposium on Foundations of Computer Science (FOCS), pages 405–410, St. Louis, Missouri, 1990.
M.E. Gold. Language identification in the limit. Information and Control, 10:447–474, 1967.
J. Grabowski. Starke Erkennung. In Strukturerkennung diskreter kybernetischer Systeme, (R. Linder, H. Thiele, Eds.), Seminarberichte der Sektion Mathematik der Humboldt-Universität Berlin Vol. 82, pages 168–184, 1986.
J.P. Helm. On effectively computable operators. Zeitschrift für mathematische Logik und Grundlagen der Mathematik (ZML), 17:231–244, 1971.
S. Jain. Robust Behaviourally Correct Learning. Technical Report TRA6/98 at the DISCS, National University of Singapore, 1998.
S. Jain, D. Osherson, J.S. Royer and A. Sharma. Systems That Learn: An Introduction to Learning Theory. MIT Press, Cambridge, MA, 1999.
S. Jain, C. Smith and R. Wiehagen. On the power of learning robustly. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT’98), pages 187–197, ACM Press, New York, 1998.
E.B. Kinber and T. Zeugmann. Inductive inference of almost everywhere correct programs by reliably working strategies. Journal of Information Processing and Cybernetics, 21:91–100, 1985.
E.B. Kinber and T. Zeugmann. One-sided error probabilistic inductive inference and reliable frequency identification. Information and Computation, 92:253–284, 1991.
R. Klette and R. Wiehagen. Research in the theory of inductive inference by GDR mathematicians — A survey. Information Sciences, 22:149–169, 1980.
S. Kurtz and C.H. Smith. On the role of search for learning. In Proceedings of the 2nd Annual Workshop on Computational Learning Theory (R. Rivest, D. Haussler and M. Warmuth, Eds.), pages 303–311, Morgan Kaufmann, 1989.
E. Minicozzi. Some natural properties of strong-identification in inductive inference. Theoretical Computer Science, 2:345–360, 1976.
P. Odifreddi. Classical Recursion Theory. North-Holland, Amsterdam, 1989.
M. Ott and F. Stephan. Avoiding coding tricks by hyperrobust learning. In Proceedings of the Fourth European Conference on Computational Learning Theory (EuroCOLT) (P. Fischer and H.U. Simon, Eds.), Lecture Notes in Artificial Intelligence Vol. 1572, pages 183–197, Springer-Verlag, Berlin, 1999.
H. Rogers, Jr. Theory of Recursive Functions and Effective Computability. McGraw-Hill, New York, 1967.
F. Stephan and T. Zeugmann. On the Uniform Learnability of Approximations to Non-Recursive Functions. DOI Technical Report DOI-TR-166, Department of Informatics, Kyushu University, July 1999.
T. Zeugmann. A-posteriori characterizations in inductive inference of recursive functions. Journal of Information Processing and Cybernetics (EIK), 19:559–594, 1983.
T. Zeugmann. On the nonboundability of total effective operators. Zeitschrift für mathematische Logik und Grundlagen der Mathematik (ZML), 30:169–172, 1984.
T. Zeugmann. On Bārzdiņš’ conjecture. In Proceedings of the International Workshop on Analogical and Inductive Inference (AII’86) (K.P. Jantke, Ed.), Lecture Notes in Computer Science Vol. 265, pages 220–227. Springer-Verlag, Berlin, 1986.
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Stephan, F., Zeugmann, T. (1999). On the Uniform Learnability of Approximations to Non-recursive Functions. In: Watanabe, O., Yokomori, T. (eds) Algorithmic Learning Theory. ALT 1999. Lecture Notes in Computer Science, vol 1720. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46769-6_23
Print ISBN: 978-3-540-66748-3
Online ISBN: 978-3-540-46769-4