Abstract
This paper presents a new methodology for learning from examples, called Classification by Feature Partitioning (CFP). CFP learns by storing training examples separately along each feature dimension as disjoint partitions of feature values. A partition is expanded through generalization, or specialized by subdividing it into sub-partitions. It is shown that the CFP algorithm has low sample complexity and low training complexity.
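To make the partitioning idea concrete, the following is a minimal sketch of a CFP-style learner, not the authors' implementation: each feature keeps a list of disjoint labeled intervals, a new value either generalizes the nearest same-class partition (within an assumed generalization limit `g`) or starts a point partition, and classification is by per-feature voting. The class name, the parameter `g`, and the omission of partition weights and specialization are all simplifications for illustration.

```python
# Hypothetical sketch of classification by feature partitioning.
# Real CFP also maintains partition weights and specializes a partition
# into sub-partitions on class conflicts; this sketch omits both.
from collections import Counter

class SimpleCFP:
    def __init__(self, n_features, g=1.0):
        # partitions[f] holds disjoint (low, high, label) intervals for feature f
        self.partitions = [[] for _ in range(n_features)]
        self.g = g  # assumed generalization limit (illustrative parameter)

    def _find(self, f, value):
        # index of the partition on feature f covering value, or None
        for i, (lo, hi, _) in enumerate(self.partitions[f]):
            if lo <= value <= hi:
                return i
        return None

    def train(self, example, label):
        for f, value in enumerate(example):
            parts = self.partitions[f]
            if self._find(f, value) is not None:
                continue  # already covered (real CFP would adjust weights here)
            # generalization: extend the nearest same-class partition within g
            best = None
            for j, (lo, hi, plabel) in enumerate(parts):
                if plabel != label:
                    continue
                dist = lo - value if value < lo else value - hi
                if dist <= self.g and (best is None or dist < best[0]):
                    best = (dist, j)
            if best is not None:
                j = best[1]
                lo, hi, plabel = parts[j]
                parts[j] = (min(lo, value), max(hi, value), plabel)
            else:
                parts.append((value, value, label))  # new point partition

    def classify(self, example):
        # each feature votes for the class of the partition covering its value
        votes = Counter()
        for f, value in enumerate(example):
            i = self._find(f, value)
            if i is not None:
                votes[self.partitions[f][i][2]] += 1
        return votes.most_common(1)[0][0] if votes else None
```

Because each feature is partitioned independently, training touches only one interval list per feature per example, which is the source of the low training complexity the paper analyzes.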
© 1993 Springer-Verlag Berlin Heidelberg
Güvenir, H.A., Şirin, İ. (1993). Complexity of the CFP, a method for Classification based on Feature Partitioning. In: Torasso, P. (eds) Advances in Artificial Intelligence. AI*IA 1993. Lecture Notes in Computer Science, vol 728. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-57292-9_58
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-57292-3
Online ISBN: 978-3-540-48038-9