Abstract
We describe and analyze a new approach for feature ranking in the presence of categorical features with a large number of possible values. It is shown that popular ranking criteria, such as the Gini index and the misclassification error, can be interpreted as the training error of a predictor that is deduced from the training set. It is then argued that the generalization error is a more suitable ranking criterion. We propose a modification of the Gini index criterion, based on a robust estimate of the generalization error of the predictor associated with the Gini index. The properties of this new estimator are analyzed, showing that for most training sets it produces an accurate estimate of the true generalization error. We then address the question of finding the optimal predictor that is based on a single categorical feature. It is shown that the predictor associated with the misclassification error criterion has the minimal expected generalization error. We bound the bias of this predictor with respect to the generalization error of the Bayes optimal predictor, and analyze its concentration properties.
Copyright information
© 2007 Springer Berlin Heidelberg
About this paper
Cite this paper
Sabato, S., Shalev-Shwartz, S. (2007). Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking. In: Bshouty, N.H., Gentile, C. (eds.) Learning Theory. COLT 2007. Lecture Notes in Computer Science, vol. 4539. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72927-3_40
DOI: https://doi.org/10.1007/978-3-540-72927-3_40
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-72925-9
Online ISBN: 978-3-540-72927-3