Abstract
We explore a meta-learning approach to improving the prediction accuracy of a classification system. In this approach, a meta-classifier is trained to learn the bias of a base classifier, so that it can evaluate the prediction the base classifier makes for a given example and thereby improve the overall performance of the classification system. The paper discusses our meta-learning approach in detail and presents empirical results showing the improvement it achieves in a GA-based inductive learning environment.
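The mechanism the abstract describes can be illustrated with a minimal sketch (not the paper's actual algorithm; all names and the toy data are hypothetical): a meta-classifier is trained on whether the base classifier was right or wrong for each training example, and the combined system overrides the base prediction whenever the meta-classifier predicts the base classifier is operating in its bias region.

```python
# Hypothetical sketch of the meta-learning idea: a meta-classifier learns
# *when* a base classifier errs, and the combined system flips the base
# prediction whenever the meta-classifier distrusts it.

def base_classify(x):
    """Base classifier with a systematic bias: it ignores feature x[1]."""
    return 1 if x[0] > 0.5 else 0

# Training data: (features, true label). The base rule is wrong exactly
# when x[1] == 1 -- its "bias region".
train = [
    ((0.9, 0), 1), ((0.2, 0), 0),   # base rule correct here
    ((0.9, 1), 0), ((0.2, 1), 1),   # base rule wrong here
]

# Meta-training set: for each example, was the base classifier correct?
meta_train = [(x, base_classify(x) == y) for x, y in train]

def learn_meta(meta_data):
    """Toy 'meta-learning': memorise which x[1] values coincide only
    with base-classifier mistakes."""
    bad = {x[1] for x, correct in meta_data if not correct}
    good = {x[1] for x, correct in meta_data if correct}
    distrust = bad - good  # feature values seen only alongside errors
    return lambda x: x[1] not in distrust  # True = trust the base prediction

meta_classify = learn_meta(meta_train)

def combined_classify(x):
    """Overall system: flip the base prediction when the meta-classifier
    predicts the base classifier is in its bias region."""
    p = base_classify(x)
    return p if meta_classify(x) else 1 - p

# The combined system corrects the base classifier's systematic error:
assert combined_classify((0.8, 1)) == 0   # base alone would output 1
assert combined_classify((0.8, 0)) == 1   # base is trusted here
```

In the paper's setting the base classifier is a GA-induced rule set and the meta-classifier is learned rather than hand-built, but the division of labour is the same: the meta-level model judges the reliability of each base-level prediction.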
This work was supported by the Ministry of Education and Human Resources Development through the Embedded Software Open Education Resource Center (ESC) at Sangmyung University.
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
Kim, Y., Hong, C. (2005). Learning the Bias of a Classifier in a GA-Based Inductive Learning Environment. In: Huang, D.S., Zhang, X.P., Huang, G.B. (eds.) Advances in Intelligent Computing. ICIC 2005. Lecture Notes in Computer Science, vol. 3644. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11538059_96
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-28226-6
Online ISBN: 978-3-540-31902-3