Abstract
The creation of a pattern classifier requires choosing or creating a model, collecting training data and verifying or “truthing” this data, and then training and testing the classifier. In practice, individual steps in this sequence must be repeated a number of times before the classifier achieves acceptable performance. The majority of the research in computational learning theory addresses the issues associated with training the classifier (learnability, convergence times, generalization bounds, etc.). While there has been modest research effort on topics such as cost-based collection of data in the context of a particular classifier model, there remain numerous unsolved problems of practical importance associated with the collection and truthing of data. Many of these can be addressed with the formal methods of computational learning theory. A number of these issues, as well as new ones — such as the identification of “hostile” contributors and their data — are brought to light by the Open Mind Initiative, where data is openly contributed over the World Wide Web by non-experts of varying reliabilities. This paper states generalizations of formal results on the relative value of labeled and unlabeled data to the realistic case where a labeler is not a foolproof oracle but is instead somewhat unreliable and error-prone. It also summarizes formal results on strategies for presenting data to labelers of known reliability in order to obtain best estimates of model parameters. It concludes with a call for a rich, powerful and practical computational theory of data acquisition and truthing, built upon the concepts and techniques developed for studying general learning systems.
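The abstract mentions formal results on estimating model parameters from labelers of known reliability. As a minimal illustrative sketch (not the paper's method), consider the simplest case: a single labeler who flips each binary label independently with a known probability p < 1/2. Since the expected observed positive rate is q = r(1 - p) + (1 - r)p for true rate r, inverting gives an unbiased corrected estimate r̂ = (q̂ - p)/(1 - 2p). All function names and the simulation setup below are assumptions for illustration.

```python
import random

def corrected_estimate(labels, flip_prob):
    """Unbiased estimate of the true fraction of positive items,
    given binary labels from a labeler who flips each label
    independently with known probability flip_prob (< 0.5)."""
    observed = sum(labels) / len(labels)
    # E[observed] = r*(1 - p) + (1 - r)*p; solve for r.
    return (observed - flip_prob) / (1 - 2 * flip_prob)

# Simulated check: true positive rate 0.7, labeler flips 20% of labels.
random.seed(0)
flip_prob, true_rate, n = 0.2, 0.7, 100_000
truth = [random.random() < true_rate for _ in range(n)]
noisy = [t != (random.random() < flip_prob) for t in truth]  # XOR flip
est = corrected_estimate(noisy, flip_prob)
```

The naive estimate `sum(noisy)/n` is biased toward 1/2 by the label noise; the corrected estimate recovers the true rate (at the cost of higher variance, by the factor 1/(1 - 2p), which is one reason unreliable labels are worth less than reliable ones).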
© 2001 Springer-Verlag Berlin Heidelberg
Stork, D.G. (2001). Toward a Computational Theory of Data Acquisition and Truthing. In: Helmbold, D., Williamson, B. (eds) Computational Learning Theory. COLT 2001. Lecture Notes in Computer Science(), vol 2111. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44581-1_13
Print ISBN: 978-3-540-42343-0
Online ISBN: 978-3-540-44581-4