Abstract
The approach of learning multiple “related” tasks simultaneously has proven quite successful in practice; however, theoretical justification for this success has remained elusive. The starting point of previous work on multiple task learning has been that the tasks to be learned jointly are somehow “algorithmically related”, in the sense that the results of applying a specific learning algorithm to these tasks are assumed to be similar. We offer an alternative approach, defining relatedness of tasks on the basis of similarity between the example-generating distributions that underlie these tasks.
We provide a formal framework for this notion of task relatedness, which captures a sub-domain of the wide range of settings in which a multiple task learning approach may be applied. Our notion of task similarity is relevant to a variety of real-life multitask learning scenarios and allows the formal derivation of generalization bounds that are strictly stronger than the previously known bounds for both the learning-to-learn and the multitask learning scenarios. We give precise conditions under which our bounds guarantee generalization on the basis of smaller sample sizes than the standard single-task approach.
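As a rough illustration of what distribution-based relatedness can mean (the symbols $\mathcal{F}$, $\mathcal{X}$, $P_1$, $P_2$ below are our own notation for a sketch, not quoted from the paper): fix a family $\mathcal{F}$ of transformations of the example space, and call two tasks related when the example-generating distribution of one is the image of the other's under some transformation in $\mathcal{F}$.

Let $\mathcal{F}$ be a set of maps $f : \mathcal{X} \to \mathcal{X}$ on the example space. Tasks with example-generating distributions $P_1$ and $P_2$ over $\mathcal{X} \times \{0,1\}$ may be called $\mathcal{F}$-related if
\[
  \exists f \in \mathcal{F} \;:\; P_2(A) \;=\; P_1\bigl(\{(x, y) : (f(x), y) \in A\}\bigr)
  \quad \text{for every measurable } A \subseteq \mathcal{X} \times \{0,1\},
\]
i.e., $P_2$ is the pushforward of $P_1$ under the map $(x, y) \mapsto (f(x), y)$. Under a definition of this flavor, samples from several related tasks constrain a common underlying structure, which is, roughly, the intuition behind multitask generalization bounds that can improve on single-task sample sizes.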
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ben-David, S., Schuller, R. (2003). Exploiting Task Relatedness for Multiple Task Learning. In: Schölkopf, B., Warmuth, M.K. (eds.) Learning Theory and Kernel Machines. Lecture Notes in Computer Science, vol. 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_41
DOI: https://doi.org/10.1007/978-3-540-45167-9_41
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40720-1
Online ISBN: 978-3-540-45167-9