Abstract
In this paper, we consider the possibility of obtaining a kernel machine that is sparse in feature space and smooth in output space. Smoothness in output space means that the underlying function is assumed to have continuous derivatives up to some order. Smoothness is achieved by applying a roughness penalty, a concept from functional data analysis; sparseness is handled by automatic relevance determination. Both are combined in a Bayesian model, which has been implemented and tested. Test results are presented in the paper.
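To make the combination concrete, the following is a minimal sketch, not the authors' exact formulation: a kernel regression model with an automatic relevance determination (ARD) prior on the weights for sparseness, plus a squared-second-difference roughness penalty on the fitted function for smoothness. The kernel width, the noise precision beta, the roughness weight lam, and the fixed-point ARD update (in the spirit of MacKay and Tipping) are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, width=1.0):
    # Gaussian RBF kernel matrix between two 1-D input arrays.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / width) ** 2)

def second_diff(n):
    # (n-2) x n second-order finite-difference operator D, so that
    # ||D f||^2 approximates the integrated squared second derivative
    # of f at (sorted, roughly evenly spaced) inputs.
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D

def fit(x, y, beta=100.0, lam=1.0, n_iter=50):
    # beta: assumed noise precision; lam: assumed roughness weight.
    n = len(x)
    K = rbf_kernel(x, x)
    D = second_diff(n)
    R = K.T @ D.T @ D @ K               # roughness penalty mapped to weight space
    alpha = np.ones(n)                  # ARD precisions, one per basis function
    for _ in range(n_iter):
        S_inv = beta * K.T @ K + np.diag(alpha) + lam * R
        Sigma = np.linalg.inv(S_inv)
        mu = beta * Sigma @ K.T @ y             # posterior mean of the weights
        gamma = 1.0 - alpha * np.diag(Sigma)    # "well-determinedness" of each weight
        alpha = gamma / (mu ** 2 + 1e-12)       # MacKay-style fixed-point update
        alpha = np.minimum(alpha, 1e12)         # huge alpha => basis function pruned
    return mu, alpha

x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.default_rng(0).standard_normal(50)
mu, alpha = fit(x, y)
print("active basis functions:", int(np.sum(alpha < 1e6)))
```

Large precisions alpha_i drive the corresponding weights to zero (sparseness), while the penalty lam * w'Rw discourages wiggly fits (smoothness); the actual paper integrates both effects in a single Bayesian model rather than this heuristic combination.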
An erratum to this chapter can be found at http://dx.doi.org/10.1007/11550907_163.
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
Cite this paper
ter Borg, R.W., Rothkrantz, L.J.M. (2005). Smooth Bayesian Kernel Machines. In: Duch, W., Kacprzyk, J., Oja, E., Zadrożny, S. (eds.) Artificial Neural Networks: Formal Models and Their Applications – ICANN 2005. Lecture Notes in Computer Science, vol. 3697. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11550907_91
Print ISBN: 978-3-540-28755-1
Online ISBN: 978-3-540-28756-8