Sparse Kernel Regressors

Conference paper in Artificial Neural Networks — ICANN 2001 (ICANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2130)

Abstract

Sparse kernel regressors have become popular through the application of the support vector method to regression problems. Although this approach has been shown to exhibit excellent generalization properties in many experiments, it suffers from several drawbacks: the absence of probabilistic outputs, the restriction to Mercer kernels, and the steep growth of the number of support vectors with the size of the training set. In this paper we present a new class of kernel regressors that effectively overcome these problems. We call the new approach generalized LASSO regression. It has a clear probabilistic interpretation, produces extremely sparse solutions, can handle learning sets corrupted by outliers, and is capable of dealing with large-scale problems.
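
As a concrete illustration of the idea sketched in the abstract, the Python snippet below fits a kernel expansion f(x) = sum_i alpha_i k(x, x_i) by treating the kernel matrix over the training set as a design matrix and placing an L1 (LASSO) penalty on the coefficients alpha. The Gaussian kernel, the scikit-learn solver, and every hyperparameter value are illustrative assumptions, not the authors' algorithm; with its plain squared loss the sketch demonstrates only the sparsity mechanism, not the probabilistic outputs or outlier robustness claimed in the paper.

    # Minimal sketch of a LASSO-penalized kernel expansion (illustrative only;
    # kernel choice, solver, and hyperparameters are assumptions, not the paper's).
    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(200, 1))            # toy 1-D inputs
    y = np.sinc(X).ravel() + 0.05 * rng.standard_normal(200)

    # f(x) = sum_i alpha_i k(x, x_i): use the kernel matrix K as the design
    # matrix, so an L1 penalty on the weights is an L1 penalty on alpha.
    K = rbf_kernel(X, X, gamma=1.0)                      # kernel width: a guess
    model = Lasso(alpha=1e-3, max_iter=50_000)           # penalty strength: a guess
    model.fit(K, y)

    # The L1 penalty drives most coefficients exactly to zero, leaving a small
    # set of "support" points -- the extreme sparsity the abstract refers to.
    print(f"non-zero coefficients: {np.count_nonzero(model.coef_)} / {len(X)}")

    # Prediction only needs kernel evaluations against the training points.
    X_new = np.linspace(-3.0, 3.0, 5).reshape(-1, 1)
    print(model.predict(rbf_kernel(X_new, X, gamma=1.0)))

Note that nothing in this construction requires K to be positive semi-definite: any matrix of basis-function evaluations could take its place, which is one way to see how an L1-penalized expansion escapes the Mercer-kernel restriction of support vector regression.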

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Roth, V. (2001). Sparse Kernel Regressors. In: Dorffner, G., Bischof, H., Hornik, K. (eds) Artificial Neural Networks — ICANN 2001. ICANN 2001. Lecture Notes in Computer Science, vol 2130. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44668-0_48

  • DOI: https://doi.org/10.1007/3-540-44668-0_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42486-4

  • Online ISBN: 978-3-540-44668-2

  • eBook Packages: Springer Book Archive
