Abstract
The k-means algorithm is one of the most widely used clustering heuristics. Despite its simplicity, analyzing its running time and the quality of its approximation is surprisingly difficult and can lead to deep insights that can be used to improve the algorithm. In this paper we survey recent results in this direction as well as several extensions of the basic k-means method.
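For readers unfamiliar with the method the survey analyzes, the following is a minimal sketch of the classical k-means heuristic (Lloyd's algorithm) in plain Python. The function name and the uniform random seeding are illustrative choices, not taken from the chapter; as the survey discusses, neither the number of iterations nor the quality of the local optimum found is bounded in the worst case, and better seedings such as k-means++ improve on the uniform initialization used here.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm: alternate an assignment step and an update step.

    points: list of equal-length coordinate tuples; k: number of clusters.
    Converges to a local optimum of the sum of squared distances to the
    nearest center, with no worst-case guarantee on quality or speed.
    """
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # uniform seeding (k-means++ does better)
    for _ in range(iters):
        # Assignment step: each point joins its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[j].append(p)
        # Update step: each center moves to the mean of its cluster.
        new_centers = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
        if new_centers == centers:  # fixed point: assignments are stable
            break
        centers = new_centers
    return centers
```

On two well-separated groups of points, a few iterations suffice; the survey's interest is precisely in the instances where this is not the case.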
Notes
- 2. The computation of the SVD is a well-studied field of research. For an in-depth introduction to spectral algorithms and singular value decompositions, see [52].
- 6. This holds with constant probability and for any constant \(\varepsilon \).
- 9. Note that Kanungo et al. use a better candidate set and thus give a \((25+\varepsilon )\)-approximation.
References
Achlioptas, D., McSherry, F.: On spectral learning of mixtures of distributions. In: Auer, P., Meir, R. (eds.) COLT 2005. LNCS (LNAI), vol. 3559, pp. 458–469. Springer, Heidelberg (2005). doi:10.1007/11503415_31
Ackermann, M.R., Blömer, J.: Coresets and approximate clustering for Bregman divergences. In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2009), pp. 1088–1097. Society for Industrial and Applied Mathematics (SIAM) (2009). http://www.cs.uni-paderborn.de/uploads/tx_sibibtex/CoresetsAndApproximateClusteringForBregmanDivergences.pdf
Ackermann, M.R., Blömer, J.: Bregman clustering for separable instances. In: Kaplan, H. (ed.) SWAT 2010. LNCS, vol. 6139, pp. 212–223. Springer, Heidelberg (2010). doi:10.1007/978-3-642-13731-0_21
Ackermann, M.R., Blömer, J., Scholz, C.: Hardness and non-approximability of Bregman clustering problems. In: Electronic Colloquium on Computational Complexity (ECCC), vol. 18, no. 15, pp. 1–20 (2011). http://eccc.uni-trier.de/report/2011/015/, report no. TR11-015
Ackermann, M.R., Blömer, J., Sohler, C.: Clustering for metric and non-metric distance measures. ACM Trans. Algorithms 6(4), Article No. 59:1–26 (2010). Special issue on SODA 2008
Ackermann, M.R., Märtens, M., Raupach, C., Swierkot, K., Lammersen, C., Sohler, C.: StreamKM++: a clustering algorithm for data streams. ACM J. Exp. Algorithmics 17, Article No. 4, 1–30 (2012)
Aggarwal, A., Deshpande, A., Kannan, R.: Adaptive sampling for k-means clustering. In: Dinur, I., Jansen, K., Naor, J., Rolim, J. (eds.) APPROX/RANDOM -2009. LNCS, vol. 5687, pp. 15–28. Springer, Heidelberg (2009). doi:10.1007/978-3-642-03685-9_2
Ailon, N., Jaiswal, R., Monteleoni, C.: Streaming k-means approximation. In: Proceedings of the 22nd Annual Conference on Neural Information Processing Systems, pp. 10–18 (2009)
Aloise, D., Deshpande, A., Hansen, P., Popat, P.: NP-hardness of Euclidean sum-of-squares clustering. Mach. Learn. 75(2), 245–248 (2009)
Alsabti, K., Ranka, S., Singh, V.: An efficient \(k\)-means clustering algorithm. In: Proceedings of the First Workshop on High-Performance Data Mining (1998)
Arora, S., Kannan, R.: Learning mixtures of separated nonspherical Gaussians. Ann. Appl. Probab. 15(1A), 69–92 (2005)
Arthur, D., Manthey, B., Röglin, H.: \(k\)-means has polynomial smoothed complexity. In: Proceedings of the 50th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2009), pp. 405–414. IEEE Computer Society (2009)
Arthur, D., Vassilvitskii, S.: How slow is the k-means method? In: Proceedings of the 22nd ACM Symposium on Computational Geometry (SoCG 2006), pp. 144–153 (2006)
Arthur, D., Vassilvitskii, S.: k-means++: the advantages of careful seeding. In: Proceedings of the 18th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2007), pp. 1027–1035. Society for Industrial and Applied Mathematics (2007)
Arthur, D., Vassilvitskii, S.: Worst-case and smoothed analysis of the ICP algorithm, with an application to the \(k\)-means method. SIAM J. Comput. 39(2), 766–782 (2009)
Awasthi, P., Blum, A., Sheffet, O.: Stability yields a PTAS for k-median and k-means clustering. In: FOCS, pp. 309–318 (2010)
Awasthi, P., Charikar, M., Krishnaswamy, R., Sinop, A.K.: The hardness of approximation of Euclidean k-means. In: SoCG 2015 (2015, accepted)
Balcan, M.F., Blum, A., Gupta, A.: Approximate clustering without the approximation. In: SODA, pp. 1068–1077 (2009)
Banerjee, A., Guo, X., Wang, H.: On the optimality of conditional expectation as a Bregman predictor. IEEE Trans. Inf. Theory 51(7), 2664–2669 (2005)
Banerjee, A., Merugu, S., Dhillon, I.S., Ghosh, J.: Clustering with Bregman divergences. J. Mach. Learn. Res. 6, 1705–1749 (2005)
Belkin, M., Sinha, K.: Toward learning Gaussian mixtures with arbitrary separation. In: COLT, pp. 407–419 (2010)
Belkin, M., Sinha, K.: Learning Gaussian mixtures with arbitrary separation. CoRR abs/0907.1054 (2009)
Belkin, M., Sinha, K.: Polynomial learning of distribution families. In: FOCS, pp. 103–112 (2010)
Berkhin, P.: A survey of clustering data mining techniques. In: Kogan, J., Nicholas, C., Teboulle, M. (eds.) Grouping Multidimensional Data, pp. 25–71. Springer, Heidelberg (2006)
Braverman, V., Meyerson, A., Ostrovsky, R., Roytman, A., Shindler, M., Tagiku, B.: Streaming k-means on well-clusterable data. In: SODA, pp. 26–40 (2011)
Brubaker, S.C., Vempala, S.: Isotropic PCA and affine-invariant clustering. In: FOCS, pp. 551–560 (2008)
Chaudhuri, K., McGregor, A.: Finding metric structure in information theoretic clustering. In: COLT, pp. 391–402 (2008)
Chaudhuri, K., Rao, S.: Learning mixtures of product distributions using correlations and independence. In: COLT, pp. 9–20 (2008)
Chen, K.: On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM J. Comput. 39(3), 923–947 (2009)
Dasgupta, S.: Learning mixtures of Gaussians. In: FOCS, pp. 634–644 (1999)
Dasgupta, S.: How fast is k-means? In: Schölkopf, B., Warmuth, M.K. (eds.) COLT-Kernel 2003. LNCS (LNAI), vol. 2777, p. 735. Springer, Heidelberg (2003). doi:10.1007/978-3-540-45167-9_56
Dasgupta, S.: The hardness of \(k\)-means clustering. Technical report CS2008-0916, University of California (2008)
Dasgupta, S., Schulman, L.J.: A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. J. Mach. Learn. Res. 8, 203–226 (2007)
Feldman, D., Langberg, M.: A unified framework for approximating and clustering data. In: Proceedings of the 43rd Annual ACM Symposium on Theory of Computing (STOC), pp. 569–578 (2011)
Feldman, D., Monemizadeh, M., Sohler, C.: A PTAS for \(k\)-means clustering based on weak coresets. In: Proceedings of the 23rd ACM Symposium on Computational Geometry (SoCG), pp. 11–18 (2007)
Feldman, J., O’Donnell, R., Servedio, R.A.: Learning mixtures of product distributions over discrete domains. SIAM J. Comput. 37(5), 1536–1564 (2008)
Fichtenberger, H., Gillé, M., Schmidt, M., Schwiegelshohn, C., Sohler, C.: BICO: BIRCH meets coresets for k-means clustering. In: Bodlaender, H.L., Italiano, G.F. (eds.) ESA 2013. LNCS, vol. 8125, pp. 481–492. Springer, Heidelberg (2013). doi:10.1007/978-3-642-40450-4_41
Frahling, G., Sohler, C.: Coresets in dynamic geometric data streams. In: Proceedings of the 37th STOC, pp. 209–217 (2005)
Gordon, A.: Null models in cluster validation. In: Gaul, W., Pfeifer, D. (eds.) From Data to Knowledge: Theoretical and Practical Aspects of Classification, Data Analysis, and Knowledge Organization, pp. 32–44. Springer, Heidelberg (1996)
Guha, S., Meyerson, A., Mishra, N., Motwani, R., O’Callaghan, L.: Clustering data streams: theory and practice. IEEE Trans. Knowl. Data Eng. 15(3), 515–528 (2003)
Hamerly, G., Drake, J.: Accelerating Lloyd’s algorithm for k-means clustering. In: Celebi, M.E. (ed.) Partitional Clustering Algorithms, pp. 41–78. Springer, Cham (2015)
Har-Peled, S., Kushal, A.: Smaller coresets for k-median and k-means clustering. Discrete Comput. Geom. 37(1), 3–19 (2007)
Har-Peled, S., Mazumdar, S.: On coresets for k-means and k-median clustering. In: Proceedings of the 36th Annual ACM Symposium on Theory of Computing (STOC 2004), pp. 291–300 (2004)
Har-Peled, S., Sadri, B.: How fast is the k-means method? In: SODA, pp. 877–885 (2005)
Hartigan, J.A.: Clustering Algorithms. Wiley, Hoboken (1975)
Inaba, M., Katoh, N., Imai, H.: Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering (extended abstract). In: Symposium on Computational Geometry (SoCG 1994), pp. 332–339 (1994)
Jain, A.K.: Data clustering: 50 years beyond k-means. Pattern Recogn. Lett. 31(8), 651–666 (2010)
Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: a review. ACM Comput. Surv. 31(3), 264–323 (1999)
Jain, K., Vazirani, V.V.: Approximation algorithms for metric facility location and k-median problems using the primal-dual schema and Lagrangian relaxation. J. ACM 48(2), 274–296 (2001)
Judd, D., McKinley, P.K., Jain, A.K.: Large-scale parallel data clustering. IEEE Trans. Pattern Anal. Mach. Intell. 20(8), 871–876 (1998)
Kalai, A.T., Moitra, A., Valiant, G.: Efficiently learning mixtures of two Gaussians. In: STOC, pp. 553–562 (2010)
Kannan, R., Vempala, S.: Spectral algorithms. Found. Trends Theoret. Comput. Sci. 4(3–4), 157–288 (2009)
Kannan, R., Salmasian, H., Vempala, S.: The spectral method for general mixture models. SIAM J. Comput. 38(3), 1141–1156 (2008)
Kanungo, T., Mount, D.M., Netanyahu, N.S., Piatko, C.D., Silverman, R., Wu, A.Y.: An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 881–892 (2002)
Kanungo, T., Mount, D.M., Netanyahu, N.S., Piatko, C.D., Silverman, R., Wu, A.Y.: A local search approximation algorithm for \(k\)-means clustering. Comput. Geom. 28(2–3), 89–112 (2004)
Kumar, A., Kannan, R.: Clustering with spectral norm and the \(k\)-means algorithm. In: Proceedings of the 51st Annual Symposium on Foundations of Computer Science (FOCS 2010), pp. 299–308. IEEE Computer Society (2010)
Kumar, A., Sabharwal, Y., Sen, S.: Linear-time approximation schemes for clustering problems in any dimensions. J. ACM 57(2), Article No. 5 (2010)
Lloyd, S.P.: Least squares quantization in PCM. Bell Laboratories Technical Memorandum (1957)
MacQueen, J.B.: Some methods for classification and analysis of multivariate observations. In: Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297. University of California Press (1967)
Mahajan, M., Nimbhorkar, P., Varadarajan, K.: The planar k-means problem is NP-hard. In: Das, S., Uehara, R. (eds.) WALCOM 2009. LNCS, vol. 5431, pp. 274–285. Springer, Heidelberg (2009). doi:10.1007/978-3-642-00202-1_24
Manthey, B., Röglin, H.: Worst-case and smoothed analysis of k-means clustering with Bregman divergences. JoCG 4(1), 94–132 (2013)
Manthey, B., Röglin, H.: Improved smoothed analysis of the k-means method. In: Proceedings of the 20th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2009), pp. 461–470. Society for Industrial and Applied Mathematics (2009)
Matoušek, J.: On approximate geometric k-clustering. Discrete Comput. Geom. 24(1), 61–84 (2000)
Matula, D.W., Shahrokhi, F.: Sparsest cuts and bottlenecks in graphs. Discrete Appl. Math. 27, 113–123 (1990)
Moitra, A., Valiant, G.: Settling the polynomial learnability of mixtures of Gaussians. In: FOCS 2010 (2010)
Nock, R., Luosto, P., Kivinen, J.: Mixed Bregman clustering with approximation guarantees. In: Daelemans, W., Goethals, B., Morik, K. (eds.) ECML PKDD 2008. LNCS (LNAI), vol. 5212, pp. 154–169. Springer, Heidelberg (2008). doi:10.1007/978-3-540-87481-2_11
Ostrovsky, R., Rabani, Y., Schulman, L.J., Swamy, C.: The effectiveness of Lloyd-type methods for the k-means problem. In: FOCS, pp. 165–176 (2006)
Pelleg, D., Moore, A.W.: Accelerating exact k-means algorithms with geometric reasoning. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 277–281 (1999)
Selim, S.Z., Ismail, M.A.: \(k\)-means-type algorithms: a generalized convergence theorem and characterization of local optimality. IEEE Trans. Pattern Anal. Mach. Intell. (PAMI) 6(1), 81–87 (1984)
Steinhaus, H.: Sur la division des corps matériels en parties. Bulletin de l’Académie Polonaise des Sciences IV(12), 801–804 (1956)
Tibshirani, R., Walther, G., Hastie, T.: Estimating the number of clusters in a dataset via the gap statistic. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 63, 411–423 (2001)
Vattani, A.: \(k\)-means requires exponentially many iterations even in the plane. In: Proceedings of the 25th ACM Symposium on Computational Geometry (SoCG 2009), pp. 324–332. Association for Computing Machinery (2009)
de la Vega, W.F., Karpinski, M., Kenyon, C., Rabani, Y.: Approximation schemes for clustering problems. In: Proceedings of the 35th Annual ACM Symposium on Theory of Computing (STOC 2003), pp. 50–58 (2003)
Vempala, S., Wang, G.: A spectral algorithm for learning mixture models. J. Comput. Syst. Sci. 68(4), 841–860 (2004)
Venkatasubramanian, S.: Choosing the number of clusters I-III (2010). http://blog.geomblog.org/p/conceptual-view-of-clustering.html. Accessed 30 Mar 2015
Zhang, T., Ramakrishnan, R., Livny, M.: BIRCH: a new data clustering algorithm and its applications. Data Min. Knowl. Disc. 1(2), 141–182 (1997)
© 2016 Springer International Publishing AG
Blömer, J., Lammersen, C., Schmidt, M., Sohler, C. (2016). Theoretical Analysis of the k-Means Algorithm – A Survey. In: Kliemann, L., Sanders, P. (eds) Algorithm Engineering. Lecture Notes in Computer Science(), vol 9220. Springer, Cham. https://doi.org/10.1007/978-3-319-49487-6_3
Print ISBN: 978-3-319-49486-9
Online ISBN: 978-3-319-49487-6