
\(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems

  • Published:
Computational Optimization and Applications

Abstract

The affine rank minimization problem is to minimize the rank of a matrix under linear constraints. It has many applications in areas such as statistics, control, system identification and machine learning. Unlike the literature that uses the nuclear norm or the general Schatten \(q~ (0<q<1)\) quasi-norm to approximate the rank of a matrix, in this paper we use the Schatten 1/2 quasi-norm approximation, which approximates the rank better than the nuclear norm but leads to a nonconvex, nonsmooth and non-Lipschitz optimization problem. Importantly, exploiting the special structure of the objective function, we give a global necessary optimality condition for the \(S_{1/2}\) regularization problem. This is very different from the local optimality conditions usually used for general \(S_q\) regularization problems. Specifically, the global necessary optimality condition for the \(S_{1/2}\) regularization problem is a fixed point inclusion associated with the singular value half thresholding operator. Naturally, we propose a fixed point iterative scheme for the problem and provide its convergence analysis. By discussing the location and setting of the optimal regularization parameter, and by using an approximate singular value decomposition procedure, we obtain a very efficient algorithm, the half-norm fixed point algorithm with an approximate SVD (HFPA algorithm), for the \(S_{1/2}\) regularization problem. Numerical experiments on randomly generated and real matrix completion problems demonstrate the effectiveness of the proposed algorithm.
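The fixed point scheme described above can be sketched in a few lines of NumPy. The scalar half thresholding formula below follows Xu et al. [45]; the matrix operator applies it to the singular values, and the iteration alternates a gradient step on the data-fit term with this thresholding. This is a minimal illustration for matrix completion with an exact SVD and fixed parameters — the function names and the choices of `lam` and `mu` are illustrative assumptions, not the authors' HFPA implementation, which additionally uses an approximate SVD and a data-driven regularization parameter.

```python
import numpy as np

def half_threshold(t, lam):
    """Scalar half thresholding h_lam(t) (Xu et al. 2012):
    zero below the threshold (54^(1/3)/4) * lam^(2/3), a shrunken
    value above it."""
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    if abs(t) <= thresh:
        return 0.0
    phi = np.arccos((lam / 8.0) * (abs(t) / 3.0) ** (-1.5))
    return (2.0 / 3.0) * t * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))

def svd_half_threshold(B, lam):
    """Singular value half thresholding H_lam(B): apply h_lam to
    each singular value of B and reassemble."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s_new = np.array([half_threshold(si, lam) for si in s])
    return U @ np.diag(s_new) @ Vt

def hfp_matrix_completion(M, mask, lam, mu=1.0, iters=200):
    """Fixed point iteration X <- H_{lam*mu}(X - mu * P_Omega(X - M)),
    where P_Omega keeps only the observed entries (mask == 1)."""
    X = np.zeros_like(M)
    for _ in range(iters):
        G = X - mu * mask * (X - M)      # gradient step on the data-fit term
        X = svd_half_threshold(G, lam * mu)
    return X
```

With `lam` small, singular values well above the threshold are barely shrunk, while small ones are set exactly to zero — this is what produces low-rank iterates without any explicit rank constraint.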


Fig. 1
Fig. 2


Notes

  1. The SVT code is available at http://svt.caltech.edu.

  2. The FPC code is available at http://svt.caltech.edu.

  3. The FPCA code is available at http://www.columbia.edu/~sm2756/FPCA.htm.

  4. The PD code is available at http://www.sfu.ca/~yza30/homepage/PD_Rank/PD_Rank.html.

References

  1. Absil, P.-A., Baker, C., Gallivan, K.: Trust-region methods on Riemannian manifolds. Found. Comput. Math. 7, 303–330 (2007)

  2. Bian, W., Chen, X.: Worst-case complexity of smoothing quadratic regularization methods for non-Lipschitzian optimization. SIAM J. Optim. 23, 1718–1741 (2013)

  3. Boumal, N., Absil, P.: RTRMC: a Riemannian trust-region method for low-rank matrix completion. In: NIPS (2011)

  4. Cai, J., Candès, E., Shen, Z.: A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 20, 1956–1982 (2010)

  5. Candès, E., Plan, Y.: Matrix completion with noise. Proc. IEEE 98, 925–936 (2010)

  6. Candès, E., Recht, B.: Exact matrix completion via convex optimization. Found. Comput. Math. 9, 717–772 (2009)

  7. Candès, E., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52, 489–509 (2006)

  8. Candès, E., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory 56, 2053–2080 (2010)

  9. Chartrand, R.: Exact reconstructions of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 14, 707–710 (2007)

  10. Chartrand, R.: Nonconvex regularization for shape preservation. IEEE Int. Conf. Image Process. I, 293–296 (2007)

  11. Chartrand, R.: Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data. In: Proceedings of IEEE International Symposium on Biomedical Imaging, pp. 262–265 (2009)

  12. Chartrand, R., Staneva, V.: Restricted isometry properties and nonconvex compressive sensing. Inverse Probl. 24, 20–35 (2008)

  13. Chen, X., Ge, D., Wang, Z., Ye, Y.: Complexity of unconstrained \(\ell _2\)-\(\ell _p\) minimization. Math. Program. Ser. A 143, 371–383 (2014)

  14. Chen, X., Xu, F., Ye, Y.: Lower bound theory of nonzero entries in solutions of \(\ell _2\)-\(\ell _p\) minimization. SIAM J. Sci. Comput. 32, 2832–2852 (2010)

  15. Donoho, D.: Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006)

  16. Ding, C., Sun, D., Toh, K.-C.: An introduction to a class of matrix cone programming. Math. Program. Ser. A 144, 141–179 (2014)

  17. Drineas, P., Kannan, R., Mahoney, M.W.: Fast Monte Carlo algorithms for matrices II: computing low-rank approximations to a matrix. SIAM J. Comput. 36, 158–183 (2006)

  18. Efron, B., Hastie, T., Johnstone, I.M., Tibshirani, R.: Least angle regression. Ann. Stat. 32, 407–499 (2004)

  19. Fazel, M.: Matrix rank minimization with applications. PhD thesis, Stanford University (2002)

  20. Fazel, M., Hindi, H., Boyd, S.: A rank minimization heuristic with application to minimum order system approximation. In: Proceedings of American Control Conference (2001)

  21. Fazel, M., Hindi, H., Boyd, S.: Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. In: Proceedings of American Control Conference (2003)

  22. Hale, E.T., Yin, W., Zhang, Y.: A fixed-point continuation method for \(\ell _1\)-regularized minimization: methodology and convergence. SIAM J. Optim. 19, 1107–1130 (2008)

  23. Ji, S., Sze, K.-F., Zhou, Z., So, A., Ye, Y.: Beyond convex relaxation: a polynomial-time non-convex optimization approach to network localization. In: IEEE Conference on Computer Communications (INFOCOM), pp. 2499–2507 (2013)

  24. Keshavan, R., Montanari, A., Oh, S.: Matrix completion from a few entries. IEEE Trans. Inf. Theory 56, 2980–2998 (2010)

  25. Lai, M.-J., Xu, Y., Yin, W.: Improved iteratively reweighted least squares for unconstrained smoothed \(\ell _p\) minimization. SIAM J. Numer. Anal. 5, 927–957 (2013)

  26. Lin, Z., Chen, M., Ma, Y.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. In: NIPS (2011)

  27. Liu, Y., Sun, D., Toh, K.-C.: An implementable proximal point algorithmic framework for nuclear norm minimization. Math. Program. Ser. A 133, 399–436 (2012)

  28. Liu, Z., Vandenberghe, L.: Interior-point method for nuclear norm approximation with application to system identification. SIAM J. Matrix Anal. Appl. 31, 1235–1256 (2009)

  29. Lu, Z., Zhang, Y., Liu, X.: Penalty decomposition methods for rank minimization. Optim. Methods Softw. 30, 531–558 (2015)

  30. Ma, S., Goldfarb, D., Chen, L.: Fixed point and Bregman iterative methods for matrix rank minimization. Math. Program. Ser. A 128, 321–353 (2011)

  31. Mohan, K., Fazel, M.: Iterative reweighted algorithms for matrix rank minimization. J. Mach. Learn. Res. 13, 3253–3285 (2012)

  32. Nie, F., Huang, H., Ding, C.: Low-rank matrix recovery via efficient Schatten \(p\)-norm minimization. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pp. 655–661 (2012)

  33. Rakotomamonjy, A., Flamary, R., Gasso, G., Canu, S.: \(\ell _p\)-\(\ell _q\) penalty for sparse linear and sparse multiple kernel multitask learning. IEEE Trans. Neural Netw. 22, 1307–1320 (2011)

  34. Rao, G., Peng, Y., Xu, Z.: Robust sparse and low-rank matrix decomposition based on the \(S_{1/2}\) modeling. Sci. China Inf. Sci. 43, 733–748 (2013)

  35. Recht, B., Fazel, M., Parrilo, P.: Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. SIAM Rev. 52, 471–501 (2010)

  36. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)

  37. Rohde, A., Tsybakov, A.: Estimation of high-dimensional low-rank matrices. Ann. Stat. 39, 887–930 (2011)

  38. Skelton, R., Iwasaki, T., Grigoriadis, K.: A Unified Algebraic Approach to Linear Control Design. Taylor and Francis, Routledge (1998)

  39. Sun, D.: Matrix Conic Programming. Dalian University of Science and Technology, Dalian (2011)

  40. Tao, M., Yuan, X.: Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 21, 57–81 (2011)

  41. Toh, K.-C., Yun, S.: An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. Pac. J. Optim. 6, 615–640 (2010)

  42. Tütüncü, R., Toh, K., Todd, M.: Solving semidefinite-quadratic-linear programs using SDPT3. Math. Program. Ser. B 95, 189–217 (2003)

  43. Wen, Z., Yin, W., Zhang, Y.: Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm. Math. Program. Comput. 4, 333–361 (2012)

  44. Xu, Z.: Data modeling: visual psychology approach and \(\ell _{1/2}\) regularization theory. Proc. Int. Congr. Math. 4, 3151–3184 (2010)

  45. Xu, Z., Chang, X., Xu, F., Zhang, H.: \(\ell _{1/2}\) regularization: a thresholding representation theory and a fast solver. IEEE Trans. Neural Netw. Learn. Syst. 23, 1013–1027 (2012)

  46. Xu, Z., Guo, H., Wang, Y., Zhang, H.: Representation of \(\ell _{1/2}\) regularizer among \(\ell _q (0 < q \le 1)\) regularizer: an experimental study based on phase diagram. Acta Autom. Sin. 38, 1225–1228 (2012)


Acknowledgements

The authors are thankful to the two anonymous referees for their valuable suggestions and comments, which helped us revise the paper into its present form. The authors are also grateful to Prof. Shiqian Ma at The Chinese University of Hong Kong for sharing the FPCA code.

Author information


Corresponding author

Correspondence to Dingtao Peng.

Additional information

Dingtao Peng was supported by the National Natural Science Foundation of China (Grant No. 11401124) and the Scientific Research Projects for the Introduced Talents of Guizhou University (Grant No. 201343). Naihua Xiu was supported by the Key Program of National Natural Science Foundation of China (Grant No. 11431002) and the National Natural Science Foundation of China (Grant No. 71271021).


About this article


Cite this article

Peng, D., Xiu, N. & Yu, J. \(S_{1/2}\) regularization methods and fixed point algorithms for affine rank minimization problems. Comput Optim Appl 67, 543–569 (2017). https://doi.org/10.1007/s10589-017-9898-5

