A dual spectral projected gradient method for log-determinant semidefinite problems

Computational Optimization and Applications

Abstract

We extend the result of Birgin et al. (2000) on the spectral projected gradient method to a log-determinant semidefinite problem with linear constraints, and propose a spectral projected gradient method for the dual problem. Our method is based on alternately projecting onto two convex sets whose intersection is the feasible region: it first projects onto the box constraints and then onto a set defined by a linear matrix inequality. By exploiting the structure of the two projections, we show that the proposed method enjoys the same convergence properties as the method of Birgin et al., in which the exact orthogonal projection onto the intersection of the two convex sets is performed. Using these convergence properties, we prove that the proposed algorithm attains the optimal value or terminates in a finite number of iterations. The efficiency of the proposed method is illustrated with numerical results on randomly generated synthetic/deterministic data and gene expression data, in comparison with other methods, including the inexact primal–dual path-following interior-point method, the Newton-CG primal proximal-point algorithm, the adaptive spectral projected gradient method, and the adaptive Nesterov's smooth method. For the gene expression data, our results are compared with those of the quadratic approximation for sparse inverse covariance estimation (QUIC) method. We show that our method outperforms the other methods by obtaining better objective values faster.
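The two projections and the spectral (Barzilai–Borwein) step length [1] mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration under our own naming and simplifications (elementwise box, projection onto the positive semidefinite cone standing in for the linear-matrix-inequality set), not the paper's implementation:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto elementwise box constraints [lo, hi]."""
    return np.clip(x, lo, hi)

def proj_psd(X):
    """Projection onto the positive semidefinite cone:
    eigendecompose the symmetric matrix and clip negative eigenvalues."""
    w, V = np.linalg.eigh(X)
    return (V * np.maximum(w, 0.0)) @ V.T

def bb_step(s, y, fallback=1.0):
    """Barzilai-Borwein spectral step length <s, s> / <s, y>,
    where s = x_k - x_{k-1} and y = grad_k - grad_{k-1};
    fall back to a default when the curvature estimate is nonpositive."""
    sy = np.vdot(s, y)
    return np.vdot(s, s) / sy if sy > 0 else fallback
```

A dual SPG iteration would combine these pieces: take a gradient step scaled by `bb_step`, then apply the two projections in sequence, with a nonmonotone line search guarding acceptance.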


Figures 1–3 appear in the full article.


Notes

  1. Since DSPG is always dual feasible, dinf=0; also pinf=0 if the problem is unconstrained.

References

  1. Barzilai, J., Borwein, J.M.: Two-point step size gradient methods. IMA J. Numer. Anal. 8(1), 141–148 (1988)

  2. Birgin, E.G., Martinez, J.M., Raydan, M.: Nonmonotone spectral projected gradient methods on convex sets. SIAM J. Optim. 10(4), 1196–1211 (2000)

  3. Borwein, J., Lewis, A.S.: Convex Analysis and Nonlinear Optimization: Theory and Examples, 2nd edn. Springer, New York (2006)

  4. Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge (2004)

  5. Chen, L., Sun, D.F., Toh, K.C.: An efficient inexact symmetric Gauss–Seidel based majorized ADMM for high-dimensional convex composite conic programming. Math. Program. 161(1–2), 237–270 (2017)

  6. d’Aspremont, A., Banerjee, O., El Ghaoui, L.: First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl. 30(1), 56–66 (2008)

  7. Dempster, A.P.: Covariance selection. Biometrics 28, 157–175 (1972)

  8. Duchi, J.C., Gould, S., Koller, D.: Projected subgradient methods for learning sparse Gaussians. In: Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (2008)

  9. Hager, W., Zhang, H.: A new active set algorithm for box constrained optimization. SIAM J. Optim. 17(2), 526–557 (2006)

  10. Hsieh, C.-J., Sustik, M.A., Dhillon, I.S., Ravikumar, P.: QUIC: quadratic approximation for sparse inverse covariance estimation. J. Mach. Learn. Res. 15, 2911–2947 (2014)

  11. Kim, S., Kojima, M., Mevissen, M., Yamashita, M.: Exploiting sparsity in linear and nonlinear matrix inequalities via positive semidefinite matrix completion. Math. Program. Ser. B 129(1), 33–68 (2011)

  12. Lauritzen, S.L.: Graphical Models. The Clarendon Press/Oxford University Press, Oxford (1996)

  13. Li, L., Toh, K.-C.: An inexact interior point method for ℓ1-regularized sparse covariance selection. Math. Program. Comput. 2(3–4), 291–315 (2010)

  14. Li, P., Xiao, Y.: An efficient algorithm for sparse inverse covariance matrix estimation based on dual formulation. Comput. Stat. Data Anal. 128, 292–307 (2018)

  15. Li, X.D., Sun, D.F., Toh, K.C.: A Schur complement based semi-proximal ADMM for convex quadratic conic programming and extensions. Math. Program. 155(1–2), 333–373 (2016)

  16. Li, X.D., Sun, D.F., Toh, K.C.: A block symmetric Gauss–Seidel decomposition theorem for convex composite quadratic programming and its applications. Math. Program. 175(1–2), 395–418 (2019)

  17. Lu, Z.: Adaptive first-order methods for general sparse inverse covariance selection. SIAM J. Matrix Anal. Appl. 31(4), 2000–2016 (2010)

  18. Tavakoli, R., Zhang, H.: A nonmonotone spectral projected gradient method for large-scale topology optimization problems. Numer. Algebra Control Optim. 2(2), 395–412 (2012)

  19. Ueno, G., Tsuchiya, T.: Covariance regularization in inverse space. Q. J. R. Meteorol. Soc. 135, 1133–1156 (2009)

  20. Wang, C.: On how to solve large-scale log-determinant optimization problems. Comput. Optim. Appl. 64, 489–511 (2016)

  21. Wang, C., Sun, D., Toh, K.-C.: Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm. SIAM J. Optim. 20(6), 2994–3013 (2010)

  22. Yang, J., Sun, D., Toh, K.-C.: A proximal point algorithm for log-determinant optimization with group lasso regularization. SIAM J. Optim. 23(2), 857–893 (2013)

  23. Yuan, X.: Alternating direction method for sparse covariance models. J. Sci. Comput. 51, 261–273 (2012)

  24. Zhang, R.Y., Fattahi, S., Sojoudi, S.: Large-scale sparse inverse covariance estimation via thresholding and max-det matrix completion. Proc. Mach. Learn. Res. 80, 5766–5775 (2018)


Author information

Correspondence to Sunyoung Kim.

Additional information

M. F. was partially supported by JSPS KAKENHI (Grant No. 26330024) and by the Research Institute for Mathematical Sciences, a Joint Usage/Research Center located in Kyoto University; S. K. was supported by NRF 2017-R1A2B2005119; M. Y. was partially supported by JSPS KAKENHI (Grant No. 18K11176).


About this article


Cite this article

Nakagaki, T., Fukuda, M., Kim, S. et al. A dual spectral projected gradient method for log-determinant semidefinite problems. Comput Optim Appl 76, 33–68 (2020). https://doi.org/10.1007/s10589-020-00166-2

