
Adaptive graph-regularized fixed rank representation for subspace segmentation


Abstract

Low-rank representation (LRR) has shown great power in subspace segmentation tasks. However, by exploiting matrix factorization, fixed-rank representation (FRR) outperforms LRR in many subspace segmentation applications. In this paper, based on an in-depth analysis of FRR, we propose a new graph-regularized FRR method, termed adaptive graph-regularized fixed-rank representation (AGFRR). Unlike existing methods, which build the graph regularizer for a reconstruction coefficient matrix from the original data set, AGFRR uses one of the matrix factors of the reconstruction coefficient matrix to construct the graph regularizer for the reconstruction matrix itself. We argue that the constructed graph regularizer discovers the manifold structure of a given data set more faithfully, so AGFRR is better suited than FRR to revealing the nonlinear subspace structures of data sets. Moreover, an optimization algorithm for solving the AGFRR problem is provided. Finally, subspace segmentation experiments on both synthetic and real-world data sets show that AGFRR is superior to existing LRR- and FRR-related algorithms.


Notes

  1. Other matrix norms could also be adopted, e.g., the \(\ell _{1}\)-norm \(\Vert \mathbf {E}\Vert _{1}=\sum _{i=1}^n\sum _{j=1}^n|[\mathbf {E}]_{ij}|\) and the Frobenius norm \(\Vert \mathbf {E}\Vert _F=\sqrt{\sum _{i=1}^n\sum _{j=1}^n[\mathbf {E}]_{ij}^2}\), where \([\mathbf {E}]_{ij}\) denotes the \((i,j)\)-th element of matrix \(\mathbf {E}\) (a short NumPy sketch follows after these notes).

  2. It also contains one sequence of 5 motions, called “dancing”. We omit this sub-database in our experiments.

  3. Segmentation error = 1 − segmentation accuracy.
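
As a quick illustration of the two norms in Note 1 (the matrix E below is arbitrary example data, not from the paper), they can be computed with NumPy as follows:

```python
import numpy as np

E = np.array([[1.0, -2.0],
              [3.0,  4.0]])                # arbitrary example matrix

l1_norm = np.abs(E).sum()                  # entrywise l1-norm: sum_ij |E_ij| -> 10.0
fro_norm = np.sqrt((E ** 2).sum())         # Frobenius norm; equals np.linalg.norm(E)
```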

References

  1. Elhamifar E, Vidal R (2009) Sparse subspace clustering. Proc IEEE Conf Comput Vis Pattern Recog 2:2790–2797


  2. Liu G, Lin Z, Yu Y (2010) Robust subspace segmentation by low-rank representation. In: Proceedings of the 27th international conference on machine learning, pp 1–8

  3. Rao S, Tron R, Vidal R, Ma Y (2010) Motion segmentation in the presence of outlying, incomplete, or corrupted trajectories. IEEE Trans Pattern Anal Mach Intell 32(10):1832–1845


  4. Ma Y, Derksen H, Hong W, Wright J (2007) Segmentation of multivariate mixed data via lossy coding and compression. IEEE Trans Pattern Anal Mach Intell 29(9):1546–1562


  5. Wright J, Peng Y, Ma Y, Ganesh A, Rao S (2009) Robust principal component analysis: exact recovery of corrupted low-rank matrices by convex optimization. In: NIPS

  6. Belhumeur P, Hespanha J, Kriegman D (1997) Eigenfaces versus fisherfaces: recognition using class specific linear projection. IEEE Trans Pattern Anal Mach Intell 19(7):711–720


  7. Bao BK, Liu G, Xu C, Yan S (2012) Inductive robust principal component analysis. IEEE Trans Image Process 21(8):3794–3800


  8. Vidal R, Favaro P (2014) Low rank subspace clustering (LRSC). Pattern Recognit Lett 43:47–61


  9. Zhuang L, Gao H, Lin Z, Ma Y, Zhang X, Yu N (2012) Non-negative low rank and sparse graph for semi-supervised learning. In: CVPR, pp 2328–2335

  10. Zheng Y, Zhang X, Yang S, Jiao L (2013) Low-rank representation with local constraint for graph construction. Neurocomputing 122:398–405


  11. Tang K, Liu R, Zhang J (2014) Structure-constrained low-rank representation. IEEE Trans Neural Netw Learn Syst 25(12):2167–2179


  12. Wei L, Wang X, Yin J, Wu A (2016) Spectral clustering steered low-rank representation for subspace segmentation. J Vis Commun Image Represent 38:386–395


  13. Lu X, Wang Y, Yuan Y (2013) Graph-regularized low-rank representation for destriping of hyperspectral images. IEEE Trans Geosci Remote Sens 51(7–1):4009–4018


  14. Yin M, Gao J, Lin Z (2015) Laplacian regularized low-rank representation and its applications. IEEE Trans Pattern Anal Mach Intell 38(3):504–517


  15. Chen J, Yang J (2014) Robust subspace segmentation via low-rank representation. IEEE Trans Cybern 44:1432–1445


  16. Jiang J, Yang J, Cui Y, Luo L (2015) Mixed noise removal by weighted low rank model. Neurocomputing 151:817–826


  17. Liu G, Yan S (2011) Latent low-rank representation for subspace segmentation and feature extraction. In: ICCV

  18. Liu R, Lin Z, De la Torre F, Su Z (2012) Fixed-rank representation for unsupervised visual learning. In: CVPR

  19. Li P, Bu J, Xu B, He Z, Chen C, Cai D (2015) Sparse fixed-rank representation for robust visual analysis. Signal Process 110:222–231


  20. Duda RO, Hart PE, Stork DG (2000) Pattern classification, 2nd edn. Wiley-Interscience, New York


  21. Roweis ST, Saul LK (2000) Nonlinear dimensionality reduction by locally linear embedding. Science 290:2323–2326


  22. Liu W, Ma X, Zhou Y, Tao D, Cheng J (2018) p-Laplacian regularization for scene recognition. IEEE Trans Cybern. https://doi.org/10.1109/TCYB.2018.2833843


  23. Liu W, Yang X, Tao D, Cheng J, Tang Y (2018) Multiview dimension reduction via Hessian multiset canonical correlations. Inf Fusion 41:119–128


  24. Lin Z, Chen M, Wu L, Ma Y (2009) The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215, UIUC, Champaign, IL, USA

  25. Cai JF, Candes EJ, Shen Z (2010) A singular value thresholding algorithm for matrix completion. SIAM J Optim 20(4):1956–1982


  26. Cheng B, Yang J, Yan S, Fu Y, Huang TS (2010) Learning with L1-graph for image analysis. IEEE Trans Image Process 19(4):858–866


  27. Belkin M, Niyogi P (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput 15(6):1373–1396


  28. Lin Z, Liu R, Su Z (2011) Linearized alternating direction method with adaptive penalty for low-rank representation. In: Shawe-Taylor J, Zemel RS, Bartlett PL, Pereira FCN, Weinberger KQ (eds) Proceedings of the neural information processing systems, NIPS 2011, Granada, Spain, pp 612–620

  29. Samaria F, Harter A (1994) Parameterisation of a stochastic model for human face identification. In: Proceedings of the 2nd IEEE workshop on applications of computer vision, pp 138–142

  30. Phillips PJ, Wechsler H, Huang J, Rauss PJ (1998) The FERET database and evaluation procedure for face-recognition algorithms. Image Vis Comput 16(5):295–306


  31. Lee KC, Ho J, Kriegman DJ (2005) Acquiring linear subspaces for face recognition under variable lighting. IEEE Trans Pattern Anal Mach Intell 27(5):684–698


  32. Bartels RH, Stewart GW (1972) Solution of the matrix equation AX + XB = C. Commun ACM 15(9):820–826



Acknowledgements

Jun Yin’s work is supported by the National Natural Science Foundation of China (No. 61603243).


Corresponding author

Correspondence to Lai Wei.

Ethics declarations

Conflict of interest

The authors declare that they have no financial or personal relationships with other people or organizations that could inappropriately influence this work, and no professional or personal interest of any nature or kind in any product, service, and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Following the linearized ADM methodology [28], the augmented Lagrangian function of Eq. (4) can be written as follows:

$$\begin{aligned} \mathcal {L} ={}& \Vert \mathbf {Z}-\mathbf {W}\mathbf {V}\Vert _F^2+\beta \mathcal {R}(\mathbf {Z},\mathbf {V})+\lambda \Vert \mathbf {E}\Vert _{2,1}\\ &+\langle \mathbf {Y}_1,\mathbf {X}-\mathbf {X}\mathbf {Z}-\mathbf {E}\rangle +\langle \mathbf {Y}_2,\mathbf {1}_n\mathbf {Z}-\mathbf {1}_n\rangle \\ &+\frac{\mu }{2}\big (\Vert \mathbf {X}-\mathbf {X}\mathbf {Z}-\mathbf {E}\Vert _F^2+\Vert \mathbf {1}_n\mathbf {Z}-\mathbf {1}_n\Vert _F^2\big ). \end{aligned}$$
(16)

Then the variables \(\mathbf {V}\), \(\mathbf {W}\), \(\mathbf {Z}\) and \(\mathbf {E}\) can be optimized by alternately minimizing \(\mathcal {L}\). The updates for \(\mathbf {V}\), \(\mathbf {W}\) and \(\mathbf {E}\) are similar to those in Algorithm 1; hence, we focus on how to optimize \(\mathbf {Z}\).
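
Before turning to \(\mathbf {Z}\), here is a minimal sketch of the \(\mathbf {E}\)-update, assuming it takes the standard \(\ell _{2,1}\) proximal form used in LRR-style LADM solvers (the appendix defers this step to Algorithm 1, which is not reproduced here): minimize \(\lambda \Vert \mathbf {E}\Vert _{2,1}+\frac{\mu }{2}\Vert \mathbf {E}-\mathbf {G}\Vert _F^2\) with \(\mathbf {G}=\mathbf {X}-\mathbf {X}\mathbf {Z}+\mathbf {Y}_1/\mu \), solved by column-wise shrinkage. The helper name update_E is our illustrative choice.

```python
import numpy as np

def update_E(X, Z, Y1, lam, mu):
    """E-step sketch: argmin_E  lam*||E||_{2,1} + (mu/2)*||E - G||_F^2,
    with G = X - X Z + Y1/mu, solved by the column-wise l2,1 shrinkage
    operator standard in LRR-type solvers."""
    G = X - X @ Z + Y1 / mu
    col_norms = np.linalg.norm(G, axis=0)            # l2 norm of each column of G
    scale = np.maximum(0.0, 1.0 - (lam / mu) / np.maximum(col_norms, 1e-12))
    return G * scale                                 # broadcasts over columns
```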

Update \(\mathbf {Z}\) with the other variables fixed. Dropping the terms in Eq. (16) that are irrelevant to \(\mathbf {Z}\), we obtain

$$\begin{aligned} &\min _{\mathbf {Z}}\ \Vert \mathbf {Z}-\mathbf {W}\mathbf {V}\Vert _F^2+\beta \mathcal {R}(\mathbf {Z},\mathbf {V})+\langle \mathbf {Y}_1,\mathbf {X}-\mathbf {X}\mathbf {Z}-\mathbf {E}\rangle \\ &\qquad +\langle \mathbf {Y}_2,\mathbf {1}_n\mathbf {Z}-\mathbf {1}_n\rangle +\frac{\mu }{2}\big (\Vert \mathbf {X}-\mathbf {X}\mathbf {Z}-\mathbf {E}\Vert _F^2+\Vert \mathbf {1}_n\mathbf {Z}-\mathbf {1}_n\Vert _F^2\big )\\ &\quad =\min _{\mathbf {Z}}\ \Vert \mathbf {Z}-\mathbf {W}\mathbf {V}\Vert _F^2+\beta \,\mathrm {tr}(\mathbf {Z}\mathbf {L}_{\mathbf {V}}\mathbf {Z}^T)\\ &\qquad +\frac{\mu }{2}\big (\Vert \mathbf {X}-\mathbf {X}\mathbf {Z}-\mathbf {E}+\mathbf {Y}_1/\mu \Vert _F^2+\Vert \mathbf {1}_n\mathbf {Z}-\mathbf {1}_n+\mathbf {Y}_2/\mu \Vert _F^2\big ). \end{aligned}$$
(17)

Taking the derivative of the above objective w.r.t. \(\mathbf {Z}\) and setting it to zero (note that the derivative of \(\Vert \mathbf {Z}-\mathbf {W}\mathbf {V}\Vert _F^2\) contributes both \(2\mathbf {Z}\) and \(-2\mathbf {W}\mathbf {V}\)), the following equation holds:

$$\begin{aligned} &\big [2\mathbf {I}_n+\mu (\mathbf {X}^T\mathbf {X}+\mathbf {1}_n^T\mathbf {1}_n)\big ]\mathbf {Z}+2\beta \mathbf {Z}\mathbf {L}_{\mathbf {V}}-2\mathbf {W}\mathbf {V}\\ &\quad -\mu \big [\mathbf {X}^T(\mathbf {X}-\mathbf {E}+\mathbf {Y}_1/\mu )+\mathbf {1}_n^T(\mathbf {1}_n-\mathbf {Y}_2/\mu )\big ]=\mathbf {0}. \end{aligned}$$
(18)

It can be seen that Eq. (18) is a Sylvester equation [32] w.r.t. \(\mathbf {Z}\), which can be solved by using the MATLAB function lyap().
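
To make the mapping to a standard solver concrete, the sketch below rearranges Eq. (18) into the form \(\mathbf {A}\mathbf {Z}+\mathbf {Z}\mathbf {B}=\mathbf {Q}\) and solves it with SciPy's solve_sylvester, a Python alternative to MATLAB's lyap(). The helper name update_Z and the variable layout (X of size d×n, Z of size n×n, Y2 a 1×n row) are our illustrative assumptions, not code from the paper.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def update_Z(X, W, V, E, Y1, Y2, L_V, mu, beta):
    """Solve Eq. (18) as the Sylvester equation  A Z + Z B = Q  with
    A = 2 I_n + mu (X^T X + 1_n^T 1_n),   B = 2 beta L_V,
    Q = 2 W V + mu [X^T (X - E + Y1/mu) + 1_n^T (1_n - Y2/mu)]."""
    n = X.shape[1]
    ones = np.ones((1, n))                           # the row vector 1_n
    A = 2.0 * np.eye(n) + mu * (X.T @ X + ones.T @ ones)
    B = 2.0 * beta * L_V
    Q = 2.0 * W @ V + mu * (X.T @ (X - E + Y1 / mu) + ones.T @ (ones - Y2 / mu))
    return solve_sylvester(A, B, Q)                  # solves A Z + Z B = Q
```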

The entire algorithmic procedure for solving Problem (4) via linearized ADM is summarized in Algorithm 2.

[Algorithm 2: solving Problem (4) via linearized ADM]
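
Since the Algorithm 2 figure is not reproduced here, the following Python sketch shows one plausible shape of the overall loop, reusing the update_Z and update_E sketches above. The callbacks update_V and build_laplacian stand in for steps the appendix defers to Algorithm 1 (the \(\mathbf {V}\)-update and the construction of \(\mathbf {L}_{\mathbf {V}}\) from the factor \(\mathbf {V}\)), and the initialization, stopping rule, and penalty schedule (rho, mu_max, eps) are typical LADM choices rather than the paper's exact settings.

```python
import numpy as np

def agfrr_ladm(X, r, lam, beta, update_V, build_laplacian,
               rho=1.1, mu=1e-2, mu_max=1e6, eps=1e-6, max_iter=500):
    """Hedged sketch of the linearized ADM loop for Problem (4)."""
    d, n = X.shape
    Z, E = np.zeros((n, n)), np.zeros((d, n))
    Y1, Y2 = np.zeros((d, n)), np.zeros((1, n))      # Lagrange multipliers
    W, V = np.random.randn(n, r), np.random.randn(r, n)
    ones = np.ones((1, n))
    for _ in range(max_iter):
        V = update_V(Z, W, V, beta)                  # hypothetical: graph factor update
        W = Z @ V.T @ np.linalg.pinv(V @ V.T)        # least squares: min ||Z - W V||_F^2
        L_V = build_laplacian(V)                     # hypothetical: Laplacian built from V
        Z = update_Z(X, W, V, E, Y1, Y2, L_V, mu, beta)  # Sylvester step, Eq. (18)
        E = update_E(X, Z, Y1, lam, mu)              # l2,1 proximal step
        R1, R2 = X - X @ Z - E, ones @ Z - ones      # primal residuals
        Y1, Y2 = Y1 + mu * R1, Y2 + mu * R2          # multiplier (dual) ascent
        mu = min(rho * mu, mu_max)                   # typical LADM penalty schedule
        if max(np.abs(R1).max(), np.abs(R2).max()) < eps:
            break
    return Z, E
```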


Cite this article

Wei, L., Zhou, R., Zhu, C. et al. Adaptive graph-regularized fixed rank representation for subspace segmentation. Pattern Anal Applic 23, 443–453 (2020). https://doi.org/10.1007/s10044-019-00786-3
