Tensorized Feature Extraction Technique for Multimodality Preserving Manifold Visualization

Journal of Mathematical Imaging and Vision

Abstract

Intrinsic multimodal structures are often encountered in real-life data, and reducing the dimensionality of such data without losing its local and multimodal information is important in multivariate data analysis. This paper proposes a locality preserving technique, which we refer to as Multimodality Preserving Tensorized Feature Extraction (MPTFE). MPTFE works in tensor space, extracting features directly from the matrix patterns of images, so the local topological information of the pixels is clearly preserved. In extracting informative features, MPTFE aims at finding a large-margin representation of the data that keeps its intrinsic characteristics while separating inter-class points. The orthogonal projection matrix of MPTFE is obtained efficiently by eigen-decomposition, so similarity based on Euclidean distance can be effectively preserved. The validity and feasibility of the proposed algorithm are verified through extensive simulations, including data visualization and clustering, on three benchmark real-life problems. The visualization results show that MPTFE captures the intrinsic characteristics of the given data and exhibits large margins between multiple objects and clusters. Clustering evaluations show that MPTFE delivers accuracy comparable to some widely used neighborhood preserving techniques.
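The tensor-space idea summarized above — extracting features directly from matrix patterns via a projection matrix obtained by eigen-decomposition, rather than vectorizing images first — can be sketched in a few lines. The following is a minimal 2DPCA-style illustration on made-up data, not the MPTFE objective itself; all sizes and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, w, d = 50, 16, 12, 4          # 50 images of size 16x12, keep 4 features
X = rng.standard_normal((n, h, w))  # matrix patterns, one per sample

# Image covariance built directly from the matrix patterns (no vectorization),
# so the column-wise pixel topology of each image is retained.
Xbar = X.mean(axis=0)
G = sum((Xi - Xbar).T @ (Xi - Xbar) for Xi in X) / n   # shape (w, w)

# Orthogonal projection from eigen-decomposition: top-d eigenvectors of G.
evals, evecs = np.linalg.eigh(G)        # eigenvalues in ascending order
V = evecs[:, ::-1][:, :d]               # (w, d), orthonormal columns
Y = X @ V                               # projected features, shape (n, h, d)

assert np.allclose(V.T @ V, np.eye(d), atol=1e-8)
print(Y.shape)  # (50, 16, 4)
```

Because the projection acts on the matrix pattern itself, each sample is reduced from a 16x12 matrix to a 16x4 matrix, illustrating how working in tensor space keeps rows of pixels intact.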




Acknowledgements

The authors would like to express their sincere thanks to the anonymous reviewers, whose comments and suggestions have raised the paper to a higher standard.

Corresponding author

Correspondence to Zhao Zhang.

Appendix: Computational Analysis of \(U_{y}\) and \(V_{y}\)

Let \(W^{( XY )}\) and \(Q^{( XY )}\) be two diagonal matrices whose entries are the column and row sums of the matrix \(A^{( XY )}\), respectively, that is, \(W_{ii}^{( XY )} = \sum_{j} A_{i,j}^{( XY )}\) and \(Q_{jj}^{( XY)} = \sum_{i} A_{i,j}^{( XY )}\). Since \(\|M\|^{2} = \operatorname{tr}(MM^{\mathrm{T}})\), we can obtain

(38)

where

Similarly

(39)

where

$$L_{V}^{ \leftarrow} = \sum_{i} D_{ii}^{( YY)}Y_{i}V_{y}V_{y}^{\mathrm{T}}Y_{i}^{\mathrm{T}},\quad S_{V}^{ \leftarrow} =\sum_{i,j} A_{i,j}^{( YY)}Y_{i}V_{y}V_{y}^{\mathrm{T}}Y_{j}^{\mathrm{T}}$$

and \(D^{( YY )}\) is a diagonal matrix whose elements are the column (or row) sums of \(A^{( YY )}\), i.e. \(D_{ii}^{( YY )} = \sum_{j} A_{i,j}^{( YY )}\). Similarly, because \(\|M\|^{2} = \operatorname{tr}(M^{\mathrm{T}}M)\), we also have

(40)

where

Similarly

(41)

where

Thus the projection matrices \(U_{y}\) and \(V_{y}\) can be obtained by solving the two eigenvector problems in (25) and (26).
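To make the appendix derivation concrete, here is a hedged numpy sketch on made-up data: it builds a degree matrix from a stand-in affinity \(A^{(YY)}\), verifies the trace identities \(\|M\|^{2} = \operatorname{tr}(MM^{\mathrm{T}}) = \operatorname{tr}(M^{\mathrm{T}}M)\) used above, forms \(L_{V}^{\leftarrow}\) and \(S_{V}^{\leftarrow}\) exactly as displayed, and extracts an orthogonal projection by eigen-decomposition. The affinity, the sizes, and the final choice of matrix to decompose are illustrative assumptions; the actual eigenproblems are those of (25) and (26):

```python
import numpy as np

rng = np.random.default_rng(0)
n, h, w, d = 5, 6, 4, 2                            # made-up sizes
Y = rng.standard_normal((n, h, w))                 # matrix samples Y_i
Vy = np.linalg.qr(rng.standard_normal((w, d)))[0]  # a fixed orthonormal V_y

# Stand-in symmetric affinity A^(YY) and its diagonal degree matrix D^(YY).
A = rng.random((n, n)); A = (A + A.T) / 2
D = np.diag(A.sum(axis=1))                         # D_ii = sum_j A_ij

# Frobenius-norm / trace identities used in the derivation.
M = Y[0]
assert np.isclose((M ** 2).sum(), np.trace(M @ M.T))
assert np.isclose((M ** 2).sum(), np.trace(M.T @ M))

# L_V and S_V exactly as displayed in the appendix.
L_V = sum(D[i, i] * Y[i] @ Vy @ Vy.T @ Y[i].T for i in range(n))
S_V = sum(A[i, j] * Y[i] @ Vy @ Vy.T @ Y[j].T
          for i in range(n) for j in range(n))

# Sanity check: for a symmetric affinity, the weighted pairwise scatter
# collapses to 2*tr(L_V - S_V), the standard Laplacian-style identity.
pair = sum(A[i, j] * np.sum(((Y[i] - Y[j]) @ Vy) ** 2)
           for i in range(n) for j in range(n))
assert np.isclose(pair, 2.0 * np.trace(L_V - S_V))

# An orthogonal projection then follows from the eigen-decomposition of a
# symmetric matrix such as L_V - S_V (the exact matrices are given by the
# eigenproblems (25)-(26) in the paper, not reproduced here).
B = (L_V - S_V + (L_V - S_V).T) / 2                # symmetrize vs. round-off
evals, evecs = np.linalg.eigh(B)                   # ascending eigenvalues
U_y = evecs[:, :3]                                 # e.g. 3 smallest eigenvectors
assert np.allclose(U_y.T @ U_y, np.eye(3), atol=1e-8)
```

Since the two projections depend on each other through terms like \(L_{V}^{\leftarrow}\), such bilinear methods are commonly solved by alternating the two eigen-decompositions until convergence.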


About this article

Cite this article

Wu, Sy., Zhang, Z. Tensorized Feature Extraction Technique for Multimodality Preserving Manifold Visualization. J Math Imaging Vis 44, 295–314 (2012). https://doi.org/10.1007/s10851-012-0327-1
