Abstract
Intrinsic multimodal structures are often encountered in real-life data, and reducing the dimensionality of such data without losing its local and multimodal information is important in multivariate data analysis. This paper proposes a locality preserving technique, which we refer to as Multimodality Preserving Tensorized Feature Extraction (MPTFE). MPTFE works in tensor space, extracting features directly from the matrix patterns of images, so the local topological information of the pixels is clearly preserved. In extracting informative features, MPTFE aims to find a large-margin representation of the data and to keep its intrinsic characteristics while separating inter-class points. The orthogonal projection matrix of MPTFE is efficiently obtained by eigen-decomposition, so similarity based on Euclidean distance can be effectively preserved. The validity and feasibility of the proposed algorithm are verified through extensive simulations, including data visualization and clustering, on three real-life benchmark problems. The visualization results show that the MPTFE algorithm is able to capture the intrinsic characteristics of the given data and exhibits large margins between multiple objects and clusters. Clustering evaluation results show that MPTFE delivers accuracy comparable to that of some widely used neighborhood preserving techniques.
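To make the construction described in the abstract concrete, the NumPy sketch below illustrates the general kind of bilinear (tensor-space) projection it refers to: each image matrix X_i is mapped to Y_i = U^T X_i V, with orthonormal factors U and V obtained by eigen-decomposition of affinity-weighted scatter matrices. The k-nearest-neighbor heat-kernel affinity and the LPP/TSA-style minimization used here are illustrative assumptions, not the exact MPTFE criterion derived in the paper.

import numpy as np

def knn_affinity(Xs, k=5, sigma=None):
    """Heat-kernel affinity between vectorized image matrices (assumed, for illustration)."""
    n = len(Xs)
    flat = np.array([X.ravel() for X in Xs])
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    if sigma is None:
        sigma = np.sqrt(np.median(d2))          # heuristic kernel width
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]       # k nearest neighbors, excluding the point itself
        A[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    return np.maximum(A, A.T)                   # symmetrize the neighborhood graph

def fit_bilinear_projection(Xs, A, p, q, n_iter=5):
    """Alternately solve for orthonormal U (r1 x p) and V (r2 x q).

    Minimizes sum_ij A_ij * ||U^T (X_i - X_j) V||^2 subject to U^T U = I and
    V^T V = I by keeping the eigenvectors of the weighted scatter matrices
    with the smallest eigenvalues (a locality-preserving construction shown
    only to illustrate the eigen-decomposition step).
    """
    n = len(Xs)
    r1, r2 = Xs[0].shape
    U, V = np.eye(r1)[:, :p], np.eye(r2)[:, :q]
    for _ in range(n_iter):
        # Fix V: column-side weighted scatter, keep the smallest-eigenvalue directions.
        S_u = sum(A[i, j] * (Xs[i] - Xs[j]) @ (V @ V.T) @ (Xs[i] - Xs[j]).T
                  for i in range(n) for j in range(n))
        U = np.linalg.eigh(S_u)[1][:, :p]
        # Fix U: row-side weighted scatter, treated analogously.
        S_v = sum(A[i, j] * (Xs[i] - Xs[j]).T @ (U @ U.T) @ (Xs[i] - Xs[j])
                  for i in range(n) for j in range(n))
        V = np.linalg.eigh(S_v)[1][:, :q]
    return U, V

# Example: project synthetic 16x16 image matrices to 2x2 feature matrices.
rng = np.random.default_rng(0)
Xs = [rng.standard_normal((16, 16)) for _ in range(40)]
A = knn_affinity(Xs, k=5)
U, V = fit_bilinear_projection(Xs, A, p=2, q=2)
Ys = [U.T @ X @ V for X in Xs]                  # low-dimensional matrix features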
Acknowledgements
The authors would like to express their sincere thanks to the anonymous reviewers for their comments and suggestions, which have helped raise the paper to a higher standard.
Appendix: Computational Analysis of \(U_{y}\) and \(V_{y}\)
Let \(W^{(XY)}\) and \(Q^{(XY)}\) be two diagonal matrices whose entries are column and row sums of the matrix \(A^{(XY)}\) respectively, that is \(W_{ii}^{(XY)} = \sum_{j} A_{i,j}^{(XY)}\) and \(Q_{jj}^{(XY)} = \sum_{i} A_{i,j}^{(XY)}\). Since \(\|M\|^{2} = \mathrm{tr}(MM^{T})\), we can obtain

where

Similarly

where
and \(D^{(YY)}\) is a diagonal matrix whose elements are column (or row) sums of \(A^{(YY)}\), i.e. \(D_{ii}^{(YY)} = \sum_{j} A_{i,j}^{(YY)}\). Similarly, because \(\|M\|^{2} = \mathrm{tr}(M^{T}M)\), we also have

where

Similarly

where

Thus the projection matrices \(U_{y}\) and \(V_{y}\) can be obtained by solving the two eigenvector problems in (25) and (26).
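As a small illustration of the bookkeeping above, the sketch below builds the diagonal sum matrices from given affinity matrices and reads off eigenvectors of a symmetric matrix. The scatter matrices of Eqs. (25) and (26) are not reproduced in this appendix, so S_u and S_v below are placeholders rather than the paper's exact expressions.

import numpy as np

def diagonal_sums(A_xy, A_yy):
    """Diagonal matrices defined in the derivation above."""
    W = np.diag(A_xy.sum(axis=1))   # W_ii^(XY) = sum_j A_{i,j}^(XY)
    Q = np.diag(A_xy.sum(axis=0))   # Q_jj^(XY) = sum_i A_{i,j}^(XY)
    D = np.diag(A_yy.sum(axis=1))   # D_ii^(YY) = sum_j A_{i,j}^(YY)
    return W, Q, D

def leading_eigenvectors(S, d):
    """Eigenvectors of the symmetric matrix S for its d largest eigenvalues.

    Whether (25) and (26) take the largest or the smallest eigenvalues depends
    on the sign of the criterion; flip the slice accordingly.
    """
    vals, vecs = np.linalg.eigh(S)  # eigenvalues in ascending order
    return vecs[:, ::-1][:, :d]

# U_y and V_y are then read off from the two eigenvector problems, e.g.
#   U_y = leading_eigenvectors(S_u, d1)
#   V_y = leading_eigenvectors(S_v, d2)
# where S_u and S_v stand for the matrices defined by Eqs. (25) and (26) in
# the main text (not reproduced here).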