Abstract
Dimensionality reduction on Riemannian manifolds is challenging due to the complex nonlinear structure of the data. While probabilistic principal geodesic analysis (PPGA) has been proposed to generalize conventional principal component analysis (PCA) onto manifolds, its effectiveness is limited to data with a single modality. In this paper, we present a novel Gaussian latent variable model that provides a unique way to integrate multiple PGA models into a maximum-likelihood framework. This leads to a well-defined mixture model of probabilistic principal geodesic analysis (MPPGA) on sub-populations, where parameters of the principal subspaces are automatically estimated by employing an Expectation Maximization algorithm. We further develop a mixture Bayesian PGA (MBPGA) model that automatically reduces data dimensionality by suppressing irrelevant principal geodesics. We demonstrate the advantages of our model in the contexts of clustering and statistical shape analysis, using synthetic sphere data, real corpus callosum data from human brain magnetic resonance (MR) images, and mandible data from CT images.
1 Introduction
PCA has been widely used to analyze high-dimensional data due to its effectiveness in finding the most important principal modes for data representation [12]. Motivated by the nice properties of probabilistic modeling, a latent variable model of PCA for factor analysis was presented in [18, 23]. Later, different variants of probabilistic PCA, including Bayesian PCA [2] and mixture models of PCA [4], were developed for automatic data dimensionality reduction and clustering, respectively. It is important to extend all these models from flat Euclidean spaces to general Riemannian manifolds, where the data typically satisfies smooth nonlinear constraints. For instance, an appropriate representation of directional data, i.e., vectors of unit length in \(\mathbb {R}^n\), is the sphere \(S^{n-1}\) [16]. Another important example of manifold data arises in shape analysis, where the definition of the shape of an object should not depend on its position, orientation, or scale, i.e., Kendall shape space [14]. Other examples of manifold data include geometric transformations such as rotations and translations, symmetric positive-definite tensors [10, 25], Grassmannian manifolds (the set of m-dimensional linear subspaces of \(\mathbb {R}^n\)), and Stiefel manifolds (the set of orthonormal m-frames in \(\mathbb {R}^n\)) [24].
Data dimensionality reduction on manifolds is challenging because the commonly used linear operations violate the natural constraints of manifold-valued data. In addition, basic statistical notions such as distance metrics and data distributions vary across different types of manifolds [14, 17, 24]. A groundbreaking work, known as principal geodesic analysis (PGA), was the first to generalize PCA to nonlinear manifolds [10]. This method describes the geometric variability of manifold data by finding lower-dimensional geodesic subspaces that minimize the residual sum-of-squared geodesic distances to the data. Later on, an exact solution to PGA [19, 20] and a robust formulation for estimating the output [1] were developed. A probabilistic interpretation of PGA (PPGA) was first introduced in [26], which paved the way for factor analysis on manifolds. Since PPGA only defines a single projection of the data, its applicability is limited to uni-modal distributions. A more natural solution is to model the multi-modal data structure with a collection or mixture of local sub-models. Current mixture models on a specific manifold generally employ a two-stage procedure: a clustering of the data projected into Euclidean space, followed by PCA within each cluster [6]. None of these algorithms defines a probability density.
In this paper, we derive a mixture of PGA models as a natural extension of PPGA [26], where all model parameters, including the low-dimensional factors for each data cluster, are estimated by maximizing a single likelihood function. The theoretical foundation of developing generative models of principal geodesic analysis for multi-population studies on general manifolds is new. In addition, the algorithmic inference of our proposed method is nontrivial due to the complicated geometry of manifold-valued data and the associated numerical issues. Compared to previous methods, the major advantages of our model are: (i) it leads to a unified algorithm that integrates soft data clustering and principal subspace estimation on general Riemannian manifolds; (ii) in contrast to the two-stage approach mentioned above, our model explicitly considers the reconstruction error of the principal modes as a criterion for clustering; and (iii) it provides a more powerful way to learn features from data in non-Euclidean spaces with multiple subpopulations. We showcase these advantages from two distinct perspectives: automatic data clustering and dimensionality reduction for analyzing shape variability. To validate the effectiveness of the proposed algorithm, we compare its performance with state-of-the-art methods on both synthetic and real datasets. We also briefly discuss a Bayesian version of our mixture PPGA model that provides automatic dimensionality selection on general manifold data.
2 Background: Riemannian Geometry and PPGA
In this section, we briefly review PPGA [26] defined on a smooth Riemannian manifold M, which is a generalization of PPCA [23] in Euclidean space. Before introducing the model, we first recap a few basic concepts of Riemannian geometry (more details are provided in [7]).
Covariant Derivative. The covariant derivative is a generalization of the Euclidean directional derivative to the manifold setting. Consider a curve \(c(t): [0,1] \rightarrow M\) and let \(\dot{c} = dc/dt\) be its velocity. Given a vector field V(t) defined along c, we can define the covariant derivative of V to be \(\frac{DV}{dt} = \nabla _{\dot{c}} V\), which measures the change of the vector field V along the direction of \(\dot{c}\). A vector field is called parallel if its covariant derivative along the curve c is zero. A curve c is a geodesic if it satisfies the equation \(\nabla _{\dot{c}} \dot{c} = 0\).
Exponential Map. For any point \(p \in M\) and tangent vector \(v \in T_p M\) (where \(T_p M\) denotes the tangent space of M at p), there exists a unique geodesic curve c with initial conditions \(c(0) = p\) and \(\dot{c}(0) = v\). This geodesic is only guaranteed to exist locally. The Riemannian exponential map at p is defined as \(\mathrm{Exp}\,_p(v) = c(1)\). In other words, the exponential map takes a position and a velocity as input and returns the point reached at time \(t=1\) along the geodesic with these initial conditions. Notice that the exponential map reduces to addition in Euclidean space, i.e., \(\mathrm{Exp}\,_p(v) = p + v\).
Logarithmic Map. The exponential map is locally diffeomorphic onto a neighborhood of p. Let V(p) be the largest such neighborhood; then the Riemannian log map, \(\mathrm{Log}\,_p: V(p) \rightarrow T_p M\), is the inverse of the exponential map within V(p). For any point \(q \in V(p)\), the Riemannian distance function is given by \(\mathrm{Dist}\,(p, q) = \Vert \mathrm{Log}\,_p(q)\Vert \). Similar to the exponential map, the logarithmic map reduces to subtraction in Euclidean space, i.e., \(\mathrm{Log}\,_p(q) = q - p\).
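For concreteness, the following is a minimal numerical sketch of these two maps on the unit sphere \(S^{n-1}\), with points represented as unit vectors in \(\mathbb {R}^n\); the function names are illustrative helpers of ours, not part of the paper.

```python
import numpy as np

def sphere_exp(p, v, eps=1e-12):
    """Exp_p(v): follow the geodesic from p with initial velocity v for unit time."""
    theta = np.linalg.norm(v)
    if theta < eps:                  # Exp_p(0) = p
        return p
    return np.cos(theta) * p + np.sin(theta) * (v / theta)

def sphere_log(p, q, eps=1e-12):
    """Log_p(q): tangent vector at p pointing toward q, of length Dist(p, q)."""
    cos_theta = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_theta)     # geodesic distance on the unit sphere
    if theta < eps:
        return np.zeros_like(p)
    u = q - cos_theta * p            # component of q orthogonal to p
    return theta * u / np.linalg.norm(u)

def sphere_dist(p, q):
    """Dist(p, q) = ||Log_p(q)||."""
    return np.linalg.norm(sphere_log(p, q))
```

As in the discussion above, these maps are only well defined within the injectivity radius; in particular, the log map sketched here is not defined for antipodal points.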
2.1 PPGA
Given a d-dimensional random variable \(y \in M\), the main idea of PPGA [26] is to model y as
\( y = \mathrm{Exp}\,\left( \mathrm{Exp}\,(\mu , Bx),\, \epsilon \right) , \qquad \qquad (1)\)
where \(\mu \) is a base point on M, \(x \in \mathbb {R}^q\) is a q-dimensional latent variable with \(x \sim \mathcal {N}(0, I)\), B is a \(d \times q\) factor matrix that relates x and y, and \(\epsilon \) represents error. We find it convenient to model the factors as \(B = W \varLambda \), where W is a matrix with q columns of mutually orthogonal tangent vectors in \(T_\mu M\), and \(\varLambda \) is a \(q \times q\) diagonal matrix of scale factors for the columns of W. This removes the rotation ambiguity of the latent factors and makes them analogous to the eigenvectors and eigenvalues of standard PCA (there is, of course, still an ambiguity in the ordering of the factors).
The likelihood of PPGA is defined by a generalization of the normal distribution \(\mathcal {N}(\mu , \tau ^{-1})\), called the Riemannian normal distribution, with precision parameter \(\tau \). Therefore, we have
\( p(y \,|\, \mu , \tau ) = \frac{1}{C(\tau )} \exp \left( -\frac{\tau }{2} \mathrm{Dist}\,(\mu , y)^{2}\right) , \quad C(\tau ) = \int _{M} \exp \left( -\frac{\tau }{2} \mathrm{Dist}\,(\mu , y)^{2}\right) dy. \qquad (2)\)
This distribution is applicable to any Riemannian manifold, and the value of \(C(\tau )\) in Eq. (2) does not depend on \(\mu \). It reduces to a multivariate normal distribution with isotropic covariance when \(M = \mathbb {R}^n\) (see [9] for details). Note that this noise model could be replaced with other distributions, depending on the application.
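As a hedged computational sketch, the unnormalized log-density of this distribution only requires the squared geodesic distance, so it can be evaluated with whichever distance function the manifold provides (e.g., sphere_dist above, or the Euclidean norm when \(M = \mathbb {R}^n\)); the helper name below is ours.

```python
def log_riemannian_normal_unnorm(y, mu, tau, dist_fn):
    """log p(y | mu, tau) + log C(tau): only the -tau/2 * Dist(mu, y)^2 term."""
    return -0.5 * tau * dist_fn(mu, y) ** 2
```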
Now, the PPGA model for a random variable y in Eq. (1) can be defined as
\( p(y \,|\, x) = \frac{1}{C(\tau )} \exp \left( -\frac{\tau }{2} \mathrm{Dist}\,\left( \mathrm{Exp}\,(\mu , W \varLambda x),\, y\right) ^{2}\right) , \quad x \sim \mathcal {N}(0, I). \qquad (3)\)
3 Our Model: Mixture Probabilistic Principal Geodesic Analysis (MPPGA)
We now introduce a mixture model of PPGA (MPPGA) for modeling complex multi-modal data structures. This formulation allows all model parameters to be estimated by maximum likelihood, where an appropriate data clustering and the associated principal modes are jointly optimized.
Consider observed data \(\{y_1, \cdots , y_N \}\) generated from K clusters on M (as shown in Fig. 1). We first introduce a K-dimensional binary random variable \(z_n\) whose k-th element \(z_{nk} \in \{0, 1\}\) indicates whether the n-th data point belongs to cluster k, where \(k \in \{1, \cdots , K\}\); that is, \(z_{nk}=1\) and all other elements of \(z_n\) are zero if \(y_{n}\) is in cluster k. The probability of each random variable \(z_{n}\) is
\( p(z_n) = \prod _{k=1}^{K} \pi _{k}^{z_{nk}}, \qquad \qquad (4)\)
where \(\pi _{k} \in [0, 1]\) is the model mixing coefficient that satisfies \(\sum \limits _{k=1}^{K}\pi _{k}=1\).
Analogous to PPGA in Eq. (1), the likelihood of each observed data point \(y_n\) is
\( p(y_n \,|\, x_{nk}, z_{nk}=1) = \frac{1}{C(\tau _k)} \exp \left( -\frac{\tau _k}{2} \mathrm{Dist}\,\left( \mathrm{Exp}\,(\mu _k, W_k \varLambda _k x_{nk}),\, y_n\right) ^{2}\right) , \qquad (5)\)
where \(x_{nk} \sim \mathcal {N}(0,I)\) is a latent random variable in \(\mathbb {R}^{q}\), \(\mu _k\) is the base point of cluster k, \(W_k\) is a matrix whose columns are mutually orthogonal tangent vectors in \(T_{\mu _k}M\), and \(\varLambda _k\) is a diagonal matrix of scale factors for the columns of \(W_k\).
Combining Eq. (4) with Eq. (5), we obtain the complete data likelihood
\( p(y, z \,|\, \theta ) = \prod _{n=1}^{N} \prod _{k=1}^{K} \left[ \pi _k \, p\left( y_n \,|\, x_{nk}, z_{nk}=1\right) \right] ^{z_{nk}}. \qquad (6)\)
The log of the data likelihood in Eq. (6) can be computed as
\( \mathcal {L} = \ln p(y, z \,|\, \theta ) = \sum _{n=1}^{N} \sum _{k=1}^{K} z_{nk} \left[ \ln \pi _k - \frac{\tau _k}{2} \mathrm{Dist}\,\left( \mathrm{Exp}\,(\mu _k, W_k \varLambda _k x_{nk}),\, y_n\right) ^{2} - \ln C(\tau _k) \right] . \qquad (7)\)
3.1 Inference
We employ a maximum likelihood expectation maximization (EM) method to estimate model parameters \(\theta =(\pi _k, \mu _k, W_k, \varLambda _k, \tau _k, x_{nk})\) and latent variables \(z_{nk}\). This scheme includes two main steps:
E-step. To treat the binary indicators \(z_{nk}\) fully as latent random variables, we integrate them out of the distribution defined in Eq. (6). Similar to typical Gaussian mixture models, the expected value of the complete-data log likelihood function is
\( \mathbb {E}[\mathcal {L}] = \sum _{n=1}^{N} \sum _{k=1}^{K} \mathbb {E}[z_{nk}] \left[ \ln \pi _k + \ln p\left( y_n \,|\, x_{nk}, z_{nk}=1\right) \right] . \qquad (8)\)
The expected value of the latent variable \(z_{nk}\), also known as the responsibility of component k for data point \(y_n\) [3], is then computed from its posterior distribution as
\( \mathbb {E}[z_{nk}] = \frac{\pi _k \, p\left( y_n \,|\, x_{nk}, z_{nk}=1\right) }{\sum _{j=1}^{K} \pi _j \, p\left( y_n \,|\, x_{nj}, z_{nj}=1\right) }. \qquad (9)\)
Recall that the Riemannian distance function is \(\mathrm{Dist}\,(p, q) = \Vert \mathrm{Log}\,_p(q)\Vert \). Letting \(\gamma _{nk} \triangleq \mathbb {E}[z_{nk}]\) and \(s_{nk} \triangleq W_k \varLambda _k x_{nk}\), we rewrite Eq. (8) as
\( \mathbb {E}[\mathcal {L}] = \sum _{n=1}^{N} \sum _{k=1}^{K} \gamma _{nk} \left[ \ln \pi _k - \frac{\tau _k}{2} \left\Vert \mathrm{Log}\,\left( \mathrm{Exp}\,(\mu _k, s_{nk}), y_n\right) \right\Vert ^{2} - \ln C(\tau _k) \right] , \qquad (10)\)
where \(C(\tau _k)\) is the normalizing constant of the Riemannian normal distribution in Eq. (2).
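The E-step, together with the closed-form update of the mixing coefficients given below, admits a direct numerical sketch; here `log_lik[n, k]` is assumed to hold the current value of \(\ln p(y_n \,|\, x_{nk}, z_{nk}=1)\), and all names are illustrative helpers of ours.

```python
import numpy as np

def e_step_responsibilities(pi, log_lik):
    """Return gamma with gamma[n, k] = E[z_nk]; each row sums to one."""
    log_post = np.log(pi)[None, :] + log_lik           # unnormalized log posterior of z_nk
    log_post -= log_post.max(axis=1, keepdims=True)    # stabilize the normalization
    gamma = np.exp(log_post)
    return gamma / gamma.sum(axis=1, keepdims=True)

def update_mixing_coefficients(gamma):
    """Closed-form M-step update: pi_k = (1/N) * sum_n gamma_nk."""
    return gamma.mean(axis=0)
```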
M-step. We use gradient ascent to maximize the expectation \(\mathbb {E} [\mathcal {L}]\) and update the parameters \(\theta \). Since the maximization over the mixing coefficient \(\pi _k\) is the same as in a Gaussian mixture model [3], we only give its final closed-form update here: \(\tilde{\pi }_{k}=\sum _{n=1}^{N} \gamma _{nk} / N\).
The computation of the gradient terms requires the derivative operator (Jacobian matrix) of the exponential map, i.e., \(d_{\mu _k} \mathrm{Exp}\,(\mu _k, s_{nk})\) and \(d_{s_{nk}} \mathrm{Exp}\,(\mu _k, s_{nk})\). Next, we briefly review the computation of these derivatives w.r.t. the base point \(\mu \) and the tangent vector s separately. Closed-form formulations of these derivatives on the sphere and in the 2D Kendall shape space are provided in [11, 26].
Derivative w.r.t. \(\mu \). Consider a variation of geodesics, e.g., \(c(h, t) = \mathrm{Exp}\,(\mathrm{Exp}\,(\mu , hu), ts(h))\), where \(u \in T_{\mu }M\) and s(h) is obtained by parallel translating s along the geodesic \(\mathrm{Exp}\,(\mu , hu)\). The derivative of this variation is a Jacobi field, \(J_{\mu }(t) = dc/dh(0, t)\), which gives an expression for the exponential map derivative as \(d_{\mu } \mathrm{Exp}\,(\mu , s)\, u = J_{\mu }(1)\) (as shown on the left panel of Fig. 2).
Derivative w.r.t. s. Consider a variation of geodesics, e.g., \(c(h, t) = \mathrm{Exp}\,(\mu , hu + ts)\). Again, the derivative of the exponential map is given by a Jacobi field \(J_s(t) = dc/dh(0, t)\), and we have \(d_s \mathrm{Exp}\,(\mu , s)\, u = J_s(1)\) (as shown on the right panel of Fig. 2).
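On the unit sphere, where the sectional curvature is constant, these Jacobi fields have simple closed forms; the sketch below (our own helpers, written under that constant-curvature assumption) evaluates both derivatives by decomposing the input vector into components along and orthogonal to s and parallel transporting the result to the endpoint of the geodesic.

```python
import numpy as np

def sphere_parallel_transport(u, p, v, eps=1e-12):
    """Parallel transport of tangent vector u at p along the geodesic t -> Exp_p(t v)."""
    theta = np.linalg.norm(v)
    if theta < eps:
        return u
    vhat = v / theta
    a = np.dot(u, vhat)
    return u + a * ((np.cos(theta) - 1.0) * vhat - np.sin(theta) * p)

def dexp_base(p, v, w, eps=1e-12):
    """d_p Exp(p, v) w: Jacobi field with J(0) = w, J'(0) = 0, evaluated at t = 1."""
    theta = np.linalg.norm(v)
    if theta < eps:
        return w
    vhat = v / theta
    w_par = np.dot(w, vhat) * vhat        # component along the geodesic direction
    w_perp = w - w_par                    # component orthogonal to it
    return sphere_parallel_transport(w_par + np.cos(theta) * w_perp, p, v)

def dexp_velocity(p, v, u, eps=1e-12):
    """d_v Exp(p, v) u: Jacobi field with J(0) = 0, J'(0) = u, evaluated at t = 1."""
    theta = np.linalg.norm(v)
    if theta < eps:
        return u
    vhat = v / theta
    u_par = np.dot(u, vhat) * vhat
    u_perp = u - u_par
    return sphere_parallel_transport(u_par + (np.sin(theta) / theta) * u_perp, p, v)
```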
Now we are ready to derive all gradient terms of \(\mathbb {E} [\mathcal {L}]\) in Eq. (10) w.r.t. the parameters \(\theta \). For better readability, we simplify the notation by defining \(\mathrm{Log}\,(\cdot ) \triangleq \mathrm{Log}\,\left( \mathrm{Exp}\,(\mu _{k}, s_{nk}), y_n\right) \) in the remaining sections.
Gradient for \(\mu _k\): the gradient of updating \(\mu _k\) is
where \(\dagger \) denotes the adjoint operator, i.e., for any tangent vectors \(\hat{u}\) and \(\hat{v}\),
\( \left\langle d_{\mu _k} \mathrm{Exp}\,(\mu _k, s_{nk})\, \hat{u},\, \hat{v} \right\rangle = \left\langle \hat{u},\, d_{\mu _k} \mathrm{Exp}\,(\mu _k, s_{nk})^{\dagger }\, \hat{v} \right\rangle . \)
Gradient for \(\tau _k\): the gradient of \(\tau _k\) is computed as
where \(A_{n-1}\) is the surface area of the \((n-1)\)-dimensional unit hypersphere, r is the radius, and \(\kappa \) is the sectional curvature. Here \(R=\min _{v} R(v)\), where R(v) is the maximum distance of \(\mathrm{Exp}\,(\mu _k, rv)\), with v being a point on the unit sphere \(S^{n-1} \subset T_{\mu _k} M\). While this formula is only valid for simply connected symmetric spaces, it can be adapted to other spaces by changing the definition of the probability density function in Eq. (2) accordingly.
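For the unit sphere \(S^2\) in particular, the normalizing constant and its derivative w.r.t. \(\tau \) reduce to one-dimensional integrals in geodesic polar coordinates (area element \(\sin r \, dr \, d\phi \), injectivity radius \(\pi \)). The sketch below evaluates them by numerical quadrature; it is a special-case illustration for \(S^2\) only, not the general formula referenced above.

```python
import numpy as np
from scipy.integrate import quad

def sphere2_norm_const(tau):
    """C(tau) = 2*pi * int_0^pi exp(-tau * r^2 / 2) * sin(r) dr on the unit 2-sphere."""
    val, _ = quad(lambda r: np.exp(-0.5 * tau * r ** 2) * np.sin(r), 0.0, np.pi)
    return 2.0 * np.pi * val

def sphere2_dnorm_const_dtau(tau):
    """dC/dtau, obtained by differentiating under the integral sign."""
    val, _ = quad(lambda r: -0.5 * r ** 2 * np.exp(-0.5 * tau * r ** 2) * np.sin(r),
                  0.0, np.pi)
    return 2.0 * np.pi * val
```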
To derive the gradient w.r.t. \(W_k, \varLambda _k\) and \(x_{nk}\), we need to compute \(d (\mathrm{Log}\,(\cdot )^2) / d s_{nk}\) first. Analogous to Eq. 11, we have
After applying the chain rule, we obtain the remaining gradient terms as follows:
Gradient for \(W_k\): the gradient term of \(W_k\) is
To maintain the mutual orthogonality of the columns of \(W_k\), we consider \(W_k\) as a point on the Stiefel manifold \(V_q(T_{\mu }M)\), i.e., the space of orthonormal q-frames in \(T_{\mu }M\), and project the gradient in Eq. (14) onto the tangent space \(T_{W_k} V_q(T_{\mu }M)\). We then update \(W_k\) by taking a small step along the geodesic in the projected gradient direction. For details on the Stiefel manifold, see [8].
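A sketch of this update is given below; for simplicity it re-orthonormalizes with a QR-based retraction instead of the exact Stiefel geodesic of [8], which is our own simplification (here W is a \(d \times q\) matrix with orthonormal columns and G the Euclidean gradient of Eq. (14)).

```python
import numpy as np

def stiefel_tangent_projection(W, G):
    """Project the ambient gradient G onto T_W V_q: G - W * sym(W^T G)."""
    sym = 0.5 * (W.T @ G + G.T @ W)
    return G - W @ sym

def stiefel_retract(W, xi, step):
    """Move from W along the tangent direction xi and re-orthonormalize via QR."""
    Q, R = np.linalg.qr(W + step * xi)
    return Q * np.sign(np.diag(R))   # fix the sign ambiguity of the QR factorization

def update_W(W, G, step):
    """One projected-gradient step for W_k."""
    return stiefel_retract(W, stiefel_tangent_projection(W, G), step)
```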
Gradient for \(\varLambda _k^a\): the gradient term of each a-th diagonal element of \(\varLambda _k\) is:
where \(W_k^{a}\) is the ath column of \(W_k\) and \(x_{nk}^{a}\) is the ath component of \(x_{nk}\).
Gradient for \(x_{nk}\): the gradient w.r.t. each \(x_{nk}\) is
3.2 Mixture Bayesian Principal Geodesic Analysis (MBPGA)
In this section, we further develop a Bayesian variant of MPPGA that provides automatic data dimensionality reduction. A critical issue in maximum likelihood estimation of principal geodesic analysis is the choice of the number of principal geodesics to be retained. This is also problematic in our MPPGA model, since we allow each cluster to have a principal subspace of different dimension, and an exhaustive search over the parameter space can become computationally intractable.
To address this issue, we develop a mixture Bayesian principal geodesic analysis (MBPGA) model that determines the number of principal modes automatically and thus avoids ad hoc parameter tuning. We introduce an automatic relevance determination (ARD) prior [3] on each a-th diagonal element of the eigenvalue matrix \(\varLambda \) as
Each hyper-parameter \(\beta ^a\) controls the inverse variance of its corresponding principal geodesic \(W^a\), the a-th column of the W matrix. This indicates that if \(\beta ^a\) is particularly large, the corresponding scale \(\varLambda ^a\) will be driven toward zero, and \(W^a\) will be effectively eliminated.
Incorporating this ARD prior into our MPPGA model defined in Eq. 7, we arrive at a log posterior distribution of \(\varLambda \) as
Analogous to the EM algorithm introduced in Sect. 3.1, we maximize over \(\varLambda ^a\) in the M-step by using the following gradient:
Similar to the ARD prior discussed in [2], the hyper-parameter \(\beta ^a\) can be effectively estimated by \(\beta ^a=d / \Vert \varLambda ^a\Vert ^2\), where d is the dimension of the original data space.
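A minimal sketch of this relevance-determination step is given below: each \(\beta ^a\) is re-estimated from the current scale \(\varLambda ^a\), and principal geodesics whose scales are driven toward zero are pruned. The pruning threshold is an illustrative choice of ours, not a value from the paper.

```python
import numpy as np

def update_ard_precisions(lam, d):
    """beta^a = d / ||Lambda^a||^2 for each diagonal scale in the 1-D array lam."""
    return d / (lam ** 2 + 1e-12)

def prune_principal_geodesics(W, lam, d, rel_threshold=1e-3):
    """Drop columns of W whose scales are negligible relative to the largest one."""
    keep = lam > rel_threshold * lam.max()
    return W[:, keep], lam[keep], update_ard_precisions(lam[keep], d)
```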
4 Evaluation
We demonstrate the effectiveness of our MPPGA and MBPGA models on both synthetic and real data, and compare with two baseline methods, K-means-PCA [6] and MPPCA [22], designed for multimodal Euclidean data. The geometric background of the sphere and Kendall shape space, including the computation of the Riemannian exponential map, log map, and Jacobi fields, can be found in [9, 26].
4.1 Data
Sphere. Using the generative model for PGA, we simulate a random sample of 764 data points on the unit sphere \(S^{2}\) with known parameters \(W, \varLambda , \tau \), and \(\pi \) (see Table 1). The data points form three clusters (Green: 200; Blue: 289; Black: 275). Note that the ground truth \(\mu \) is generated from uniformly random points on the sphere, and W is generated from a random Gaussian matrix, to which we apply the Gram-Schmidt algorithm to ensure its columns are orthonormal.
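A sketch of this construction is shown below; the QR factorization plays the role of the Gram-Schmidt step, and the optional projection onto the tangent space \(T_\mu S^{2}\) is an assumption we add to keep the columns tangent to the sphere at \(\mu \).

```python
import numpy as np

def random_orthonormal_factors(d, q, mu=None, rng=None):
    """Draw a random Gaussian d x q matrix and orthonormalize its columns."""
    rng = np.random.default_rng(rng)
    A = rng.standard_normal((d, q))
    if mu is not None:                    # (assumption) project into the tangent space at mu
        A -= np.outer(mu, mu @ A)
    Q, _ = np.linalg.qr(A)                # orthonormal columns, as after Gram-Schmidt
    return Q
```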
Corpus Callosum Shape. The corpus callosum data are derived from the publicly released Open Access Series of Imaging Studies (OASIS) database (www.oasis-brains.org). The dataset includes 32 magnetic resonance imaging scans of human brain subjects, with ages from 19 to 90. The corpus callosum is segmented in a midsagittal slice using the ITK-SNAP program (www.itksnap.org). The boundary of each segmentation is sampled with 64 points; this sampling of the shape boundaries enforces point correspondences across the population.
Mandible Shape. The mandible data are extracted from a collection of CT scans of human mandibles from 77 subjects (36 female vs. 41 male) aged 0 to 19. We sample \(2 \times 400\) points on the boundaries.
4.2 Experiments
We first run our EM estimation algorithm for both MPPGA and MBPGA to test whether we can recover the model parameters. To initialize the model parameters (e.g., the cluster mean \(\mu \), principal eigenvector matrix W, and eigenvalues \(\varLambda \)), we use the output of the K-means algorithm followed by linear PCA within each cluster. We distribute the weight uniformly across the mixing coefficients, i.e., \(\pi _k = 1/K\), and initialize all precision parameters \(\{\tau _k\}\) to 0.01. We compare our model with two existing algorithms, mixture of probabilistic principal component analyzers (MPPCA) [22] and K-means-PCA [6], performed in Euclidean space. For a fair comparison, we keep the number of clusters the same across all algorithms.
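A sketch of this initialization is given below, assuming the data are stored as an \(N \times d\) array of Euclidean-embedded points and using scikit-learn's KMeans and PCA; taking the square roots of the PCA eigenvalues as initial scales and mapping the cluster means back onto the manifold are our own choices, not specified in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def initialize_parameters(Y, K, q, seed=0):
    """K-means clustering followed by linear PCA within each cluster."""
    labels = KMeans(n_clusters=K, random_state=seed).fit_predict(Y)
    params = []
    for k in range(K):
        pca = PCA(n_components=q).fit(Y[labels == k])
        params.append({
            "mu": pca.mean_,                              # would be projected back onto M
            "W": pca.components_.T,                       # d x q orthonormal directions
            "Lambda": np.sqrt(pca.explained_variance_),   # initial scales per direction
            "pi": 1.0 / K,
            "tau": 0.01,
        })
    return labels, params
```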
To further investigate the applicability of our MPPGA model to real data, we test on 2D shapes of the corpus callosum to study brain degeneration. The idea is to identify shape differences between two sub-populations, healthy vs. control, by analyzing their shape variability. We also run the extended Bayesian version of our model, MBPGA, to automatically select a compact set of principal geodesics that represents the data variability. We perform similar experiments on the 2D mandible shape data to study group differences across genders, as well as within-group shape variability that reflects localized regions of growth.
4.3 Results
Figure 3 compares the estimated results of our MPPGA/MBPGA models with the two baseline methods, K-means-PCA and MPPCA. For the purpose of visualization, we project the principal modes estimated by K-means-PCA and MPPCA from Euclidean space onto the sphere. Our model automatically separates the sphere data into three groups, which aligns fairly well with the ground truth (Green: 200; Blue: 289; Black: 275). For the geodesics in each cluster (ground truth in yellow, model estimates in red), our results overlap the ground truth better than those of the baselines. This also indicates that our model recovers parameters closer to the truth (as shown in Table 1). In particular, the MBPGA model is able to automatically select an effective dimension of the principal subspaces to represent the data variability.
Figure 4 shows the shape variations estimated by our MPPGA and MBPGA models. The corpus callosum shapes are automatically clustered into two groups: healthy vs. control. An example of a segmented corpus callosum from brain MRI is shown in Fig. 4(a). Figures 4(b)–(e) show shape variations generated from points along the first principal geodesic, \(\mathrm{Exp}\,(\mu ,\alpha w^a)\) with \(a = 1\) and \(\alpha \in \{-2, -1, 0, 1, 2\} \times \sqrt{\lambda }\). The corpus callosum from the healthy group is significantly larger than that from the control group. Meanwhile, the anterior and posterior ends of the corpus callosum show larger variation than the mid-caudate region, which is consistent with previous studies.
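Such variation plots can be generated directly from an exponential-map implementation; the snippet below is a small sketch of ours, where exp_map stands for whichever Riemannian exponential map is appropriate for the shape space.

```python
import numpy as np

def principal_geodesic_variation(exp_map, mu, w_a, lam_a, steps=(-2, -1, 0, 1, 2)):
    """Shapes Exp(mu, alpha * w_a) for alpha in steps scaled by sqrt(lam_a)."""
    return [exp_map(mu, alpha * np.sqrt(lam_a) * w_a) for alpha in steps]
```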
Figure 5 shows that the eigenvalues estimated by MPPGA and MBPGA on the corpus callosum data are fairly close. Since the ARD prior introduced in MBPGA automatically suppresses irrelevant principal geodesics to zero, only 15 principal geodesics out of 128 are retained.
We use our MBPGA model to analyze the mandible shape data (2D examples are visualized in Fig. 6(a)), since MBPGA produces results fairly close to those of MPPGA while also offering automatic data dimensionality reduction. The MBPGA model reduces the original data dimension from \(d=800\) to \(d=70\). Figure 6(b)(c) displays the shape variations of mandibles from the male and female groups. It clearly shows that male mandibles generally have larger variations than female mandibles, which is consistent with previous studies [5]. In particular, male mandibles have a larger variation in the temporal crest and the base of the mandible.
5 Conclusion and Future Work
We presented a mixture model of PGA (MPPGA) on general Riemannian manifolds. We developed an Expectation Maximization algorithm for maximum likelihood estimation of the model parameters, including the underlying principal subspaces, together with automatic data clustering. This work takes the first step toward generalizing mixture models of principal mode analysis to Riemannian manifolds. A Bayesian variant of MPPGA (MBPGA) was also discussed for automatic dimensionality reduction. This model is particularly useful, as it avoids the singularities associated with maximum likelihood estimation by suppressing irrelevant information, e.g., outliers or noise. Our proposed model also paves the way for new tasks on manifolds such as hierarchical clustering and classification. Note that all experiments conducted in this paper use a predetermined number of clusters K (e.g., healthy vs. control in the corpus callosum data, or male vs. female in the mandible data). For datasets with a completely unknown number of clusters, methods such as the Elbow [15], Silhouette [13], and Gap statistic [21] methods can be used to determine the optimal number. This will be further investigated in our future work.
References
Banerjee, M., Jian, B., Vemuri, B.C.: Robust Fréchet mean and PGA on Riemannian manifolds with applications to neuroimaging. In: Niethammer, M., et al. (eds.) IPMI 2017. LNCS, vol. 10265, pp. 3–15. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-59050-9_1
Bishop, C.M.: Bayesian PCA. In: Advances in Neural Information Processing Systems, pp. 382–388 (1999)
Bishop, C.M.: Pattern Recognition and Machine Learning, pp. 500–600. Springer, New York (2006)
Chen, J., Liu, J.: Mixture principal component analysis models for process monitoring. Ind. Eng. Chem. Res. 38(4), 1478–1488 (1999)
Chung, M.K., Qiu, A., Seo, S., Vorperian, H.K.: Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images. Med. Image Anal. 22(1), 63–76 (2015)
Cootes, T.F., Taylor, C.J.: A mixture model for representing shape variation. Image Vis. Comput. 17(8), 567–573 (1999)
Do Carmo, M.: Riemannian Geometry. Birkhauser (1992)
Edelman, A., Arias, T.A., Smith, S.T.: The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl. 20(2), 303–353 (1998)
Fletcher, P.T.: Geodesic regression and the theory of least squares on Riemannian manifolds. Int. J. Comput. Vis. 105(2), 171–185 (2013)
Fletcher, P.T., Lu, C., Pizer, S.M., Joshi, S.: Principal geodesic analysis for the study of nonlinear statistics of shape. IEEE Trans. Med. Imaging 23(8), 995–1005 (2004)
Fletcher, P.T., Zhang, M.: Probabilistic geodesic models for regression and dimensionality reduction on Riemannian manifolds. In: Turaga, P.K., Srivastava, A. (eds.) Riemannian Computing in Computer Vision, pp. 101–121. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-22957-7_5
Jolliffe, I.T.: Principal component analysis and factor analysis. In: Jolliffe, I.T. (ed.) Principal Component Analysis. Springer Series in Statistics, pp. 115–128. Springer, New York (1986). https://doi.org/10.1007/978-1-4757-1904-8_7
Kaufman, L., Rousseeuw, P.J.: Partitioning around medoids (program PAM). In: Finding Groups in Data: An Introduction to Cluster Analysis, pp. 68–125 (1990)
Kendall, D.G.: Shape manifolds, procrustean metrics, and complex projective spaces. Bull. Lond. Math. Soc. 16(2), 81–121 (1984)
Ketchen, D.J., Shook, C.L.: The application of cluster analysis in strategic management research: an analysis and critique. Strat. Manag. J. 17(6), 441–458 (1996)
Mardia, K.V., Jupp, P.E.: Directional Statistics, vol. 494. Wiley, Hoboken (2009)
Obata, M.: Certain conditions for a Riemannian manifold to be isometric with a sphere. J. Math. Soc. Jpn. 14(3), 333–340 (1962)
Roweis, S.T.: EM algorithms for PCA and SPCA. In: Advances in Neural Information Processing Systems, pp. 626–632 (1998)
Sommer, S., Lauze, F., Hauberg, S., Nielsen, M.: Manifold valued statistics, exact principal geodesic analysis and the effect of linear approximations. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6316, pp. 43–56. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-15567-3_4
Sommer, S., Lauze, F., Nielsen, M.: Optimization over geodesics for exact principal geodesic analysis. Adv. Comput. Math. 40(2), 283–313 (2014)
Tibshirani, R., Walther, G., Hastie, T.: Estimating the number of clusters in a data set via the gap statistic. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 63(2), 411–423 (2001)
Tipping, M.E., Bishop, C.M.: Mixtures of probabilistic principal component analyzers. Neural Comput. 11(2), 443–482 (1999)
Tipping, M.E., Bishop, C.M.: Probabilistic principal component analysis. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 61(3), 611–622 (1999)
Turaga, P., Veeraraghavan, A., Srivastava, A., Chellappa, R.: Statistical computations on Grassmann and Stiefel manifolds for image and video-based recognition. IEEE Trans. Pattern Anal. Mach. Intell. 33(11), 2273–2286 (2011)
Tuzel, O., Porikli, F., Meer, P.: Pedestrian detection via classification on Riemannian manifolds. IEEE Trans. Pattern Anal. Mach. Intell. 30(10), 1713–1727 (2008)
Zhang, M., Fletcher, P.T.: Probabilistic principal geodesic analysis. In: Advances in Neural Information Processing Systems, pp. 1178–1186 (2013)