Abstract
Euclidean distance matrix optimization with ordinal constraints (EDMOC) has found important applications in sensor network localization and molecular conformation. It can also be viewed as a matrix formulation of multidimensional scaling, which is to embed n points in an r-dimensional space such that the resulting distances follow the ordinal constraints. The ordinal constraints, though proved to be quite useful, may leave only the zero solution when too many of them are added, which puts the feasibility of EDMOC in question. In this paper, we first study the feasibility of EDMOC systematically. We show that if \(r\ge n-2\), EDMOC always admits a nontrivial solution. Otherwise, it may have only the zero solution. The latter case explains the numerically observed 'crowding phenomenon'. Next we overcome two obstacles in designing fast algorithms for EDMOC, namely the low-rank constraint and the potentially huge number of ordinal constraints. We apply the technique developed in Zhou et al. (IEEE Trans Signal Process 66(3):4331–4346, 2018) to treat the low-rank constraint as the conditional positive semidefinite cone with a rank cut. This leads to a majorization penalty approach. The ordinal constraints are left to the subproblem, which is exactly a weighted isotonic regression and can be solved by an enhanced implementation of the Pool Adjacent Violators Algorithm (PAVA). Extensive numerical results demonstrate the superior performance of the proposed approach over some state-of-the-art solvers.
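The subproblem referred to above is a weighted isotonic regression. As a point of reference, the following minimal Python sketch implements the classical PAVA (a textbook version, not the enhanced implementation used in the paper) for this kind of subproblem; the function name and interface are illustrative only.

```python
import numpy as np

def pava(y, w):
    """Weighted isotonic regression by Pool Adjacent Violators:
    minimize sum_i w[i]*(x[i]-y[i])**2 subject to x[0] <= ... <= x[n-1]."""
    blocks = []  # each block: [weighted mean, total weight, length]
    for yi, wi in zip(np.asarray(y, float), np.asarray(w, float)):
        blocks.append([yi, wi, 1])
        # Pool adjacent blocks while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, n1 + n2])
    # Expand the pooled block means back to a full solution vector.
    return np.concatenate([np.full(n, m) for m, _, n in blocks])

print(pava([3.0, 1.0, 2.0, 5.0], [1.0, 1.0, 1.0, 1.0]))  # -> [2. 2. 2. 5.]
```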
Notes
The code can be downloaded from http://www-stat.stanford.edu/candes/SortedSL1
The code can be downloaded from http://tosca.cs.technion.ac.il
References
Bai, S.H., Qi, H.D.: Tackling the flip ambiguity in wireless sensor network localization and beyond. Digital Signal Process. 55(C), 85–97 (2016)
Barlow, R.E., Bartholomew, D.J., Bremner, J.M., Brunk, H.D.: Statistical Inference Under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley, New York (1973)
Berman, H.M., Westbrook, J., Feng, Z.K., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., Bourne, P.E.: The protein data bank. Nucl. Acids Res. 28(1), 235–242 (2000)
Biswas, P., Liang, T.C., Toh, K.C., Ye, Y., Wang, T.C.: Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Trans. Autom. Sci. Eng. 3(4), 360–371 (2006)
Biswas, P., Ye, Y.Y.: Semidefinite programming for ad hoc wireless sensor network localization. In: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pp. 46–54 (2004)
Bogdan, M., van den Berg, E., Sabatti, C., Su, W., Candès, E.J.: SLOPE-adaptive variable selection via convex optimization. Ann. Appl. Stat. 9(3), 1103–1140 (2015)
Borg, I., Groenen, P.: Modern multidimensional scaling: theory and applications. J. Educ. Meas. 40(3), 277–280 (2010)
Borg, I., Groenen, P.J.F.: Modern Multidimensional Scaling. Springer, Berlin (2005)
Cox, T.F., Cox, M.A.A.: Multidimensional Scaling. Chapman and Hall/CRC, London (2000)
Dattorro, J.: Convex Optimization and Euclidean Distance Geometry. Meboo, Mountain View (2005)
De Leeuw, J.: Applications of convex analysis to multidimensional scaling. Recent Developments in Statistics, pp. 133–146 (2011)
De Leeuw, J., Mair, P.: Multidimensional scaling using majorization: SMACOF in R. J. Stat. Softw. 31(3), 1–30 (2009)
Ding, C., Qi, H.D.: Convex Euclidean distance embedding for collaborative position localization with NLOS mitigation. Comput. Optim. Appl. 66(1), 187–218 (2017)
Elte, E.L.: The Semiregular Polytopes of the Hyperspaces. Hoitsema, Groningen (1912)
Fang, X.Y., Toh, K.C.: Using a distributed SDP approach to solve simulated protein molecular conformation problems. In: Distance Geometry, pp. 351–376. Springer (2013)
Gao, Y., Sun, D.F.: Calibrating least squares covariance matrix problems with equality and inequality constraints. SIAM J. Matrix Anal. Appl. 31(3), 1432–1457 (2009)
Glunt, W., Hayden, T.L., Raydan, M.: Molecular conformations from distance matrices. J. Comput. Chem. 14(1), 114–120 (1993)
Gower, J.C.: Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra Appl. 67, 81–97 (1985)
Kruskal, J.B.: Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29(1), 1–27 (1964)
Kruskal, J.B.: Nonmetric multidimensional scaling: a numerical method. Psychometrika 29(2), 115–129 (1964)
Leung, N.H.Z., Toh, K.C.: An SDP-based divide-and-conquer algorithm for large-scale noisy anchor-free graph realization. SIAM J. Sci. Comput. 31(6), 4351–4372 (2009)
Li, Q.N., Qi, H.D.: An inexact smoothing Newton method for Euclidean distance matrix optimization under ordinal constraints. J. Comput. Math. 35(4), 469–485 (2017)
Liberti, L., Lavor, C., Maculan, N., Mucherino, A.: Euclidean distance geometry and applications. SIAM Rev. 56(1), 3–69 (2014)
Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I. Springer, Berlin (2006)
Qi, H.D.: A semismooth Newton's method for the nearest Euclidean distance matrix problem. SIAM J. Matrix Anal. Appl. 34(1), 67–93 (2013)
Qi, H.D.: Conditional quadratic semidefinite programming: examples and methods. J. Oper. Res. Soc. China 2(2), 143–170 (2014)
Qi, H.D., Xiu, N.H., Yuan, X.M.: A Lagrangian dual approach to the single-source localization problem. IEEE Trans. Signal Process. 61(15), 3815–3826 (2013)
Qi, H.D., Yuan, X.M.: Computing the nearest Euclidean distance matrix with low embedding dimensions. Math. Program. 147(1–2), 351–389 (2014)
Rosman, G., Bronstein, A.M., Bronstein, M.M., Sidi, A., Kimmel, R.: Fast multidimensional scaling using vector extrapolation. Technical report, Computer Science Department, Technion (2008)
Schoenberg, I.J.: Remarks to Maurice Fréchet's article "Sur la définition axiomatique d'une classe d'espace distanciés vectoriellement applicable sur l'espace de Hilbert". Ann. Math. 36(3), 724–732 (1935)
Toh, K.C.: An inexact primal-dual path-following algorithm for convex quadratic SDP. Math. Program. 112(1), 221–254 (2008)
Torgerson, W.S.: Multidimensional scaling: I. Theory and method. Psychometrika 17(4), 401–419 (1952)
Young, G., Householder, A.S.: Discussion of a set of points in terms of their mutual distances. Psychometrika 3(1), 19–22 (1938)
Zhai, F.Z., Li, Q.N.: A Euclidean distance matrix model for protein molecular conformation. J. Global Optim. (2019)
Zhou, S.L., Xiu, N.H., Qi, H.D.: A fast matrix majorization-projection method for constrained stress minimization in MDS. IEEE Trans. Signal Process. 66(3), 4331–4346 (2018)
Zhou, S.L., Xiu, N.H., Qi, H.D.: Robust Euclidean embedding via EDM optimization. Math. Program. Comput. (2019)
Q.-N. Li's research was supported by the National Science Foundation of China (No. 11671036).
Appendix: Roadmap of proof
Since \({\mathcal {K}}_+^n(n-2)\subset {\mathcal {K}}_+^n(n-1)={\mathcal {K}}_+^n\), we only need to show that the result holds for \(r=n-2\). Given an arbitrary set of ordinal constraints, we will construct n points in \(\mathbb {R}^{n-2}\) such that the largest pairwise distance is strictly larger than all the others, which are equal to each other. By relabelling the vertices properly, the resulting EDM can be made to satisfy the required ordinal constraints.
To construct the required points, we take inspiration from the regular simplex, all of whose edges are equal. We start from a regular simplex in \(\mathbb {R}^{n-3}\) and add two extra vertices in a higher-dimensional space such that each extra vertex, together with the original regular simplex, forms a regular simplex in \(\mathbb {R}^{n-2}\). The distance between the two additional vertices is then shown to be larger than all the remaining distances. The result is a polyhedron formed by two regular simplices in \(\mathbb {R}^{n-2}\) that share \(n-2\) vertices. The n vertices of this polyhedron are the points we are looking for. Figures 10, 11 and 12 illustrate the process for \(n=5\).
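To make this construction concrete for \(n=5\), the following small numerical sketch (an illustration added here, not code from the paper) builds an equilateral triangle with edge length t, lifts it to \(\mathbb {R}^3\), and places the two extra vertices above and below its centroid at height \(h=\sqrt{t^2-b^2}\), where b is the circumradius. The resulting distance matrix then has a single strictly largest entry, between the two added vertices, and all other entries equal to t.

```python
import numpy as np
from itertools import combinations

t = 1.0  # common edge length
# Equilateral triangle in R^2 with edge length t, centred at the origin.
tri = np.array([[0.0, 0.0], [t, 0.0], [t / 2, t * np.sqrt(3) / 2]])
tri -= tri.mean(axis=0)
b = np.linalg.norm(tri[0])   # circumradius, t/sqrt(3)
h = np.sqrt(t**2 - b**2)     # height making every apex-to-vertex distance equal to t

pts = np.zeros((5, 3))
pts[:3, :2] = tri            # the triangle, lifted to R^3
pts[3] = [0.0, 0.0, h]       # apex above the centroid
pts[4] = [0.0, 0.0, -h]      # apex below the centroid

d = {(i, j): np.linalg.norm(pts[i] - pts[j]) for i, j in combinations(range(5), 2)}
print(round(d[(3, 4)], 4))   # 2h ~ 1.633 > t: the unique largest distance
print({k: round(v, 4) for k, v in d.items() if k != (3, 4)})  # all equal to t = 1
```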
Before we give the details of the proof of Theorem 3, we start with the following lemmas.
Lemma 2
Let \(\{b_k\}\) be a nonnegative sequence generated as follows
where \(t>0\). Then \(\{b_k\}\) is an increasing sequence and converges to \(\frac{\sqrt{2}}{2}t\).
Proof of Lemma 2
The update in (42) implies that
The second equation in (42) also shows that \(\{b_k\}\) is increasing, i.e.,
In other words, \(\{b_k\}\) is an increasing sequence with upper bound t. Therefore, \(\{b_k\}\) has a limit as \(k\rightarrow \infty\). Denote this limit by b. By taking the limit in the second equation of (42), we get
This gives the solution \(b=\frac{\sqrt{2}}{2}t\). The proof is finished. \(\square\)
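The displayed recurrence (42) is not reproduced above. Assuming it takes the form suggested by Lemma 3 and by the initial value \(b_1=t/2\), namely \(b_{k+1}=\frac{k+1}{k+2}\sqrt{t^2-b_k^2}\), the limit computation reads
\[
b=\lim_{k\rightarrow \infty }b_{k+1}=\lim_{k\rightarrow \infty }\frac{k+1}{k+2}\sqrt{t^2-b_k^2}=\sqrt{t^2-b^2}
\quad \Longrightarrow \quad 2b^2=t^2
\quad \Longrightarrow \quad b=\frac{\sqrt{2}}{2}t,
\]
where the negative root is discarded because the sequence is nonnegative.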
Remark
Equations in (42) imply that for all \(k=1,2,\ldots\),
Lemma 3
Let \(\{x_1^{(k)},\ldots ,x_{k+1}^{(k)}\}\subseteq \mathbb {R}^k\) be a set of points satisfying
where the superscript stands for the dimension of the vectors. Denote the centroid of \(\{x_1^{(k)},\ldots , x_{k+1}^{(k)}\}\) by \(O^{(k)}\), and let the distance between \(O^{(k)}\) and each \(x_i^{(k)}\) be \(b_k\). Then we have the following equations
Proof of Lemma 3
Without loss of generality, let \(O^{(k)}\) lie at the origin. Then \(b_k=\Vert O^{(k)}-x_i^{(k)}\Vert , \ \ i=1,\ldots ,k+1\). Now define \(\{x_1^{(k+1)},\ldots ,x_{k+2}^{(k+1)}\}\subseteq \mathbb {R}^{k+1}\) as follows
By letting \(h_{k+1}=\sqrt{t^2-b_k^2}\), we have
Let the centroid of \(\{x_1^{(k+1)},\ldots ,x_{k+2}^{(k+1)}\}\) be \(O^{(k+1)}\). Then
Notice
We get \(O^{(k+1)}=\frac{k+1}{k+2}\hat{O}^{(k)}+\frac{1}{k+2}x_{k+2}^{(k+1)}\) which implies that \(O^{(k+1)}\) lies between \(\hat{O}^{(k)}\) and \(x_{k+2}^{(k+1)}\). The geometric relation of \(O^{(k+1)},\hat{O}^{(k)},x_i^{(k+1)},x_{k+2}^{(k+1)}\) is demonstrated in Fig. 9.
Consequently, we have the following relationship
In particular, if \(k=1\), then \(b_1=\frac{t}{2}\). The proof is finished. \(\square\)
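As a numerical sanity check of Lemmas 2 and 3 (a sketch added here, not part of the original proof), one can grow the regular simplices recursively exactly as above: lift the current simplex by one dimension, add an apex at height \(h_{k+1}=\sqrt{t^2-b_k^2}\) over the centroid, recenter, and record \(b_k\). The recorded values increase towards \(\frac{\sqrt{2}}{2}t\).

```python
import numpy as np

t = 1.0
pts = np.array([[-t / 2], [t / 2]])       # regular 1-simplex (segment) centred at the origin

for k in range(1, 8):
    b_k = np.linalg.norm(pts[0])          # common centroid-to-vertex distance b_k
    h = np.sqrt(t**2 - b_k**2)            # height of the next apex
    lifted = np.hstack([pts, np.zeros((len(pts), 1))])   # embed the simplex in R^{k+1}
    apex = np.append(np.zeros(k), h)      # new vertex above the old centroid
    pts = np.vstack([lifted, apex])
    pts -= pts.mean(axis=0)               # recenter at the new centroid O^{(k+1)}
    print(k, round(b_k, 6))               # b_k increases towards t/sqrt(2) ~ 0.707107
```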
Next we give the proof of Theorem 3.
Proof of Theorem 3
Since \({\mathcal {K}}_+^n(n-2)\subset {\mathcal {K}}_+^n(n-1)={\mathcal {K}}_+^n\), we only need to show that the result holds for \(r=n-2\).
Let \(r=n-2\). To show the result, let us first construct a nontrivial feasible solution D satisfying
To this end, we first pick a set of points \(\{ x_1^{(n-3)},\ldots ,x_{n-2}^{(n-3)} \}\subseteq \mathbb {R}^{n-3}\) satisfying (see Fig. 10 for \(n=5\))
The existence of such points has already been established in Theorem 1. Moreover, denote the centroid of \(x_1^{(n-3)}, \ldots , x_{n-2}^{(n-3)}\) by \(O^{(n-3)}\) and suppose that \(O^{(n-3)}\) is located at the origin. As defined above, let
Now let \(x_i^{(n-2)}\in \mathbb {R}^{n-2},\ i=1,\ldots ,n-1\) be generated as follows (see Figs. 11 and 12 for \(n=5\))
Then we have
where
By letting \(h=\sqrt{t^2-b_{n-3}^2}\), we get
Next, we will show that
By Lemma 3, we have (43) for \(b_k\) and \(b_{k+1}\), implying that \(b_{n-3}<\frac{\sqrt{2}t}{2}\) by Lemma 2. Therefore, \(h=\sqrt{t^2-b_{n-3}^2}>\frac{\sqrt{2}t}{2}\) for all \(n\ge 4\). In other words,
Consequently, the EDM generated by \(\{x_1^{(n-2)},\ldots ,x_n^{(n-2)}\}\subset \mathbb {R}^{n-2}\) satisfies (45). For any \(\pi (n)=\{(i_1,j_1),\ldots ,(i_m,j_m)\}\), to get a nontrivial solution of \(F(\pi (n),n-2)\), we construct
Then \(\hat{D}\) is a nontrivial feasible solution. \(\square\)
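For readers who wish to check the construction numerically, the following sketch (added here for illustration; it is not code from the paper, and the exact forms of (44), (45) and \(F(\pi (n),n-2)\) are as stated in the paper) builds the n points in \(\mathbb {R}^{n-2}\) for a general n via the recursive procedure of Lemma 3, forms the squared distance matrix, and verifies that the embedding dimension is \(n-2\) and that exactly one pairwise distance is strictly larger than the common value t.

```python
import numpy as np

def construct_points(n, t=1.0):
    """n points in R^{n-2}: all pairwise distances equal t except the
    strictly larger one between the last two points."""
    pts = np.array([[-t / 2], [t / 2]])                 # regular simplex in R^1
    for k in range(1, n - 3):                           # grow to n-2 vertices in R^{n-3}
        b = np.linalg.norm(pts[0])
        pts = np.hstack([pts, np.zeros((len(pts), 1))])
        pts = np.vstack([pts, np.append(np.zeros(k), np.sqrt(t**2 - b**2))])
        pts -= pts.mean(axis=0)
    b = np.linalg.norm(pts[0])
    h = np.sqrt(t**2 - b**2)                            # height of the two extra vertices
    pts = np.hstack([pts, np.zeros((len(pts), 1))])     # lift to R^{n-2}
    return np.vstack([pts,
                      np.append(np.zeros(n - 3), h),
                      np.append(np.zeros(n - 3), -h)])

n = 7
X = construct_points(n)
D = np.square(X[:, None, :] - X[None, :, :]).sum(-1)    # squared Euclidean distance matrix
J = np.eye(n) - np.ones((n, n)) / n
print(np.linalg.matrix_rank(-0.5 * J @ D @ J))          # embedding dimension: n - 2
dists = np.round(np.sqrt(D[np.triu_indices(n, 1)]), 6)
print(sorted(set(dists)))                               # two values: t and the larger apex-to-apex distance
```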
Cite this article
Lu, ST., Zhang, M. & Li, QN. Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints. Comput Optim Appl 76, 535–569 (2020). https://doi.org/10.1007/s10589-020-00189-9