Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints

Computational Optimization and Applications
Abstract

Euclidean distance matrix optimization with ordinal constraints (EDMOC) has found important applications in sensor network localization and molecular conformation. It can also be viewed as a matrix formulation of multidimensional scaling, which is to embed n points in an r-dimensional space such that the resulting distances follow the ordinal constraints. The ordinal constraints, though proved to be quite useful, may admit only the zero solution when too many are added, leaving the feasibility of EDMOC as a question. In this paper, we first study the feasibility of EDMOC systematically. We show that if \(r\ge n-2\), EDMOC always admits a nontrivial solution. Otherwise, it may have only the zero solution. The latter explains the numerical observations of the 'crowding phenomenon'. Next we overcome two obstacles in designing fast algorithms for EDMOC, i.e., the low-rankness and the potentially huge number of ordinal constraints. We apply the technique developed in Zhou et al. (IEEE Trans Signal Process 66(3):4331–4346, 2018) to treat the low rank constraint as the conditional positive semidefinite cone with rank cut. This leads to a majorization penalty approach. The ordinal constraints are left to the subproblem, which is exactly the weighted isotonic regression, and can be solved by an enhanced implementation of the Pool Adjacent Violators Algorithm (PAVA). Extensive numerical results demonstrate the superior performance of the proposed approach over some state-of-the-art solvers.
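The subproblem mentioned above is a weighted isotonic regression, which PAVA solves in linear time once the data are in the required order. As a minimal illustration of the idea (a plain-Python sketch, not the enhanced implementation used in the paper; `pava` is a name chosen here):

```python
def pava(y, w):
    """Weighted isotonic regression by Pool Adjacent Violators:
    minimize sum_i w_i*(x_i - y_i)^2 subject to x_1 <= ... <= x_m."""
    # Each block stores [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Pool the last two blocks while they violate the ordering.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    # Expand the pooled blocks back to a full-length solution.
    x = []
    for mean, _, count in blocks:
        x.extend([mean] * count)
    return x
```

For example, `pava([3, 1, 2, 4], [1, 1, 1, 1])` pools the first three entries to their mean, giving the monotone fit `[2, 2, 2, 4]`.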


Notes

  1. The code can be downloaded from http://www-stat.stanford.edu/candes/SortedSL1

  2. The code can be downloaded from http://tosca.cs.technion.ac.il

References

  1. Bai, S.H., Qi, H.D.: Tackling the flip ambiguity in wireless sensor network localization and beyond. Digital Signal Process. 55, 85–97 (2016)

  2. Barlow, R.E., Bartholomew, D.J., Bremner, J.M., Brunk, H.D.: Statistical Inference Under Order Restrictions: The Theory and Application of Isotonic Regression. Wiley, New York (1973)

  3. Berman, H.M., Westbrook, J., Feng, Z.K., Gilliland, G., Bhat, T.N., Weissig, H., Shindyalov, I.N., Bourne, P.E.: The Protein Data Bank. Nucl. Acids Res. 28(1), 235–242 (2000)

  4. Biswas, P., Liang, T.C., Toh, K.C., Ye, Y., Wang, T.C.: Semidefinite programming approaches for sensor network localization with noisy distance measurements. IEEE Trans. Autom. Sci. Eng. 3(4), 360–371 (2006)

  5. Biswas, P., Ye, Y.: Semidefinite programming for ad hoc wireless sensor network localization. In: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, pp. 46–54 (2004)

  6. Bogdan, M., van den Berg, E., Sabatti, C., Su, W., Candès, E.J.: SLOPE—adaptive variable selection via convex optimization. Ann. Appl. Stat. 9(3), 1103–1140 (2015)

  7. Borg, I., Groenen, P.J.F.: Modern multidimensional scaling: theory and applications. J. Educ. Meas. 40(3), 277–280 (2010)

  8. Borg, I., Groenen, P.J.F.: Modern Multidimensional Scaling. Springer, Berlin (2005)

  9. Cox, T.F., Cox, M.A.A.: Multidimensional Scaling. Chapman and Hall/CRC, London (2000)

  10. Dattorro, J.: Convex Optimization and Euclidean Distance Geometry. Meboo, Mountain View (2005)

  11. De Leeuw, J.: Applications of convex analysis to multidimensional scaling. In: Recent Developments in Statistics, pp. 133–146 (2011)

  12. De Leeuw, J., Mair, P.: Multidimensional scaling using majorization: SMACOF in R. J. Stat. Softw. 31(3), 1–30 (2009)

  13. Ding, C., Qi, H.D.: Convex Euclidean distance embedding for collaborative position localization with NLOS mitigation. Comput. Optim. Appl. 66(1), 187–218 (2017)

  14. Elte, E.L.: The Semiregular Polytopes of the Hyperspaces. Hoitsema, Groningen (1912)

  15. Fang, X.Y., Toh, K.C.: Using a distributed SDP approach to solve simulated protein molecular conformation problems. In: Distance Geometry, pp. 351–376. Springer (2013)

  16. Gao, Y., Sun, D.F.: Calibrating least squares covariance matrix problems with equality and inequality constraints. SIAM J. Matrix Anal. Appl. 31(3), 1432–1457 (2009)

  17. Glunt, W., Hayden, T.L., Raydan, M.: Molecular conformations from distance matrices. J. Comput. Chem. 14(1), 114–120 (1993)

  18. Gower, J.C.: Properties of Euclidean and non-Euclidean distance matrices. Linear Algebra Appl. 67, 81–97 (1985)

  19. Kruskal, J.B.: Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika 29(1), 1–27 (1964)

  20. Kruskal, J.B.: Nonmetric multidimensional scaling: a numerical method. Psychometrika 29(2), 115–129 (1964)

  21. Leung, N.-H.Z., Toh, K.C.: An SDP-based divide-and-conquer algorithm for large-scale noisy anchor-free graph realization. SIAM J. Sci. Comput. 31(6), 4351–4372 (2009)

  22. Li, Q.N., Qi, H.D.: An inexact smoothing Newton method for Euclidean distance matrix optimization under ordinal constraints. J. Comput. Math. 35(4), 469–485 (2017)

  23. Liberti, L., Lavor, C., Maculan, N., Mucherino, A.: Euclidean distance geometry and applications. SIAM Rev. 56(1), 3–69 (2014)

  24. Mordukhovich, B.S.: Variational Analysis and Generalized Differentiation I. Springer, Berlin (2006)

  25. Qi, H.D.: A semismooth Newton's method for the nearest Euclidean distance matrix problem. SIAM J. Matrix Anal. Appl. 34(1), 67–93 (2013)

  26. Qi, H.D.: Conditional quadratic semidefinite programming: examples and methods. J. Oper. Res. Soc. China 2(2), 143–170 (2014)

  27. Qi, H.D., Xiu, N.H., Yuan, X.M.: A Lagrangian dual approach to the single-source localization problem. IEEE Trans. Signal Process. 61(15), 3815–3826 (2013)

  28. Qi, H.D., Yuan, X.M.: Computing the nearest Euclidean distance matrix with low embedding dimensions. Math. Program. 147(1–2), 351–389 (2014)

  29. Rosman, G., Bronstein, A.M., Bronstein, M.M., Sidi, A., Kimmel, R.: Fast multidimensional scaling using vector extrapolation. Technical report, Computer Science Department, Technion (2008)

  30. Schoenberg, I.J.: Remarks to Maurice Fréchet's article "Sur la définition axiomatique d'une classe d'espaces distanciés vectoriellement applicables sur l'espace de Hilbert". Ann. Math. 36(3), 724–732 (1935)

  31. Toh, K.C.: An inexact primal-dual path-following algorithm for convex quadratic SDP. Math. Program. 112(1), 221–254 (2008)

  32. Torgerson, W.S.: Multidimensional scaling: I. Theory and method. Psychometrika 17(4), 401–419 (1952)

  33. Young, G., Householder, A.S.: Discussion of a set of points in terms of their mutual distances. Psychometrika 3(1), 19–22 (1938)

  34. Zhai, F.Z., Li, Q.N.: A Euclidean distance matrix model for protein molecular conformation. J. Glob. Optim. (2019)

  35. Zhou, S.L., Xiu, N.H., Qi, H.D.: A fast matrix majorization-projection method for constrained stress minimization in MDS. IEEE Trans. Signal Process. 66(3), 4331–4346 (2018)

  36. Zhou, S.L., Xiu, N.H., Qi, H.D.: Robust Euclidean embedding via EDM optimization. Math. Program. Comput. (2019)


Author information

Corresponding author

Correspondence to Qing-Na Li.

Additional information


The research of Q.-N. Li was supported by the National Science Foundation of China (No. 11671036).

Appendix: Roadmap of proof

Since \({\mathcal {K}}_+^n(n-2)\subset {\mathcal {K}}_+^n(n-1)={\mathcal {K}}_+^n\), we only need to show that the result holds for \(r=n-2\). For any ordinal constraints, we will construct n points in \(\mathbb {R}^{n-2}\) such that the largest pairwise distance is strictly larger than the others, which are all equal to each other. By relabelling the vertices properly, the resulting EDM then satisfies the required ordinal constraints.

To construct the points as required, we take inspiration from the regular simplex, all of whose edges are equal. We start from a regular simplex in \(\mathbb {R}^{n-3}\) and add two extra vertices in a higher dimensional space, such that each extra vertex together with the original regular simplex forms a regular simplex in \(\mathbb {R}^{n-2}\). The distance between the two additional vertices is then shown to be larger than the remaining distances. The result is a polyhedron formed by two regular simplices in \(\mathbb {R}^{n-2}\) that share \(n-2\) vertices; its n vertices are the points we are looking for. Figures 10, 11 and 12 illustrate the process for \(n=5\).

Before we give the details of the proof of Theorem 3, we start with the following lemmas.

Lemma 2

Let \(\{b_k\}\) be a nonnegative sequence generated as follows

$$\begin{aligned} \left\{ \begin{array}{ll} b_1 = \frac{t}{2}, &{} \\ b_k^2 +\left( \sqrt{t^2-b_k^2}-b_{k+1}\right) ^2 = b_{k+1}^2, &{} \quad k=1,2,\ldots , \end{array} \right. \end{aligned}$$
(42)

where \(t>0\). Then \(\{b_k\}\) is an increasing sequence and converges to \(\frac{\sqrt{2}}{2}t\).

Proof of Lemma 2

The update in (42) implies that

$$\begin{aligned} 0\le b_k \le t,\quad k=1,2,\ldots . \end{aligned}$$

The second equation in (42) also gives the increasing order of \(\{b_k\}\), i.e.,

$$\begin{aligned} b_1<b_2<\cdots<b_k<\ldots . \end{aligned}$$

In other words, \(\{b_k\}\) is an increasing sequence bounded above by t. Therefore, \(\{b_k\}\) has a limit as \(k\rightarrow \infty\); denote it by b. Taking limits in the second equation of (42), we get

$$\begin{aligned} t^2=2b\sqrt{t^2-b^2}. \end{aligned}$$

This gives the solution \(b=\frac{\sqrt{2}}{2}t\). The proof is finished. \(\square\)

Remark

Since \(\{b_k\}\) is increasing with limit \(\frac{\sqrt{2}}{2}t\), the equations in (42) imply that for all \(k=1,2,\ldots\),

$$\begin{aligned} b_k<\frac{\sqrt{2}}{2}t. \end{aligned}$$
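The recursion (42) is easy to check numerically: solving its second equation for \(b_{k+1}\) gives \(b_{k+1}=\frac{t^2}{2\sqrt{t^2-b_k^2}}\). The following sketch (not part of the paper; `circumradius_seq` is a name chosen here) iterates this and exhibits the behaviour stated in Lemma 2 and the remark:

```python
import math

def circumradius_seq(t, n):
    """Iterate the recursion (42): b_1 = t/2 and
    b_{k+1} = t^2 / (2*sqrt(t^2 - b_k^2)),
    the explicit form of b_k^2 + (sqrt(t^2-b_k^2) - b_{k+1})^2 = b_{k+1}^2."""
    b = [t / 2.0]
    for _ in range(n - 1):
        b.append(t * t / (2.0 * math.sqrt(t * t - b[-1] ** 2)))
    return b

b = circumradius_seq(1.0, 50)
# The sequence increases, stays strictly below sqrt(2)/2 * t, and
# approaches that bound as k grows, matching Lemma 2 and the remark.
```

In fact the iterates agree with the circumradius \(b_k=t\sqrt{\frac{k}{2(k+1)}}\) of a regular simplex with \(k+1\) vertices of edge length t.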

Lemma 3

Let \(\{x_1^{(k)},\ldots ,x_{k+1}^{(k)}\}\subseteq \mathbb {R}^k\) be a set of points satisfying

$$\begin{aligned} \Vert x_i^{(k)}-x_j^{(k)}\Vert =t,\, i\ne j, \end{aligned}$$
(43)

where the superscript stands for the dimension of the vectors. Denote the centroid of \(\{x_1^{(k)},\ldots , x_{k+1}^{(k)}\}\) by \(O^{(k)}\), and let the distance between \(O^{(k)}\) and \(x_i^{(k)}\) be \(b_k\). Then the following equations hold:

$$\begin{aligned} \left\{ \begin{array}{ll} b_1 = \frac{t}{2}, &{} \\ b_k^2 +\left( \sqrt{t^2-b_k^2}-b_{k+1}\right) ^2 = b_{k+1}^2, &{} \quad k=1,2,\ldots . \end{array} \right. \end{aligned}$$

Proof of Lemma 3

Without loss of generality, let \(O^{(k)}\) be located at the origin. Then \(b_k=\Vert O^{(k)}-x_i^{(k)}\Vert ,\ \ i=1,\ldots ,k+1\). Now define \(\{x_1^{(k+1)},\ldots ,x_{k+2}^{(k+1)}\}\subseteq \mathbb {R}^{k+1}\) as follows

$$\begin{aligned} x_i^{(k+1)}= \begin{bmatrix} x_i^{(k)} \\ 0 \end{bmatrix}, \quad i=1,\ldots ,k+1,\qquad {x^{(k+1)}_{k+2}}= \begin{bmatrix} {O^{(k)}} \\ {h_{k+1}} \end{bmatrix}, \ \ h_{k+1}>0. \end{aligned}$$

By letting \(h_{k+1}=\sqrt{t^2-b_k^2}\), we have

$$\begin{aligned} \Vert x_i^{(k+1)}-x_j^{(k+1)}\Vert =t,\quad \forall \ \ i<j, \ \ i,\ j=1,\ldots ,k+2. \end{aligned}$$
(44)

Let the centroid of \(\{x_1^{(k+1)},\ldots ,x_{k+2}^{(k+1)}\}\) be \(O^{(k+1)}\). Then

$$\begin{aligned} O^{(k+1)}=\frac{1}{k+2}\sum _{i=1}^{k+2}x^{(k+1)}_i. \end{aligned}$$

Notice

$$\begin{aligned} {\hat{O}^{(k)}}:=\begin{bmatrix} O^{(k)}\\ 0 \end{bmatrix} =\frac{1}{k+1}\sum _{i=1}^{k+1}x^{(k+1)}_i. \end{aligned}$$

We get \(O^{(k+1)}=\frac{k+1}{k+2}\hat{O}^{(k)}+\frac{1}{k+2}x_{k+2}^{(k+1)}\) which implies that \(O^{(k+1)}\) lies between \(\hat{O}^{(k)}\) and \(x_{k+2}^{(k+1)}\). The geometric relation of \(O^{(k+1)},\hat{O}^{(k)},x_i^{(k+1)},x_{k+2}^{(k+1)}\) is demonstrated in Fig. 9.

Fig. 9

Geometric relation of \(O^{(k+1)},\ \hat{O}^{(k)},\ x_i^{(k+1)},\ x_{k+2}^{(k+1)}\)

Consequently, from the right triangle in Fig. 9 we have the relationship

$$\begin{aligned} b_{k+1}^2=b_k^2+\left( \sqrt{t^2-b_k^2}-b_{k+1}\right) ^2. \end{aligned}$$

In particular, for \(k=1\) the centroid of two points at distance t is their midpoint, so \(b_1=\frac{t}{2}\). The proof is finished. \(\square\)
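The inductive step of Lemma 3 can be mirrored directly in coordinates: lift the current simplex by one dimension and place the new apex at height \(h_{k+1}=\sqrt{t^2-b_k^2}\) above the centroid. A small numerical sketch (numpy assumed; `regular_simplex` is an illustrative name, not from the paper):

```python
import numpy as np

def regular_simplex(k, t=1.0):
    """Build k+1 points in R^k with all pairwise distances equal to t,
    following the induction of Lemma 3: embed the previous simplex in one
    higher dimension and place a new apex above its centroid."""
    pts = np.array([[-t / 2.0], [t / 2.0]])  # two points in R^1, distance t
    for d in range(2, k + 1):
        pts = np.hstack([pts, np.zeros((pts.shape[0], 1))])  # lift to R^d
        centroid = pts.mean(axis=0)
        b = np.linalg.norm(pts[0] - centroid)  # current circumradius b_{d-1}
        apex = centroid.copy()
        apex[-1] = np.sqrt(t * t - b * b)      # height h_d above the centroid
        pts = np.vstack([pts, apex])
    return pts
```

For \(k=3\) this produces the four vertices of a regular tetrahedron, and every apex-to-vertex distance equals t because \(b^2+h^2=t^2\) by construction.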

Next we give the proof of Theorem 3.

Proof of Theorem 3

Note that \({\mathcal {K}}_+^n(n-2)\subset {\mathcal {K}}_+^n(n-1)={\mathcal {K}}_+^n\), so we only need to show that the result holds for \(r=n-2\).

Let \(r=n-2\). We first construct a nontrivial feasible solution D satisfying

$$\begin{aligned} \left\{ \begin{array}{ll} D_{12}>D_{ij}, &{}\quad \forall \ (i,j)\ne (1,2),\ i<j, \\ D_{ij}=D_{sk}, &{}\quad \forall \ (i,j),\ (s,k)\ne (1,2),\ i<j,\ s<k. \end{array} \right. \end{aligned}$$
(45)

To this end, we first pick a set of points \(\{ x_1^{(n-3)},\ldots ,x_{n-2}^{(n-3)} \}\subseteq \mathbb {R}^{n-3}\) satisfying (see Fig. 10 for \(n=5\))

$$\begin{aligned} \Vert x_i^{(n-3)}-x_j^{(n-3)}\Vert =t,\quad \forall \ \ i<j,\, i,\ j=1,\ldots ,{n-2}. \end{aligned}$$

This is already proved in Theorem 1. Moreover, denote the centroid of \(x_1^{(n-3)}, \ldots , x_{n-2}^{(n-3)}\) by \(O^{(n-3)}\), and suppose \(O^{(n-3)}\) is located at the origin. As defined above, let

$$\begin{aligned} b_{n-3}=\Vert O^{(n-3)}-x_i^{(n-3)}\Vert , \quad \forall \ i=1,\ldots ,n-2. \end{aligned}$$

Now let \(x_i^{(n-2)}\in \mathbb {R}^{n-2},\ i=1,\ldots ,n-1\) be generated as follows (see Figs. 11 and 12 for \(n=5\))

$$\begin{aligned} x_i^{(n-2)}= \begin{bmatrix} x_i^{(n-3)} \\ 0 \end{bmatrix}, \quad i=1,\ldots ,n-2,\ \ x_{n-1}^{(n-2)}= \begin{bmatrix} {O^{(n-3)}} \\ h \end{bmatrix} , \qquad x_n^{(n-2)}= \begin{bmatrix} {O^{(n-3)}} \\ -h \end{bmatrix} , \quad h>0. \end{aligned}$$

Then we have

$$\begin{aligned} (\hat{O}^{(n-3)}-x_{n-1}^{(n-2)})^T(x_i^{(n-2)}-x_j^{(n-2)})=0,\quad \forall \, i<j,\ \ i,\ j=1,\ldots ,{n-2}, \end{aligned}$$

where

$$\begin{aligned} {\hat{O}^{(n-3)}=\begin{bmatrix} O^{(n-3)}\\ 0 \end{bmatrix} \in \mathbb {R}^{n-2}.} \end{aligned}$$

By letting \(h=\sqrt{t^2-b_{n-3}^2}\), we get

$$\begin{aligned} \Vert x_i^{(n-2)}-x_j^{(n-2)}\Vert =t,\quad \forall \ i<j, \ \ i,\ j=1,\ldots ,n,\ \ (i,j)\ne {(n-1,n)}. \end{aligned}$$

Next, we will show that

$$\begin{aligned} \Vert x_{n-1}^{(n-2)}-x_{n}^{(n-2)}\Vert >t. \end{aligned}$$

By Lemma 3, \(b_{n-3}\) satisfies the recursion (42), and hence \(b_{n-3}<\frac{\sqrt{2}}{2}t\) by the remark after Lemma 2. Therefore, \(h=\sqrt{t^2-b_{n-3}^2}>\frac{\sqrt{2}}{2}t\) for all \(n\ge 4\). In other words,

$$\begin{aligned} \Vert x_n^{(n-2)}-x_{n-1}^{(n-2)}\Vert =2h>\sqrt{2}t>t. \end{aligned}$$

Consequently, the EDM generated by \(\{x_1^{(n-2)},\ldots ,x_n^{(n-2)}\}\subset \mathbb {R}^{n-2}\) satisfies (45). Now, for any ordinal constraints \(\pi (n)=\{(i_1,j_1),\ldots ,(i_m,j_m)\}\), to obtain a nontrivial solution of \(F(\pi (n),n-2)\), we construct

$$\begin{aligned} \hat{D}_{i_1j_1}=\Vert x_n^{(n-2)}-x_{n-1}^{(n-2)}\Vert ^2, \quad \hat{D}_{ij}=\Vert x_i^{(n-2)}-x_j^{(n-2)}\Vert ^2, \quad (i,j)\in \pi (n)\backslash \{(i_1,j_1)\}. \end{aligned}$$

Then \(\hat{D}\) is a nontrivial feasible solution. \(\square\)
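The whole construction can be checked numerically for small n: build the regular simplex with \(n-2\) vertices in \(\mathbb {R}^{n-3}\), centre it at the origin, lift it to \(\mathbb {R}^{n-2}\), and append the two apexes at heights \(\pm h\). A sketch following those steps (numpy assumed; `theorem3_points` is a name chosen here, not from the paper):

```python
import numpy as np

def theorem3_points(n, t=1.0):
    """n points in R^{n-2} (n >= 4) whose pairwise distances all equal t,
    except d(x_{n-1}, x_n) = 2h > t, as in the proof of Theorem 3."""
    # Regular simplex with n-2 vertices in R^{n-3}, built inductively.
    pts = np.array([[-t / 2.0], [t / 2.0]])
    for d in range(2, n - 2):
        pts = np.hstack([pts, np.zeros((pts.shape[0], 1))])
        c = pts.mean(axis=0)
        b = np.linalg.norm(pts[0] - c)
        apex = c.copy()
        apex[-1] = np.sqrt(t * t - b * b)
        pts = np.vstack([pts, apex])
    pts -= pts.mean(axis=0)            # centre O^{(n-3)} at the origin
    b = np.linalg.norm(pts[0])         # circumradius b_{n-3}
    h = np.sqrt(t * t - b * b)         # apex height, h > t/sqrt(2)
    pts = np.hstack([pts, np.zeros((pts.shape[0], 1))])  # lift to R^{n-2}
    top = np.zeros(n - 2); top[-1] = h
    bot = np.zeros(n - 2); bot[-1] = -h
    return np.vstack([pts, top, bot])
```

For \(n=4\) this gives four points in the plane: the two apexes sit at \((0,\pm \frac{\sqrt{3}}{2}t)\), so their distance is \(\sqrt{3}\,t>\sqrt{2}\,t\), while all other pairwise distances equal t.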

Fig. 10

\(x_1^{(2)},x_2^{(2)},x_3^{(2)}\)

Fig. 11

\(x_1^{(3)},\ldots , x_4^{(3)}\)

Fig. 12

\(x_1^{(3)},\ldots , x_5^{(3)}\)


Cite this article

Lu, ST., Zhang, M. & Li, QN. Feasibility and a fast algorithm for Euclidean distance matrix optimization with ordinal constraints. Comput Optim Appl 76, 535–569 (2020). https://doi.org/10.1007/s10589-020-00189-9
