Probabilistic Analysis of the Grassmann Condition Number

Foundations of Computational Mathematics
Abstract

We analyze the probability that a random m-dimensional linear subspace of \(\mathbb{R}^{n}\) both intersects a regular closed convex cone \(C\subseteq\mathbb{R}^{n}\) and lies within distance α of an m-dimensional subspace not intersecting C (except at the origin). The result is expressed in terms of the spherical intrinsic volumes of the cone C. This allows us to perform an average analysis of the Grassmann condition number for the homogeneous convex feasibility problem ∃x∈C∖{0}: Ax=0. The Grassmann condition number is a geometric version of Renegar’s condition number, which we introduced recently in Amelunxen and Bürgisser (SIAM J. Optim. 22(3):1029–1041 (2012)). We thus give the first average analysis of convex programming that is not restricted to linear programming. In particular, we prove that if the entries of \(A\in\mathbb {R}^{m\times n}\) are chosen i.i.d. standard normal, then for any regular cone C the expected logarithm of the Grassmann condition number grows at most logarithmically in n. The proofs rely on various techniques from Riemannian geometry applied to Grassmann manifolds.


Notes

  1. In fact, (P) and (D) are only weak alternatives, as it may be that both (P) and (D) are satisfiable. But the Lebesgue measure of the set of these ill-posed inputs in \(\mathbb{R}^{m\times n}\) is zero.

References

  1. D. Amelunxen, Geometric analysis of the condition of the convex feasibility problem, Ph.D. thesis, Univ. Paderborn (2011).

  2. D. Amelunxen, P. Bürgisser, Intrinsic volumes of symmetric cones, arXiv:1205.1863.

  3. D. Amelunxen, P. Bürgisser, A coordinate-free condition number for convex programming, SIAM J. Optim. 22(3), 1029–1041 (2012).

  4. A. Belloni, R.M. Freund, A geometric analysis of Renegar’s condition number, and its interplay with conic curvature, Math. Program., Ser. A 119(1), 95–107 (2009).

  5. T. Bonnesen, W. Fenchel, Theorie der konvexen Körper (Springer, Berlin, 1974). Berichtigter Reprint.

  6. W.M. Boothby, An Introduction to Differentiable Manifolds and Riemannian Geometry, 2nd edn. Pure and Applied Mathematics, Vol. 120 (Academic Press, Orlando, 1986).

  7. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004).

  8. P. Bürgisser, Smoothed analysis of condition numbers, in Foundations of Computational Mathematics, Hong Kong, 2008. London Math. Soc. Lecture Note Ser., Vol. 363 (Cambridge University Press, Cambridge, 2009), pp. 1–41.

  9. P. Bürgisser, F. Cucker, Condition: The Geometry of Numerical Algorithms. Grundlehren der Mathematischen Wissenschaften, Bd. 349 (Springer, Berlin, 2013).

  10. P. Bürgisser, F. Cucker, M. Lotz, The probability that a slightly perturbed numerical analysis problem is difficult, Math. Comp. 77(263), 1559–1583 (2008).

  11. I. Chavel, Riemannian Geometry: A Modern Introduction, 2nd edn. Cambridge Studies in Advanced Mathematics, Vol. 98 (Cambridge University Press, Cambridge, 2006).

  12. Z. Chen, J.J. Dongarra, Condition numbers of Gaussian random matrices, SIAM J. Matrix Anal. Appl. 27(3), 603–620 (2005) (electronic).

  13. D. Cheung, F. Cucker, A new condition number for linear programming, Math. Program., Ser. A 91(1), 163–174 (2001).

  14. F. Cucker, J. Peña, A primal-dual algorithm for solving polyhedral conic systems with a finite-precision machine, SIAM J. Optim. 12(2), 522–554 (2001/2002) (electronic).

  15. J.W. Demmel, The probability that a numerical analysis problem is difficult, Math. Comp. 50, 449–480 (1988).

  16. M.P. do Carmo, Riemannian Geometry. Mathematics: Theory & Applications (Birkhäuser Boston, Boston, 1992). Translated from the second Portuguese edition by Francis Flaherty.

  17. M. Epelman, R.M. Freund, A new condition measure, preconditioners, and relations between different measures of conditioning for conic linear systems, SIAM J. Optim. 12(3), 627–655 (2002) (electronic).

  18. H. Federer, Geometric Measure Theory. Die Grundlehren der mathematischen Wissenschaften, Bd. 153 (Springer, New York, 1969).

  19. S. Filipowski, On the complexity of solving feasible linear programs specified with approximate data, SIAM J. Optim. 9(4), 1010–1040 (1999). Dedicated to John E. Dennis Jr. on his 60th birthday.

  20. R.M. Freund, F. Ordóñez, On an extension of condition number theory to nonconic convex optimization, Math. Oper. Res. 30(1), 173–194 (2005).

  21. R.M. Freund, J.R. Vera, Condition-based complexity of convex optimization in conic linear form via the ellipsoid algorithm, SIAM J. Optim. 10(1), 155–176 (1999) (electronic).

  22. R.M. Freund, J.R. Vera, Some characterizations and properties of the “distance to ill-posedness” and the condition measure of a conic linear system, Math. Program., Ser. A 86(2), 225–260 (1999).

  23. F. Gao, D. Hug, R. Schneider, Intrinsic volumes and polar sets in spherical space, Math. Notae 41, 159–176 (2001/2002). Homage to Luis Santaló, Vol. 1 (Spanish).

  24. S. Glasauer, Integralgeometrie konvexer Körper im sphärischen Raum, Thesis, Univ. Freiburg i. Br. (1995).

  25. S. Glasauer, Integral geometry of spherically convex bodies, Diss. Summ. Math. 1(1–2), 219–226 (1996).

  26. G.H. Golub, C.F. Van Loan, Matrix Computations, 4th edn. Johns Hopkins Studies in the Mathematical Sciences (Johns Hopkins University Press, Baltimore, 2013).

  27. S. Helgason, Differential Geometry, Lie Groups, and Symmetric Spaces. Pure and Applied Mathematics, Vol. 80 (Academic Press/Harcourt Brace Jovanovich, New York, 1978).

  28. R.A. Horn, C.R. Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1990). Corrected reprint of the 1985 original.

  29. R. Howard, The kinematic formula in Riemannian homogeneous spaces, Mem. Amer. Math. Soc. 106(509), vi+69 (1993).

  30. D.A. Klain, G.-C. Rota, Introduction to Geometric Probability. Lezioni Lincee [Lincei Lectures] (Cambridge University Press, Cambridge, 1997).

  31. S. Kobayashi, K. Nomizu, Foundations of Differential Geometry, Vol. I (Interscience/Wiley, New York/London, 1963).

  32. F. Morgan, Geometric Measure Theory: A Beginner’s Guide, 2nd edn. (Academic Press, San Diego, 1995).

  33. M. Moszyńska, Selected Topics in Convex Geometry (Birkhäuser Boston, Boston, 2006). Translated and revised from the 2001 Polish original.

  34. J. Peña, Understanding the geometry of infeasible perturbations of a conic linear system, SIAM J. Optim. 10(2), 534–550 (2000) (electronic).

  35. J. Peña, A characterization of the distance to infeasibility under block-structured perturbations, Linear Algebra Appl. 370, 193–216 (2003).

  36. J. Peña, J. Renegar, Computing approximate solutions for convex conic systems of constraints, Math. Program., Ser. A 87(3), 351–383 (2000).

  37. J. Renegar, Some perturbation theory for linear programming, Math. Program., Ser. A 65(1), 73–91 (1994).

  38. J. Renegar, Incorporating condition measures into the complexity theory of linear programming, SIAM J. Optim. 5(3), 506–524 (1995).

  39. J. Renegar, Linear programming, complexity theory and elementary functional analysis, Math. Program., Ser. A 70(3), 279–351 (1995).

  40. R. Schneider, Convex Bodies: The Brunn–Minkowski Theory. Encyclopedia of Mathematics and Its Applications, Vol. 44 (Cambridge University Press, Cambridge, 1993).

  41. R. Schneider, W. Weil, Stochastic and Integral Geometry. Probability and Its Applications (Springer, Berlin, 2008).

  42. S. Smale, Complexity theory and numerical analysis, in Acta Numerica, 1997. Acta Numer., Vol. 6 (Cambridge University Press, Cambridge, 1997), pp. 523–551.

  43. M. Spivak, Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus (Benjamin, New York/Amsterdam, 1965).

  44. M. Spivak, A Comprehensive Introduction to Differential Geometry, Vol. I, 3rd edn. (Publish or Perish, Houston, 1999).

  45. R.P. Stanley, Log-concave and unimodal sequences in algebra, combinatorics, and geometry, in Graph Theory and Its Applications: East and West, Jinan, 1986. Ann. New York Acad. Sci., Vol. 579 (N.Y. Acad. Sci., New York, 1989), pp. 500–535.

  46. J.A. Thorpe, Elementary Topics in Differential Geometry. Undergraduate Texts in Mathematics (Springer, New York, 1994).

  47. J.R. Vera, Ill-posedness and the complexity of deciding existence of solutions to linear programs, SIAM J. Optim. 6(3), 549–569 (1996).

  48. J.R. Vera, On the complexity of linear programming under finite precision arithmetic, Math. Program., Ser. A 80(1), 91–123 (1998).

  49. J.C. Vera, J.C. Rivera, J. Peña, Y. Hui, A primal-dual symmetric relaxation for homogeneous conic systems, J. Complex. 23(2), 245–261 (2007).

  50. H. Weyl, On the volume of tubes, Am. J. Math. 61(2), 461–472 (1939).


Acknowledgements

We are grateful to one of the anonymous referees for criticism that has led to a substantial improvement of the paper’s presentation.

This work has been supported by the DFG Research Training Group on Scientific Computation GRK 693 (PaSCo GK) and DFG grants BU 1371/2-2, AM 386/1-1, and AM 386/1-2.

Author information

Correspondence to Dennis Amelunxen.

Additional information

Communicated by Michael J. Todd.

Dedicated to Mike Shub, on the occasion of his 70th birthday.

Appendices

Appendix A: Averaging the Twisted Characteristic Polynomial

Recall the setting of Sect. 4.2. Let \(A\in\mathbb{R}^{k\times k}\) and \(\varphi\colon\mathbb{R} ^{k}\to\mathbb{R} ^{k},x\mapsto Ax\). Using the notation \(\operatorname{ch}_{\ell}(A,t) = \operatorname{ch}_{Y_{0}}(\varphi,t)\), where \(Y_{0} =\mathbb{R} ^{\ell}\times0\), we have

$$ \underset{Y}{\mathbb{E}} \bigl[ \operatorname{ch}_Y( \varphi ,t) \bigr] = \underset{Q}{\mathbb{E}} \bigl[ \operatorname{ch}_\ell \bigl(Q^{\mathrm{T}}A Q,t\bigr) \bigr] . $$
(A.1)

(The reason is that \(QY_{0}\in\operatorname{Gr}_{k,\ell}\) is uniformly distributed when \(Q\in O(k)\) is chosen uniformly at random.) This equality allows us to prove Theorem 4.1 with basic matrix calculus. The main idea is to use the multilinearity of the determinant and the invariance of the coefficients \(\sigma_{i}(A)\) under similarity transformations to show that the coefficients of \(\mathbb{E}[\operatorname{ch}_{\ell}(Q^{\mathrm{T}}AQ,t)]\) are linear combinations of the \(\sigma_{i}(A)\). We then compute the coefficients of these linear combinations by choosing for A scalar multiples of the identity matrix.

In the following, we use the notation [k]:={1,…,k}, and we denote by \(\binom{[k]}{i}\) the set of all i-element subsets of [k]. For \(A\in\mathbb{R}^{k\times k}\) and \(J\in\binom {[k]}{i}\), we denote by \(\operatorname{pm}_{J}(A) := \det(A_{J})\) the Jth principal minor of A, where \(A_{J}\) denotes the submatrix of A obtained by selecting the rows and columns of A whose indices lie in J. It is well known (cf. for example [28, Theorem 1.2.12]) that \(\sigma_{i}(A)\) is the sum of all principal minors of A of size i, i.e.,

$$ \sigma_i(A) = \sum_{J\in\binom{[k]}{i}} \operatorname{pm}_J(A) . $$
(A.2)

For 0≤ℓ≤k, the ℓth leading principal minor \(\operatorname{lpm}_{\ell}(A) := \operatorname{pm}_{[\ell]}(A)\) is defined as the principal minor for J=[ℓ]. Note that \(\operatorname{lpm}_{\ell}(A) = \det_{\mathbb {R}^{\ell}\times 0}(\varphi)\).
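Identity (A.2) is easy to verify numerically. The following sketch (ours, not from the paper; Python with NumPy) compares \(\sigma_{i}(A)\), read off from the characteristic polynomial, with the sum of the i×i principal minors:

```python
import itertools
import numpy as np

def sigma(A, i):
    """sigma_i(A): i-th elementary symmetric function of the eigenvalues.
    Since det(t*I - A) = t^k - sigma_1 t^(k-1) + - ..., the coefficient
    of t^(k-i) equals (-1)^i * sigma_i(A)."""
    return (-1) ** i * np.poly(A)[i]

def sum_principal_minors(A, i):
    """Right-hand side of (A.2): sum of all i x i principal minors of A."""
    k = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(J, J)])
               for J in itertools.combinations(range(k), i))

A = np.random.default_rng(0).standard_normal((5, 5))
for i in range(1, 6):
    assert np.isclose(sigma(A, i), sum_principal_minors(A, i))
```

For i=1 this is the trace, for i=k the determinant; the intermediate cases are exactly the sums in (A.2).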

Lemma A.1

Let \(A\in\mathbb{R}^{k\times k}\), and let \(Q\in O(k)\) be chosen uniformly at random. Then, for all \(J\in\binom{[k]}{\ell}\), we have

$$ \underset{Q}{\mathbb{E}} \bigl[\operatorname{pm}_J \bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \underset{Q}{\mathbb{E}} \bigl[ \operatorname{lpm}_\ell\bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \frac{1}{\binom{k}{\ell}} \sigma_\ell(A) . $$

Proof

For the first equality let \(J=\{j_{1},\ldots,j_{\ell}\}\), \(j_{1}<\cdots<j_{\ell}\), and let π be any permutation of [k] such that \(\pi(i)=j_{i}\) for all i=1,…,ℓ. If \(M_{\pi}\) denotes the permutation matrix corresponding to π, i.e., \(M_{\pi}e_{i}=e_{\pi(i)}\), then \(A_{J}=(M_{\pi}^{\mathrm{T}} A M_{\pi})_{[\ell]}\), and therefore \(\operatorname{pm}_{J}(A)=\operatorname{lpm}_{\ell}( M_{\pi}^{\mathrm{T}} A M_{\pi})\). This implies

$$ \underset{Q}{\mathbb{E}} \bigl[\operatorname{pm}_J \bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \underset{Q}{\mathbb{E}} \bigl[ \operatorname{lpm}_\ell\bigl(M_\pi^{\mathrm{T}} Q^{\mathrm{T}} A Q M_\pi \bigr) \bigr] = \underset{Q}{\mathbb{E}} \bigl[\operatorname{lpm}_\ell\bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] , $$

where we have used the fact that right multiplication by the fixed element \(M_{\pi}\) leaves the uniform distribution on O(k) invariant. Combining this with (A.2) and the invariance of \(\sigma_{\ell}\) under similarity transformations, we obtain

$$ \binom{k}{\ell}\cdot\underset{Q}{\mathbb{E}} \bigl[\operatorname{lpm}_\ell\bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \sum_{J\in\binom{[k]}{\ell}} \underset{Q}{\mathbb{E}} \bigl[\operatorname{pm}_J\bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \underset{Q}{\mathbb{E}} \bigl[\sigma_\ell\bigl(Q^{\mathrm{T}} A Q\bigr) \bigr] = \sigma_\ell(A) , $$

which was to be shown. □
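The proof rests on two exact identities: the permutation trick \(\operatorname{pm}_{J}(A)=\operatorname{lpm}_{\ell}(M_{\pi}^{\mathrm{T}}AM_{\pi})\), and the fact that summing \(\operatorname{pm}_{J}\) over all ℓ-subsets gives the similarity-invariant \(\sigma_{\ell}(A)\). Both can be checked numerically; a sketch (ours; Python with NumPy, 0-based indices):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
k, ell = 5, 3
A = rng.standard_normal((k, k))

def sigma(M, i):
    """Sum of all i x i principal minors of M, as in (A.2)."""
    return sum(np.linalg.det(M[np.ix_(S, S)])
               for S in itertools.combinations(range(M.shape[0]), i))

# Similarity invariance: sigma_ell(Q^T A Q) = sigma_ell(A) for Q in O(k).
Z = rng.standard_normal((k, k))
Q, R = np.linalg.qr(Z)
Q = Q * np.sign(np.diag(R))   # sign fix; Q is Haar-distributed in O(k)
assert np.isclose(sigma(Q.T @ A @ Q, ell), sigma(A, ell))

# Permutation trick: pm_J(A) = lpm_ell(M_pi^T A M_pi) with pi(i) = j_i.
J = (1, 2, 4)
pi = list(J) + [i for i in range(k) if i not in J]
M = np.eye(k)[:, pi]          # column i of M is e_{pi(i)}
assert np.isclose(np.linalg.det(A[np.ix_(J, J)]),
                  np.linalg.det((M.T @ A @ M)[:ell, :ell]))
```

Both checks hold to machine precision, since no averaging is involved; only the final step of Lemma A.1 takes an expectation over Q.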

Proof of Theorem 4.1

Statement (4.4) follows from (A.1) and Lemma A.1.

For proving (4.5), we start with a general observation. By multilinearity, we write the determinant of a matrix with columns \(v_{1},\ldots,v_{i-1},v_{i}+\alpha e_{i},v_{i+1},\ldots,v_{n}\) in the form

$$\begin{aligned} &\det(v_1,\ldots,v_{i-1},v_i+\alpha e_i,v_{i+1},\ldots,v_n)\\ &\quad{}= \det(v_1,\ldots,v_n) + \alpha\det(v_1,\ldots,v_{i-1},e_i, v_{i+1},\ldots,v_n) . \end{aligned}$$

Using this repeatedly, we obtain

$$ \det\bigl(A+\operatorname{diag}(\alpha_1,\ldots, \alpha_k)\bigr) = \sum_{J\subseteq[k]} \operatorname{pm}_J(A)\cdot\prod_{i\in[k]\setminus J} \alpha_i . $$
(A.3)
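Expansion (A.3) can also be confirmed numerically; the following sketch (ours, not from the paper) evaluates the right-hand side by brute force over all \(2^{k}\) subsets:

```python
import itertools
import numpy as np

def det_via_minors(A, alpha):
    """Right-hand side of (A.3): sum over all subsets J of [k] of
    pm_J(A) times the product of alpha_i over i outside J."""
    k = A.shape[0]
    total = 0.0
    for r in range(k + 1):
        for J in itertools.combinations(range(k), r):
            rest = [i for i in range(k) if i not in J]
            # det of the empty (0 x 0) submatrix is 1, matching pm_{}(A) = 1
            total += (np.linalg.det(A[np.ix_(J, J)])
                      * np.prod([alpha[i] for i in rest]))
    return total

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
alpha = rng.standard_normal(4)
assert np.isclose(np.linalg.det(A + np.diag(alpha)), det_via_minors(A, alpha))
```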

By (4.3) we can express the twisted characteristic polynomial as

$$\operatorname{ch}_\ell(A,t) =t^{k-\ell}\cdot\det\bigl(A+ \operatorname {diag}(-t,\ldots ,-t,1/t,\ldots,1/t)\bigr) . $$

Applying (A.3), we expand this to obtain

$$ \operatorname{ch}_\ell(A,t) = \sum _{J\subseteq[k]} (-1)^{c_1(J)}\cdot \operatorname{pm}_J(A) \cdot t^{c_2(J)} , $$
(A.4)

where \(c_{1},c_{2}\colon2^{[k]}\to\mathbb{N}\) are some integer-valued functions on the power set of [k]. Averaging the twisted characteristic polynomial with the help of Lemma A.1 yields

$$\begin{aligned} \underset{Q}{\mathbb{E}} \bigl[\operatorname{ch}_\ell \bigl(Q^{\mathrm{T}} A Q,t\bigr) \bigr] & = \sum_{J\subseteq[k]} (-1)^{c_1(J)}\cdot \underset{Q}{\mathbb{E}} \bigl[\operatorname{pm}_J \bigl(Q^{\mathrm{T}} A Q\bigr) \bigr]\cdot t^{c_2(J)} \\ &{} = \sum_{J\subseteq[k]} \frac{(-1)^{c_1(J)}}{\binom{k}{|J|}}\cdot \sigma_{|J|}(A) t^{c_2(J)} = \sum_{i,j=0}^k \tilde{d}_{ij}\cdot\sigma_{k-j}(A)\cdot t^{k-i} , \end{aligned}$$

for some rational constants \(\tilde{d}_{ij}\). It remains to prove that the \(\tilde{d}_{ij}\) coincide with the \(d_{ij}\) defined in Theorem 4.1.

To compute the \(\tilde{d}_{ij}\), we choose \(A=sI_{k}\). Then, \(\operatorname{ch}_{\ell}(s I_{k},t) = (s-t)^{\ell}\cdot(1+s t)^{k-\ell}\). Since \(\sigma_{k-j}(sI_{k})=\binom{k}{j}s^{k-j}\), and \(Q^{\mathrm{T}} sI_{k} Q=sI_{k}\), we get

$$ (s-t)^\ell\cdot(1+s t)^{k-\ell} = \operatorname{ch}_\ell(s I_k,t) = \underset{Q}{\mathbb{E}} \bigl[\operatorname{ch}_\ell \bigl(Q^{\mathrm{T}} s I_k Q,t\bigr) \bigr] = \sum _{i,j=0}^k \tilde{d}_{ij} \binom{k}{j} s^{k-j} t^{k-i} . $$

Let us expand the first term:

$$ \begin{aligned}[b] (s-t)^\ell\cdot(1 + s t)^{k-\ell} & = \left(\sum_{\lambda=0}^\ell\binom{\ell}{\lambda} (-1)^{\ell -\lambda} s^\lambda t^{\ell-\lambda}\right)\cdot \left(\sum_{\mu=0}^{k-\ell} \binom{k-\ell}{\mu} s^\mu t^\mu \right) \\ &{} = \sum_{\lambda=0}^\ell\sum_{\mu=0}^{k-\ell} (-1)^{\ell -\lambda} \binom{\ell}{\lambda} \binom{k-\ell}{\mu} s^{\lambda+\mu} t^{\ell-\lambda+\mu} \\ &\!\!\!\!\!\!\!\!\!\!\!\!\! {}\stackrel{\substack{i=k-\ell+\lambda-\mu\\ j=k-\lambda-\mu}}{=} \sum_{\substack{i,j=0\\i+j+\ell\equiv0\\\pmod{2}}}^k (-1)^{\frac{i-j}{2}-\frac{\ell}{2}}\binom{\ell}{\frac {i-j}{2}+\frac{\ell}{2}} \binom{k-\ell}{k-\frac{i+j+\ell}{2}} s^{k-j} t^{k-i} , \end{aligned} $$
(A.5)

where again we interpret \(\binom{n}{m}=0\) if m<0 or m>n, i.e., the above summation over i, j in fact only runs over the rectangle determined by the inequalities \(0\leq\frac{i-j}{2}+\frac{\ell}{2}\leq\ell\) and \(0\leq k-\frac{i+j+\ell}{2}\leq k-\ell\). (See Fig. 1 for an illustration of the change of summation.) Note that the reverse substitution is given by \(\lambda=\frac {i-j+\ell}{2}\) and \(\mu=k-\frac{i+j+\ell}{2}\).

Fig. 1  Illustration of the change of summation in (A.5) (k=8, ℓ=5)

Comparing the coefficients of the above two expressions for \((s-t)^{\ell}\cdot(1+st)^{k-\ell}\) reveals that indeed \(\tilde{d}_{ij}=d_{ij}\) as defined in Theorem 4.1. This completes the proof of (4.5).
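The change of summation behind this coefficient comparison is easy to get wrong, so a numerical check is reassuring. The sketch below (ours, not from the paper) evaluates both sides of (A.5) for the parameters of Fig. 1:

```python
from math import comb
import numpy as np

k, ell = 8, 5  # parameters of Fig. 1

def lhs(s, t):
    return (s - t) ** ell * (1 + s * t) ** (k - ell)

def rhs(s, t):
    """Double sum over i, j from (A.5); terms with binomials out of
    range (i.e. binom(n, m) with m < 0 or m > n) are dropped."""
    total = 0.0
    for i in range(k + 1):
        for j in range(k + 1):
            if (i + j + ell) % 2:
                continue                      # need i + j + ell even
            lam = (i - j + ell) // 2          # lambda = (i-j+ell)/2
            mu = k - (i + j + ell) // 2       # mu = k - (i+j+ell)/2
            if not (0 <= lam <= ell and 0 <= mu <= k - ell):
                continue
            total += ((-1) ** (ell - lam) * comb(ell, lam)
                      * comb(k - ell, mu) * s ** (k - j) * t ** (k - i))
    return total

for s, t in [(1.7, 0.3), (1.7, -1.2), (0.4, 2.5)]:
    assert np.isclose(lhs(s, t), rhs(s, t))
```

The two admissible ranges for λ and μ translate exactly into the rectangle of (i, j) pairs described after (A.5).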

For the last claim (4.6) in Theorem 4.1, we define the positive characteristic polynomial

$$ \operatorname{ch}_\ell^+(A,t) = \det \left(\begin{array}{c@{\quad}c} A_1+tI_\ell& A_2 \\ tA_3 & tA_4+I_{k-\ell} \end{array}\right) $$

by replacing −t by t in (4.3). By the same reasoning as for (A.4), we get

$$ \operatorname{ch}^+_\ell(A,t) = \sum_{J\subseteq[k]} \operatorname {pm}_J(A)\cdot t^{c_2(J)} , $$
(A.6)

with the same function \(c_{2}\colon2^{[k]}\to\mathbb{N}\) as in (A.4). Moreover, by arguing as in the proof of (4.5) before, we show that

$$ \underset{Q}{\mathbb{E}} \bigl[\operatorname{ch}^+_\ell \bigl(Q^{\mathrm{T}}AQ,t\bigr) \bigr] = \sum_{i,j=0}^k |d_{ij}|\cdot\sigma_{k-j}(A)\cdot t^{k-i} . $$
(A.7)

Now assume that A is positive semidefinite. Then each of its principal minors is nonnegative, i.e., \(\operatorname{pm}_{J}(A)\geq0\) for all J⊆[k]. Therefore, if t≥0, we get from (A.4) and (A.6) that

$$ \bigl|\operatorname{ch}_\ell(A,t)\bigr| = \biggl| \sum_{J\subseteq[k]} (-1)^{c_1(J)}\cdot\operatorname {pm}_J(A)\cdot t^{c_2(J)} \biggr| \leq\sum_{J\subseteq[k]} \operatorname{pm}_J(A)\cdot t^{c_2(J)} = \operatorname{ch}^+_\ell(A,t) . $$

Taking into account (A.7) completes the proof of Theorem 4.1. □

Appendix B: Proof of Some Technical Estimations

Recall that a dedicated symbol is used in the text to mark simple estimates that are easily checked with a computer algebra system.

Proof of Lemma 5.2

(1) We make a case distinction by the parity of ℓ. Using Γ(x+1)=xΓ(x), for odd ℓ we obtain

$$ \frac{\varGamma(\frac{m+\ell+1}{2})}{\varGamma(\frac{m}{2})} = \prod_{a=0}^{\frac{\ell-1}{2}} \biggl( \frac{m}{2}+a \biggr) \leq\frac{m}{2}\cdot \biggl(\frac{m+\ell-1}{2} \biggr)^{\frac{\ell-1}{2}} \leq\sqrt{\frac{m}{2}}\cdot \biggl(\frac{m+\ell}{2} \biggr)^{\frac {\ell}{2}} . $$

Using additionally \(\varGamma(x+\frac{1}{2})<\sqrt{x}\cdot\varGamma(x)\), for even ℓ we get

$$ \frac{\varGamma(\frac{m+\ell+1}{2})}{\varGamma(\frac{m}{2})} = \prod_{a=0}^{\frac{\ell}{2}-1} \biggl(\frac{m+1}{2}+a \biggr) \cdot\frac{\varGamma(\frac{m+1}{2})}{\varGamma(\frac{m}{2})} < \biggl( \frac{m+\ell}{2} \biggr)^{\frac{\ell}{2}}\cdot\sqrt {\frac{m}{2}} . $$
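Both cases yield the same unified bound \(\varGamma(\frac{m+\ell+1}{2})/\varGamma(\frac{m}{2}) \leq \sqrt{m/2}\cdot((m+\ell)/2)^{\ell/2}\), which can be probed numerically via log-Gamma. A sketch (ours, not from the paper):

```python
from math import lgamma, log

def log_ratio(m, ell):
    """log of Gamma((m + ell + 1)/2) / Gamma(m/2)."""
    return lgamma((m + ell + 1) / 2) - lgamma(m / 2)

def log_bound(m, ell):
    """log of sqrt(m/2) * ((m + ell)/2)^(ell/2)."""
    return 0.5 * log(m / 2) + (ell / 2) * log((m + ell) / 2)

# check the bound on a grid of parameters
for m in range(1, 31):
    for ell in range(1, 16):
        assert log_ratio(m, ell) <= log_bound(m, ell) + 1e-12
```

Working on the log scale avoids the overflow that the raw Gamma values would cause already for moderate m and ℓ.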

(2) As for the second estimate, we distinguish the cases i≥2k and i≤2k. From 0≤k≤m−1 and 0≤i−k≤n−m−1, we get

$$\begin{aligned} 1 \leq& m-k+i-k \leq n-1 , \\ 1 \leq& n-(m-k+i-k) \leq n-1 . \end{aligned}$$

For i≥2k we thus have

$$ \biggl(\frac{m+i-2k}{n-m-i+2k} \biggr)^{\frac{i-2k}{2}} \leq(n-1)^{\frac{i-2k}{2}} < n^{\frac{i}{2}} , $$

and for i≤2k

$$ \biggl(\frac{m+i-2k}{n-m-i+2k} \biggr)^{\frac{i-2k}{2}} = \biggl(\frac{n-m+2k-i}{m-2k+i} \biggr)^{\frac{2k-i}{2}} \leq(n-1)^{\frac{2k-i}{2}} < n^{\frac{i}{2}} . $$

(3) The I-functions have been estimated in [10, Lemma 2.2] in the following way. Let \(\varepsilon:=\sin(\alpha)=\frac{1}{t}\). For i<n−2,

$$ I_{n,n-2-i}(\alpha) = \int_0^\alpha\cos(\rho)^{n-2-i}\cdot\sin (\rho)^i\,\mathrm{d}\rho \leq\frac{\varepsilon^{i+1}}{i+1} , $$

and for i=n−2, assuming n≥3,

With these estimates we obtain

For \(\varepsilon<n^{-\frac{3}{2}}\) we thus get

Similarly, we derive

$$\sum_{i=0}^{n-2} \binom{n-2}{i} \cdot I_{n,n-2-i}(\alpha) < \varepsilon\cdot \biggl((1+\varepsilon)^{n-2} + \frac{1.4}{\sqrt{n}}\cdot\varepsilon^{n-2} \biggr) . $$

For \(\varepsilon<\frac{1}{m}\) we thus have

which proves the assertion. □
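As an aside, the basic estimate \(I_{n,n-2-i}(\alpha)\leq\varepsilon^{i+1}/(i+1)\) for i<n−2, which follows from the substitution u=sin(ρ), can be checked by direct quadrature. A sketch (ours, not from the paper):

```python
import numpy as np

def I(n, i, alpha, num=200000):
    """I_{n, n-2-i}(alpha) = integral over [0, alpha] of
    cos(r)^(n-2-i) * sin(r)^i dr, evaluated by the midpoint rule."""
    r = (np.arange(num) + 0.5) * alpha / num
    return np.sum(np.cos(r) ** (n - 2 - i) * np.sin(r) ** i) * alpha / num

n, alpha = 10, 0.4
eps = np.sin(alpha)
for i in range(n - 2):  # the estimate is only claimed for i < n - 2
    assert I(n, i, alpha) <= eps ** (i + 1) / (i + 1) + 1e-9
```

For i = n−3 the cosine factor is linear and the estimate is an equality, which is why a small quadrature tolerance is needed in the comparison.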

About this article

Cite this article

Amelunxen, D., Bürgisser, P. Probabilistic Analysis of the Grassmann Condition Number. Found Comput Math 15, 3–51 (2015). https://doi.org/10.1007/s10208-013-9178-4
