High-resolution signal recovery via generalized sampling and functional principal component analysis

Published in: Advances in Computational Mathematics
Abstract

In this paper, we introduce a computational framework for recovering a high-resolution approximation of an unknown function from its low-resolution indirect measurements as well as high-resolution training observations by merging the frameworks of generalized sampling and functional principal component analysis. In particular, we increase the signal resolution via a data-driven approach, which models the function of interest as a realization of a random field and leverages a training set of observations generated via the same underlying random process. We study the performance of the resulting estimation procedure and show that high-resolution recovery is indeed possible provided appropriate low rank and angle conditions hold and provided the training set is sufficiently large relative to the desired resolution. Moreover, we show that the size of the training set can be reduced by leveraging sparse representations of the functional principal components. Furthermore, the effectiveness of the proposed reconstruction procedure is illustrated by various numerical examples.
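To make the two-step procedure concrete, the following minimal numerical sketch (a hypothetical synthetic setup, not the code used in the paper) mimics it: functional principal components are first estimated from high-resolution training signals, and the component coefficients of a new signal are then fitted by least squares from its low-resolution measurements. The dimensions, the block-averaging measurement operator `A`, and the exactly rank-`m` random-field model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q, m, n = 64, 16, 3, 500  # high-res dim, low-res dim, rank, training-set size

# Hypothetical random-field model: realizations lie in an m-dimensional subspace.
U, _ = np.linalg.qr(rng.standard_normal((p, m)))
train = (rng.standard_normal((n, m)) * np.array([3.0, 2.0, 1.0])) @ U.T

# Hypothetical low-resolution measurement operator: local averaging.
A = np.zeros((q, p))
block = p // q
for k in range(q):
    A[k, k * block:(k + 1) * block] = 1.0 / block

# Step 1 (PCA on the training set): estimate the mean and leading components.
mu = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mu, full_matrices=False)
Phi = Vt[:m].T                                   # p x m estimated components

# Step 2 (generalized-sampling-type fit): least squares in the learned subspace.
f = np.array([2.0, -1.0, 0.5]) @ U.T             # unseen signal from the same model
y = A @ f                                        # its low-resolution measurements
alpha, *_ = np.linalg.lstsq(A @ Phi, y - A @ mu, rcond=None)
f_hat = mu + Phi @ alpha                         # high-resolution reconstruction

rel_err = np.linalg.norm(f_hat - f) / np.linalg.norm(f)
```

In this noiseless, exactly low-rank toy model the recovery is exact up to numerical precision; the results of the paper quantify how noise, the training-set size, and the angle between the learned subspace and the sampling space degrade this.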


References

  1. Adcock, B., Gataric, M., Hansen, A.C.: On stable reconstructions from nonuniform Fourier measurements. SIAM J. Imaging Sci. 7(3), 1690–1723 (2014a)


  2. Adcock, B., Gataric, M., Romero, J.L.: Computing reconstructions from nonuniform Fourier samples: Universality of stability barriers and stable sampling rates. Appl. Comput. Harmon. Anal. 46(2), 226–249 (2019)


  3. Adcock, B., Hansen, A.C.: A generalized sampling theorem for stable reconstructions in arbitrary bases. J. Fourier Anal. Appl. 18(4), 685–716 (2012)


  4. Adcock, B., Hansen, A.C.: Generalized sampling and infinite-dimensional compressed sensing. Found. Comput. Math. 16(5), 1263–1323 (2016)


  5. Adcock, B., Hansen, A.C., Kutyniok, G., Ma, J.: Linear stable sampling rate: Optimality of 2D wavelet reconstructions from Fourier measurements. SIAM J. Math. Anal. 47(2), 1196–1233 (2015)


  6. Adcock, B., Hansen, A.C., Poon, C.: Beyond consistent reconstructions: Optimality and sharp bounds for generalized sampling, and application to the uniform resampling problem. SIAM J. Math. Anal. 45(5), 3132–3167 (2013)


  7. Adcock, B., Hansen, A.C., Poon, C.: On optimal wavelet reconstructions from Fourier samples: Linearity and universality of the stable sampling rate. Appl. Comput. Harmon. Anal. 36(3), 387–415 (2014b)


  8. Adcock, B., Hansen, A.C., Poon, C., Roman, B.: Breaking the coherence barrier: A new theory for compressed sensing. Forum of Math. Sigma 5, e4 (2017)


  9. Arridge, S., Maass, P., Öktem, O., Schönlieb, C.-B.: Solving inverse problems using data-driven models. Acta Numerica 28, 1–174 (2019)


  10. Babuška, I., Osborn, J.: Estimates for the errors in eigenvalue and eigenvector approximation by Galerkin methods, with particular attention to the case of multiple eigenvalues. SIAM J. Numer. Anal. 24(6), 1249–1276 (1987)


  11. Baker, S., Kanade, T.: Hallucinating faces. In: Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580), pp 83–88 (2000)

  12. Blu, T., Dragotti, P., Vetterli, M., Marziliano, P., Coulot, L.: Sparse sampling of signal innovations. IEEE Signal Process. Mag. 25(2), 31–40 (2008)


  13. Candès, E.J., Fernandez-Granda, C.: Towards a mathematical theory of super-resolution. Commun. Pure Appl. Math. 67(6), 906–956 (2014)


  14. Candès, E.J., Romberg, J., Tao, T.: Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 52(2), 489–509 (2006)

  15. Capel, D., Zisserman, A.: Super-resolution from multiple views using learnt image models. In: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, vol. 2 (2001)

  16. Chiew, M., Graedel, N.N., McNab, J.A., Smith, S.M., Miller, K.L.: Accelerating functional MRI using fixed-rank approximations and radial-cartesian sampling. Magn. Reson. Med. 76(6), 1825–1836 (2016)


  17. Cohen, A., Daubechies, I., Vial, P.: Wavelets on the interval and fast wavelet transforms. Appl. Comput. Harmon. Anal. 1(1), 54–81 (1993)

  18. Cohen, A., Davenport, M.A., Leviatan, D.: On the stability and accuracy of least squares approximations. Found. Comput. Math. 13(5), 819–834 (2013)

  19. d’Aspremont, A., El Ghaoui, L., Jordan, M.I., Lanckriet, G.R.G.: A direct formulation for sparse PCA using semidefinite programming. SIAM Rev. 49(3), 434–448 (2007)


  20. Davis, C., Kahan, W.M.: The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal. 7(1), 1–46 (1970)


  21. Donoho, D.L.: Compressed sensing. IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006)

  22. Eldar, Y.C.: Sampling with arbitrary sampling and reconstruction spaces and oblique dual frame vectors. J. Fourier Anal. Appl., 77–96 (2003)

  23. Gataric, M., Gordon, G.S.D., Renna, F., Ramos, A.G.C.P., Alcolea, M.P., Bohndiek, S.E.: Reconstruction of optical vector-fields with applications in endoscopic imaging. IEEE Trans. Med. Imaging 38(4), 955–967 (2019)


  24. Gataric, M., Poon, C.: A practical guide to the recovery of wavelet coefficients from Fourier measurements. SIAM J. Sci. Comput. 38(2), A1075–A1099 (2016)


  25. Gataric, M., Wang, T., Samworth, R.J.: Sparse principal component analysis via axis-aligned random projections. J. R. Stat. Soc. Ser. B (2020)

  26. Gunturk, B.K., Batur, A.U., Altunbasak, Y., Hayes, M.H., Mersereau, R.M.: Eigenface-domain super-resolution for face recognition. IEEE Trans. Image Process. 12(5), 597–606 (2003)


  27. Hall, P., Müller, H.-G., Wang, J.-L.: Properties of principal component methods for functional and longitudinal data analysis. Ann. Stat. 34(3), 1493–1517 (2006)

  28. Hrycak, T., Gröchenig, K.: Pseudospectral Fourier reconstruction with the modified inverse polynomial reconstruction method. J. Comput. Phys. 229(3), 933–946 (2010)

  29. Hsu, D., Kakade, S.M., Zhang, T.: Random design analysis of ridge regression. In: Mannor, S., Srebro, N., Williamson, R.C. (eds.) Proceedings of the 25th Annual Conference on Learning Theory. Edinburgh, Scotland, vol. 23, pp 9.1–9.24 (2012a)

  30. Hsu, D., Kakade, S.M., Zhang, T.: A tail inequality for quadratic forms of subgaussian random vectors. Electron. Commun. Probab. 17(52), 6 (2012b)


  31. Johnstone, I.M., Lu, A.Y.: On consistency and sparsity for principal components analysis in high dimensions. J. Am. Stat. Assoc. 104(486), 682–693 (2009)


  32. Joly, E., Lugosi, G., Oliveira, R.I.: On the estimation of the mean of a random vector. Electron. J. Stat. 11(1), 440–451 (2017)


  33. Koltchinskii, V., Lounici, K.: Concentration inequalities and moment bounds for sample covariance operators. Bernoulli 23(1), 110–133 (2017a)


  34. Koltchinskii, V., Lounici, K.: New asymptotic results in principal component analysis. Sankhya A 79(2) (2017b)

  35. Lila, E., Arridge, S., Aston, J.A.D.: Representation and reconstruction of covariance operators in linear inverse problems (2019)

  36. Lingala, S.G., Hu, Y., DiBella, E., Jacob, M.: Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Trans. Med. Imaging 30(5), 1042–1054 (2011)

  37. Liu, C., Shum, H. -Y., Freeman, W.T.: Face Hallucination: Theory and Practice. Int. J. Comput. Vis. 75, 115–134 (2007)


  38. Ma, Z.: Sparse principal component analysis and iterative thresholding. Ann. Stat. 41(2), 772–801 (2013)


  39. Mallat, S.: A Wavelet Tour of Signal Processing, Third Edition: The Sparse Way, 3rd edn. Academic Press, Inc., Orlando (2008)


  40. Ramsay, J., Silverman, B.W.: Functional Data Analysis. Springer Series in Statistics. Springer, Berlin (2005)


  41. Ravishankar, S., Ye, J.C., Fessler, J.A.: Image reconstruction: From sparsity to data-adaptive methods and machine learning. Proc. IEEE 108, 86–109 (2019)


  42. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(3), 379–423 (1948)


  43. Unser, M., Aldroubi, A.: A general sampling theory for nonideal acquisition devices. IEEE Trans. Signal Process. 42(11), 2915–2925 (1994)


  44. Vu, V.Q., Lei, J.: Minimax sparse principal subspace estimation in high dimensions. Ann. Stat. 41(6), 2905–2947 (2013)


  45. Wang, T., Berthet, Q., Samworth, R.J.: Statistical and computational trade-offs in estimation of sparse principal components. Ann. Stat. 44(5), 1896–1930 (2016)

  46. Weyl, H.: Das asymptotische Verteilungsgesetz der Eigenwerte linearer partieller Differentialgleichungen (mit einer Anwendung auf die Theorie der Hohlraumstrahlung). Math. Ann., 441–479 (1912)

  47. Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010)


  48. Zhao, B., Haldar, J.P., Christodoulou, A.G., Liang, Z.: Image reconstruction from highly undersampled (k, t)-space data with joint partial separability and sparsity constraints. IEEE Trans. Med. Imaging 31(9), 1809–1820 (2012)


  49. Zhao, B., Setsompop, K., Adalsteinsson, E., Gagoski, B., Ye, H., Ma, D., Jiang, Y., Ellen Grant, P., Griswold, M.A., Wald, L.L.: Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling. Magn. Reson. Med. 79(2), 933–942 (2018)


  50. Zou, H., Hastie, T., Tibshirani, R.: Sparse principal component analysis. J. Comput. Graph. Stat. 15(2), 265–286 (2006)



Acknowledgements

The author would like to thank Ben Adcock, Clarice Poon, Alberto Gil Ramos, Richard Samworth and Carola-Bibiane Schönlieb for useful discussions and comments.

Funding

The author was supported by an EPSRC grant EP/N014588/1 for the Centre for Mathematical and Statistical Analysis of Multimodal Clinical Imaging.

Author information

Corresponding author

Correspondence to Milana Gataric.

Ethics declarations

Conflict of interest

The author declares no competing interests.

Additional information

Communicated by: Felix Krahmer

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix. Proofs of theoretical results

Proof of Lemma 1

Observe that \(\sin\angle(\mathcal{E}_{m},\hat{\mathcal{E}}^{p}_{m})\leq \sin\angle(\mathcal{E}_{m},\mathcal{E}^{p}_{m}) + \sin\angle(\mathcal{E}^{p}_{m},\hat{\mathcal{E}}^{p}_{m})\), where \(\mathcal{E}^{p}_{m}:=\operatorname{span}\{\phi_{1}^{p},\ldots,\phi_{m}^{p}\}\) and \(\{(\phi_{j}^{p},\lambda_{j}^{p})\}_{j=1}^{p}\) are the eigenfunction–eigenvalue pairs of the covariance operator \(K^{p}\) associated with the random variable \(Q_{\mathcal{G}_{p}}F\). Recall that \(e^{p}_{j}:=(\langle \phi_{j}^{p},\varphi_{1}\rangle,\ldots,\langle \phi_{j}^{p},\varphi_{p}\rangle)^{\top}\) is an eigenvector of \(\Sigma_{X}\) with eigenvalue \(\lambda_{j}^{p}\). To upper-bound \(\sin\angle(\mathcal{E}_{m},\mathcal{E}^{p}_{m})\), first note that for any \(k,j=1,\ldots,p\), we have

$$ \lambda_{j}^{p} \langle \phi_{j}^{p},\varphi_{k} \rangle = \sum_{\ell=1}^{p} \Sigma_{X}^{(k\ell)}\langle \phi_{j}^{p},\varphi_{\ell} \rangle = \int_{D}\int_{D} K(u,v)\, \overline{\varphi_{k}(u)}\, \phi_{j}^{p}(v) \,\mathrm{d}u\,\mathrm{d}v, $$

where we used \(\lambda^{p}_{j} e^{p}_{j}=\Sigma_{X} e^{p}_{j}\) and \(\Sigma_{X}^{(k\ell)}=\mathbb{E}\bigl[\langle F-\mu,\varphi_{k}\rangle \overline{\langle F-\mu,\varphi_{\ell}\rangle}\bigr] = \sum_{j\in\mathbb{N}}\lambda_{j} \langle \phi_{j},\varphi_{k}\rangle \overline{\langle \phi_{j},\varphi_{\ell}\rangle} = \int_{D}\int_{D} K(u,v)\,\overline{\varphi_{k}(u)}\,\varphi_{\ell}(v)\,\mathrm{d}u\,\mathrm{d}v\), \(k,\ell=1,\ldots,p\), as well as the fact that \(\phi_{j}^{p}=\sum_{\ell=1}^{p}\langle \phi_{j}^{p},\varphi_{\ell}\rangle \varphi_{\ell}\). Therefore

$$ \lambda^{p}_{j} \int_{D} \phi^{p}_{j}(v)\, \overline{g(v)} \,\mathrm{d}v = \int_{D}\int_{D} K(u,v)\, \phi^{p}_{j}(u)\, \overline{g(v)} \,\mathrm{d}u\,\mathrm{d}v, \quad \forall g\in\mathcal{G}_{p}. $$

Since also \(\lambda_{j} \int_{D} \phi_{j}(v)\, \overline{f(v)}\,\mathrm{d}v = \int_{D}\int_{D} K(u,v)\, \phi_{j}(u)\, \overline{f(v)}\,\mathrm{d}u\,\mathrm{d}v\) for all \(f\in\mathcal{L}_{2}(D;\mathbb{C})\), by the approximation properties of the Galerkin method [10], we have \(\lambda_{j}^{p}\leq \lambda_{j}\); moreover, there exist \(C\) (independent of \(p\)) and \(p_{0}\) such that for any \(p\geq p_{0}\) and \(j=1,\ldots,m\), we have

$$ \lambda_{j} - \lambda_{j}^{p} \leq C \lambda_{j}^{2} \epsilon_{p}^{2}, $$
(25)
$$ \|\phi_{j} - \phi_{j}^{p}\| \leq C \epsilon_{p}. $$
(26)

Now, let \(e_{j}:=(\langle \phi_{j},\varphi_{1}\rangle,\langle \phi_{j},\varphi_{2}\rangle,\ldots)\), \(E_{m}:=(e_{1},\ldots,e_{m})\in\mathbb{C}^{\infty\times m}\) and \(E_{m}^{p}:=(e_{1}^{p},\ldots,e_{m}^{p})\in\mathbb{C}^{p\times m}\). Let \(Q_{p}\in\mathbb{C}^{p\times\infty}\) denote the projection operator whose first \(p\) columns constitute the identity \(I_{p}\) and whose remaining entries equal zero, and let \(\Theta(E_{m}^{p},Q_{p}E_{m})\) denote the \(m\times m\) diagonal matrix whose \(j\)th diagonal entry is the arccosine of the \(j\)th singular value of \((E_{m}^{p})^{\top} Q_{p} E_{m}\). Observe that \(\langle \phi_{j},\phi_{k}^{p}\rangle=\sum_{\ell=1}^{p} \langle \phi_{j},\varphi_{\ell}\rangle \langle \phi_{k}^{p},\varphi_{\ell}\rangle = (e_{k}^{p})^{\top} Q_{p} e_{j}\). Therefore, we have

$$ \begin{aligned} \sin\angle(\mathcal{E}_{m},\mathcal{E}^{p}_{m}) &= \|\sin\Theta(E_{m}^{p},Q_{p}E_{m})\|_{\mathrm{op}} \leq \frac{1}{\sqrt{2}}\,\|E_{m}^{p} - Q_{p}E_{m}\|_{\mathrm{F}} \\ &\leq \sqrt{\frac{m}{2}} \max_{j=1,\ldots,m} \|e_{j}^{p} - Q_{p}e_{j}\|_{2} \leq \sqrt{\frac{m}{2}} \max_{j=1,\ldots,m} \|\phi_{j}^{p} - \phi_{j}\| \leq C\sqrt{\frac{m}{2}}\,\epsilon_{p}, \end{aligned} $$

where in the last inequality we used (26). To conclude part (a), it remains to upper-bound \(\sin\angle(\mathcal{E}^{p}_{m},\hat{\mathcal{E}}^{p}_{m})\). Similarly as above, let \(\hat{E}_{m}^{p}:=(\hat{e}_{1}^{p},\ldots,\hat{e}_{m}^{p})\in\mathbb{C}^{p\times m}\), and let \(\Theta(\hat{E}_{m}^{p},E_{m}^{p})\) denote the \(m\times m\) diagonal matrix whose \(j\)th diagonal entry is the arccosine of the \(j\)th singular value of \((\hat{E}_{m}^{p})^{\top} E_{m}^{p}\). Since \(\langle \phi^{p}_{j},\hat{\phi}^{p}_{k}\rangle = (\hat{e}^{p}_{k})^{\top} e^{p}_{j}\), we have \(\sin\angle(\mathcal{E}^{p}_{m},\hat{\mathcal{E}}^{p}_{m}) = \|\sin\Theta(\hat{E}_{m}^{p},E_{m}^{p})\|_{\mathrm{op}}\), and since \(\Sigma_{X} e^{p}_{j}=\lambda^{p}_{j} e^{p}_{j}\), we have \(\Sigma_{Y} e_{j}^{p}=(\lambda_{j}^{p}+\tilde\sigma^{2})e_{j}^{p}\). Thus, by the Davis–Kahan theorem [20] and Weyl's inequality [46], provided \(\|\hat{\Sigma}_{Y} - \Sigma_{Y}\|_{\mathrm{op}}<(\lambda_{m}^{p}-\lambda_{m+1}^{p})/2\) holds, we have

$$ \sin\angle(\mathcal{E}^{p}_{m},\hat{\mathcal{E}}^{p}_{m}) \leq \frac{\sqrt{m}}{|\lambda_{m}^{p}+\tilde\sigma^{2}-\lambda_{m+1}(\hat{\Sigma}_{Y})|}\, \|\hat{\Sigma}_{Y} - \Sigma_{Y}\|_{\mathrm{op}} \leq \frac{2\sqrt{m}}{\lambda_{m}^{p}-\lambda_{m+1}^{p}}\, \|\hat{\Sigma}_{Y} - \Sigma_{Y}\|_{\mathrm{op}}. $$
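The Davis–Kahan step can be illustrated numerically: for a symmetric matrix with an eigengap below the \(m\)th eigenvalue and a small symmetric perturbation, the sine of the largest principal angle between the top-\(m\) eigenspaces is controlled by \(2\|E\|_{\mathrm{op}}\) over the gap. The spectrum and perturbation size below are arbitrary illustrative choices, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p, m = 12, 3

# Hypothetical covariance with a clear eigengap after the m-th eigenvalue.
evals = np.array([10.0, 8.0, 6.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2])
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = Q @ np.diag(evals) @ Q.T

# Small symmetric perturbation, playing the role of Sigma_Y_hat - Sigma_Y.
E = rng.standard_normal((p, p))
E = 0.1 * (E + E.T) / 2
Sigma_hat = Sigma + E

def top_eigenspace(M, m):
    w, V = np.linalg.eigh(M)                  # eigenvalues in ascending order
    return V[:, np.argsort(w)[::-1][:m]]      # columns: top-m eigenvectors

V = top_eigenspace(Sigma, m)
V_hat = top_eigenspace(Sigma_hat, m)

# Sine of the largest principal angle between the two m-dimensional subspaces.
s = np.linalg.svd(V.T @ V_hat, compute_uv=False)
sin_theta = np.sqrt(max(0.0, 1.0 - s.min() ** 2))

gap = evals[m - 1] - evals[m]                 # lambda_m - lambda_{m+1}
bound = 2 * np.linalg.norm(E, 2) / gap        # Davis-Kahan-type bound
```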

By a result of [33], there exists \(\tilde{C}\) such that for any \(\delta\geq\exp(-n)\), the inequality

$$ \|\hat{\Sigma}_{Y} - \Sigma_{Y}\|_{\mathrm{op}} \leq \tilde{C}\,(\lambda_{1}^{p}+\tilde\sigma^{2})\bigl(\sqrt{p/n}+\sqrt{\log(1/\delta)/n}\bigr) $$

holds with probability at least \(1-\delta\), and thus, if \(2\tilde{C}(\lambda_{1}^{p}+\tilde\sigma^{2})\bigl(\sqrt{p/n}+\sqrt{\log(1/\delta)/n}\bigr)<\lambda_{m}^{p}-\lambda_{m+1}^{p}\), then

$$ \sin\angle(\mathcal{E}^{p}_{m},\hat{\mathcal{E}}^{p}_{m}) \leq 2\tilde{C}\sqrt{m}\,(\lambda_{m}^{p}-\lambda_{m+1}^{p})^{-1}(\lambda_{1}^{p}+\tilde\sigma^{2})\bigl(\sqrt{p/n}+\sqrt{\log(1/\delta)/n}\bigr) $$

holds with probability at least \(1-2\delta\). Moreover, due to (25) and the fact that \(\lambda^{p}_{j}\leq\lambda_{j}\), we have \(\lambda_{m}^{p}-\lambda_{m+1}^{p} \geq \lambda_{m}-\lambda_{m+1} - C\lambda_{m}^{2}\epsilon_{p}^{2}\) and \(\lambda^{p}_{1}+\tilde\sigma^{2} \leq \lambda_{1}+\tilde\sigma^{2}\), so result (a) follows. For part (b), since \(\mu_{p}=Q_{\mathcal{G}_{p}}\mu = (\varphi_{1},\ldots,\varphi_{p})\mu_{X} = (\varphi_{1},\ldots,\varphi_{p})\mu_{Y}\), we have

$$ \begin{aligned} \|\mu-\hat\mu_{p}\| &\leq \|\mu-\mu_{p}\| + \|\mu_{p}-\hat\mu_{p}\| = \|Q_{\mathcal{G}_{p}^{\perp}}\mu\| + \|\hat{\mu}_{Y}-\mu_{Y}\|_{2} \\ &\leq \|Q_{\mathcal{G}_{p}^{\perp}}\mu\| + \sqrt{n^{-1}\operatorname{Tr}(\Sigma_{X}+\tilde\sigma^{2}I_{p})} + \sqrt{2n^{-1}(\lambda_{1}^{p}+\tilde\sigma^{2})\log(1/\delta^{\prime})}, \end{aligned} $$

with probability at least \(1-\delta^{\prime}\), where in the last inequality we used a result of [32]. The final result then follows from \(\lambda^{p}_{1}\leq\lambda_{1}\). □
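As a numerical illustration of the covariance concentration of [33] invoked in this proof, the operator-norm error of the sample covariance shrinks at roughly the \(\sqrt{p/n}\) rate as \(n\) grows. The synthetic spectrum and sample sizes below are arbitrary illustrative choices, not the constants of the theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
p = 20
evals = 1.0 / np.arange(1, p + 1)              # decaying spectrum, lambda_1 = 1
Q, _ = np.linalg.qr(rng.standard_normal((p, p)))
Sigma = Q @ np.diag(evals) @ Q.T
L = np.linalg.cholesky(Sigma)

errs = []
for n in (200, 3200):
    X = rng.standard_normal((n, p)) @ L.T      # n Gaussian samples with covariance Sigma
    S = X.T @ X / n                            # sample covariance (mean known to be zero)
    errs.append(np.linalg.norm(S - Sigma, 2))  # operator-norm error
```

Increasing \(n\) by a factor of 16 should shrink the error by roughly a factor of 4, consistent with the \(\sqrt{p/n}+\sqrt{\log(1/\delta)/n}\) bound.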

Proof of Theorem 2

First observe that for any \(f\in\mathcal{L}_{2}(D;\mathbb{C})\) we have

$$ \begin{aligned} \|Q_{\mathcal{F}_{q}^{\perp}} f\| &\leq \|Q_{\mathcal{F}_{q}^{\perp}} Q_{\mathcal{E}_{m}} f\| + \|Q_{\mathcal{F}_{q}^{\perp}}(f-Q_{\mathcal{E}_{m}}f)\| \\ &\leq \|f\| \sup_{\{g\in\mathcal{E}_{m}:\,\|g\|=1\}} \|Q_{\mathcal{F}_{q}^{\perp}} g\| + \|Q_{\mathcal{E}_{m}^{\perp}} f\|. \end{aligned} $$

Since \(\cos\angle(\hat{\mathcal{E}}^{p}_{m},\mathcal{F}_{q})\geq 1 - \sup_{\{f\in\hat{\mathcal{E}}^{p}_{m}:\,\|f\|=1\}} \|Q_{\mathcal{F}_{q}^{\perp}} f\|\), the above inequality yields

$$ \begin{aligned} \cos\angle(\hat{\mathcal{E}}^{p}_{m},\mathcal{F}_{q}) &\geq 1 - \sup_{\{f\in\mathcal{E}_{m}:\,\|f\|=1\}} \|Q_{\mathcal{F}_{q}^{\perp}} f\| - \sup_{\{f\in\hat{\mathcal{E}}^{p}_{m}:\,\|f\|=1\}} \|Q_{\mathcal{E}_{m}^{\perp}} f\| \\ &= 1 - \sin\angle(\mathcal{E}_{m},\mathcal{F}_{q}) - \sin\angle(\mathcal{E}_{m},\hat{\mathcal{E}}_{m}^{p}). \end{aligned} $$
(27)

Define the event \(\Omega:=\{\sin\angle(\mathcal{E}_{m},\hat{\mathcal{E}}_{m}^{p})\leq\tilde\epsilon_{mpn\delta}\}\), which by Lemma 1(a) has probability at least \(1-\delta\). Due to (27) and since \(\sin\angle(\mathcal{E}_{m},\mathcal{F}_{q})<1-\tilde\epsilon_{mpn\delta}\), on \(\Omega\) we have \(\cos\angle(\hat{\mathcal{E}}^{p}_{m},\mathcal{F}_{q})>0\). Now define \(\tilde{F}_{0}=\sum_{j=1}^{m}\tilde\alpha_{j}\hat\phi_{j}^{p}\) such that

$$ \{\tilde\alpha_{j}\}_{j=1}^{m} := \operatorname*{argmin}_{\{\alpha_{j}\}_{j=1}^{m}\in\mathbb{C}^{m}} \sum_{k=1}^{q} \Bigl| \langle F-\hat\mu_{p},\psi_{k}\rangle - \sum_{j=1}^{m} \alpha_{j} \langle\hat\phi^{p}_{j},\psi_{k}\rangle \Bigr|^{2}. $$
(28)

On Ω, by the GS result (5) and bound (27), we have

$$ \|\tilde{F}_{0} - (F-\hat\mu_{p})\| \leq \frac{\|Q_{\hat{\mathcal{E}}_{m}^{p}}(F-\hat\mu_{p}) - (F-\hat\mu_{p})\|}{1 - \sin\angle(\mathcal{E}_{m},\mathcal{F}_{q}) - \tilde\epsilon_{mpn\delta}}. $$
(29)

Observe that

$$ \begin{aligned} \|Q_{\hat{\mathcal{E}}_{m}^{p\perp}}(F-\hat\mu_{p})\| &\leq \|Q_{\hat{\mathcal{E}}_{m}^{p\perp}}(F-\mu)\| + \|Q_{\hat{\mathcal{E}}_{m}^{p\perp}}(\mu-\hat\mu_{p})\| \\ &\leq \|Q_{\hat{\mathcal{E}}_{m}^{p\perp}}Q_{\mathcal{E}_{m}}(F-\mu)\| + \|Q_{\hat{\mathcal{E}}_{m}^{p\perp}}Q_{\mathcal{E}_{m}^{\perp}}(F-\mu)\| + \|\mu-\hat\mu_{p}\| \\ &\leq \sin\angle(\mathcal{E}_{m},\hat{\mathcal{E}}_{m}^{p})\,\|F-\mu\| + \|Q_{\mathcal{E}_{m}^{\perp}}(F-\mu)\| + \|\mu-\hat\mu_{p}\|. \end{aligned} $$
(30)

Define the event \(\Omega^{\prime}:=\{\|\mu-\hat\mu_{p}\|\leq\bar\epsilon_{np\delta^{\prime}}\}\), which by Lemma 1(b) has probability at least \(1-\delta^{\prime}\). Then, due to (29) and (30), on \(\Omega\cap\Omega^{\prime}\) we have

$$ \|\tilde{F}_{0} - (F-\hat\mu_{p})\| \leq \frac{\tilde\epsilon_{mpn\delta}\,\|F-\mu\| + \|Q_{\mathcal{E}_{m}^{\perp}}(F-\mu)\| + \bar\epsilon_{np\delta^{\prime}}}{1 - \sin\angle(\mathcal{E}_{m},\mathcal{F}_{q}) - \tilde\epsilon_{mpn\delta}}. $$
(31)

Finally, define \(\Omega^{\prime\prime}:=\bigl\{\|\hat\alpha-\tilde\alpha\|_{2} \leq \sec\angle(\hat{\mathcal{E}}^{p}_{m},\mathcal{F}_{q})\,\sigma\sqrt{(2m+2\log(1/\delta^{\prime\prime}))/(qr_{1})}\bigr\}\), where the vector \(\tilde\alpha=(\tilde\alpha_{1},\ldots,\tilde\alpha_{m})^{\top}\) is defined as in (28) and \(\hat\alpha=(\hat\alpha_{1},\ldots,\hat\alpha_{m})^{\top}\) is as in (19). On \(\Omega\cap\Omega^{\prime}\), the probability of \(\Omega^{\prime\prime}\) conditional on \(F_{1},\ldots,F_{n},Z_{1},\ldots,Z_{n}\) (so that we are in the setting of a fixed design matrix) is at least \(1-\delta^{\prime\prime}\), by the result of [30]. Also, by (31) and

$$ \|\hat{F}-F\| \leq \|\hat\alpha-\tilde\alpha\|_{2} + \|\tilde{F}_{0} - (F-\hat\mu_{p})\|, $$

the required bound holds on \(\Omega\cap\Omega^{\prime}\cap\Omega^{\prime\prime}\), which has probability at least \(1-\delta-\delta^{\prime}-\delta^{\prime\prime}\), because \(\mathbb{P}(\Omega\cap\Omega^{\prime}\cap\Omega^{\prime\prime}) \geq 1 - \mathbb{P}\bigl((\Omega\cap\Omega^{\prime})^{\mathrm{c}}\bigr) - \mathbb{P}\bigl((\Omega^{\prime\prime})^{\mathrm{c}} \mid \Omega\cap\Omega^{\prime}\bigr) \geq 1-\delta-\delta^{\prime}-\delta^{\prime\prime}\). □
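In coordinates, the least-squares problem (28) is an ordinary least-squares fit with the \(q\times m\) cross-Gram matrix \(G\), \(G_{kj}=\langle\hat\phi_{j}^{p},\psi_{k}\rangle\). The following minimal sketch illustrates this on a grid, with hypothetical sampling functionals (low-order Fourier modes on \([0,1]\)) and hypothetical estimated components (orthonormalized Gaussian bumps); none of these choices come from the paper.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 512, endpoint=False)   # grid on [0, 1]
dt = t[1] - t[0]
m, q = 4, 12

# Hypothetical estimated components: orthonormalized smooth bumps.
raw = np.stack([np.exp(-((t - c) ** 2) / 0.01) for c in (0.2, 0.4, 0.6, 0.8)], axis=1)
Phi, _ = np.linalg.qr(raw)                        # 512 x m, orthonormal in l2
Phi = Phi / np.sqrt(dt)                           # orthonormal w.r.t. the L2 inner product

# Hypothetical sampling functionals: psi_k(t) = exp(2 pi i k t), k = 0, ..., q-1.
Psi = np.exp(2j * np.pi * np.outer(np.arange(q), t))   # q x 512

F = Phi @ np.array([1.0, -0.5, 0.25, 2.0])        # a signal inside the learned span
y = np.conj(Psi) @ F * dt                         # measurements <F, psi_k> (Riemann sums)

G = np.conj(Psi) @ Phi * dt                       # cross-Gram matrix, G[k, j] = <phi_j, psi_k>
alpha, *_ = np.linalg.lstsq(G, y, rcond=None)     # the coordinate form of (28)
F_hat = Phi @ alpha

rel_err = np.linalg.norm(F_hat - F) / np.linalg.norm(F)
```

Because the signal lies exactly in the span of the components and \(G\) has full column rank, the fit here is exact up to numerical precision; the angle condition in the theorem governs how stable this fit remains in general.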


About this article


Cite this article

Gataric, M. High-resolution signal recovery via generalized sampling and functional principal component analysis. Adv Comput Math 47, 84 (2021). https://doi.org/10.1007/s10444-021-09908-0
