Abstract
We analyze the effects of rounding errors in the nodes on polynomial barycentric interpolation. These errors are particularly relevant for the first barycentric formula with the Chebyshev points of the second kind. Here, we propose a method for reducing them.
References
Berrut, J.-P.: Rational functions for guaranteed and experimentally well conditioned global interpolation. Comput. Math. Appl. 15, 1–16 (1988)
Bos, L., De Marchi, S., Hormann, K.: On the Lebesgue constant of Berrut’s rational interpolant at equidistant nodes. J. Comput. Appl. Math. 236(4), 504–510 (2011)
Bos, L., De Marchi, S., Hormann, K., Sidon, J.: Bounding the Lebesgue constant for Berrut’s rational interpolant in general nodes. J. Approx. Theory 169, 7–22 (2013)
Dekker, T.J.: A floating-point technique for extending the available precision. Numer. Math. 18, 224–242 (1971)
Floater, M., Hormann, K.: Barycentric rational interpolation with no poles and high rates of approximation. Numer. Math. 107, 315–331 (2007)
Higham, N.J.: The numerical stability of barycentric Lagrange interpolation. IMA J. Numer. Anal. 24, 547–556 (2004)
Higham, N.: Accuracy and Stability of Numerical Algorithms, 2nd edn. SIAM, Philadelphia (2002)
Mascarenhas, W.F.: The stability of barycentric interpolation at the Chebyshev points of the second kind. Numer. Math. (2014). Available online. doi:10.1007/s00211-014-0612-6
Mascarenhas, W.F., de Camargo, A.P.: On the backward stability of the second barycentric formula for interpolation. Dolomites Res. Notes Approx. 7, 1–12 (2014)
Mascarenhas, W.F., de Camargo, A.P.: The effects of rounding errors on barycentric interpolation (extended version, with complete proofs). arXiv:1309.7970v3 [math.NA], 12 Jan 2016
Powell, M.J.D.: Approximation Theory and Methods. Cambridge University Press, Cambridge (1981)
Salzer, H.: Lagrangian interpolation at the Chebyshev points \(x_{n,\nu } \equiv \cos ({\nu \pi /n}),\nu = 0(1)n\); some unnoted advantages. Comput. J. 15(2), 156–159 (1972)
Taylor, W.J.: Method of Lagrangian curvilinear interpolation. J. Res. Nat. Bur. Stand. 35, 151–155 (1945)
Werner, W.: Polynomial interpolation: Lagrange versus Newton. Math. Comput. 43, 205–207 (1984)
Additional information
Walter F. Mascarenhas, supported by Grant 2013/109162 from Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), and André Pierro de Camargo, supported by Grant 14225012012-0 from CNPq.
Appendices
Appendix 1: Proofs
This appendix proves the main results in the previous sections. We state four more lemmas and prove the main theorems; the lemmas and the remaining theorems are proved in the online version [10]. The proof of Lemma 3 in the online version also shows that
and the proof of Lemma 4 shows that
Lemma 9
Given a vector \(\mathbf {v} = \left( v_0, v_1, \dots , v_n \right) ^t \in {\mathbb {R}}^{n+1},\) define \(s := \sum _{k=0}^n v_k,\) \(\ a := \sum _{k = 0}^n \left| v_k\right| ,\)
If \(s_- < 1\) then
and
Moreover, if \(s_- < 1\) and \(s < 1\) then
Finally, if \(a < 1,\) then, for \(0 \le m \le n,\) the product
satisfies
Lemma 10
For \(1 \le k \le n/2,\) the Chebyshev points of the second kind satisfy
and
In particular, since (65) and (66) decrease with k, for \(1 \le k \le n/2\)
Moreover, if \(n \ge 10\) then, for \(0 \le k \le n,\)
Lemma 11
If \(x^- = x_0,\) \(x^+ = x_n\) and
is such that \(2 \mu \left\| {\hat{\mathbf {x}}} - \mathbf {x} \right\| _\infty < 1,\) then \(\delta _{jk},\) \(\varDelta _k\) in (13)–(15) and \(a_k = \sum _{j \ne k} \delta _{jk}\) satisfy
and, for \(0 \le k < n,\)
for
Lemma 12
Consider nodes \(\mathbf {x} = {\left\{ x_0,\ldots ,x_n \right\} }\) and an interval \([x^-,x^+]\). Given \(0 \le k \le n,\) define \(\mathbf {x}' := {\left\{ x_0,\ldots ,x_n \right\} } {\setminus } {\left\{ x_k \right\} }\). If \(x \in [x^-,x^+]\) then
1.1 Proofs of the main results
Proof of Theorem 1
Note that, for \(0 \le k \le n\),
and Eq. (18) leads to
for all \(x \in [\hat{x}^-,\hat{x}^+]\). Similarly,
and
Equation (29) follows from (71), (72) and the triangle inequality. Equations (26) and (60) lead to
Equation (31) follows from the last two bounds and (29). \(\square \)
Proof of Theorem 2
Given \(\hat{x} \in [\hat{x}^-,\hat{x}^+]\), the arguments used in the proof of Theorem 3.2 in [6] show that the value \(p\) computed by the first formula with the rounded nodes \({\hat{\mathbf {x}}}\) satisfies
where the \(\langle {m}\rangle \) are Stewart's relative error counters described in [7]. Lemma 2 yields \(\mathbf {x}\) satisfying (32) and \(\beta _0, \dots , \beta _n\) such that \(\left| \beta _k\right| \le \varDelta / \left( 1 - \varDelta \right) \) and
Combining this equation with (74) we obtain
for \(\tilde{y}_k\) in (33) and Theorem 2 follows from Lemma 3.1 in [7], which states that \(\left| \langle {m}\rangle - 1\right| \le m \varepsilon / \left( 1 - m \varepsilon \right) \) when \(m \varepsilon < 1\). \(\square \)
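For the reader's convenience, and as we understand the notation of [7] (the definition is not restated in this paper), a counter \(\langle {m}\rangle \) stands for a generic product of \(m\) rounding factors,
\[
\langle {m}\rangle \;=\; \prod _{i=1}^{m} \left( 1 + \delta _i \right) ^{\rho _i}, \qquad \rho _i = \pm 1, \quad \left| \delta _i\right| \le \varepsilon ,
\]
so that different occurrences of \(\langle {m}\rangle \) may denote different products, all of them bounded as in Lemma 3.1 of [7].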
Proof of Theorem 3
Taking \(\mathbf {x} = \hat{\mathbf {x}}\), and using Eqs. (1), (16) and (75), with \(w_k = {\lambda _k}\left( \hat{\mathbf {x}}\right) \) and \(\beta _k = 0\), and the fact that the first formula with weights \({\lambda }\!\!\left( {\hat{\mathbf {x}}}\right) \) interpolates \(\mathbf {y}\) at the nodes \({\hat{\mathbf {x}}}\), we obtain
for
The definition of \(\langle {3 n + 5}\rangle _k\) in [7] and Lemma 9 show that
and lead to (36); (35) follows from (18). Finally, (37) follows from (76) and Lemma 12. \(\square \)
Proof of Theorem 4
Let us write \(x := \hat{x}\), consider the function
and define \(\tilde{\mathbf {w}} := {\lambda }\!\left( {\hat{\mathbf {x}}}\right) \). By the definition of \(z_k\) in (16), \(\hat{\mathbf {w}} = \tilde{\mathbf {w}} + \tilde{\mathbf {w}}\mathbf {z}\), where \(\tilde{\mathbf {w}}\mathbf {z}\) is the component-wise product of \({\tilde{\mathbf {w}}}\) and \(\mathbf {z}\), i.e., \((\tilde{w}z)_k = \tilde{w}_k \, z_k\). It follows that, for \(\mathbf {u} \in {\mathbb {R}}^{n+1}\),
and we can write the difference \({q}\left( x,{\hat{\mathbf {x}}},\mathbf {y},\hat{\mathbf {w}}\right) - {q}\left( x,{\hat{\mathbf {x}}},\mathbf {y},\tilde{\mathbf {w}}\right) \) in (38) as
for \(N := {g}\left( x,\mathbf {y},\tilde{\mathbf {w}}\right) \), \(\varDelta \! N := {g}\left( x,\mathbf {yz},\tilde{\mathbf {w}}\right) \), \(D := {g}\left( x,\mathbbm {1}_{} ,\tilde{\mathbf {w}}\right) \), and \(\varDelta \! D := {g}\left( x,\mathbf {z},\tilde{\mathbf {w}}\right) \). Since \(\frac{\varDelta \! N}{D} - \frac{N}{D} \frac{\varDelta \! D}{D}\) is equal to the Error Polynomial \( {E}\left( x\, ;{\hat{\mathbf {x}}}, \mathbf {y}, \mathbf {z}\right) \) in (7), we get
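For completeness, the elementary identity behind this step is the following (we state it in the notation above, under the assumption, consistent with the definitions used in this proof, that \({q}\left( x,{\hat{\mathbf {x}}},\mathbf {y},\mathbf {w}\right) = {g}\left( x,\mathbf {y},\mathbf {w}\right) / {g}\left( x,\mathbbm {1}_{},\mathbf {w}\right) \) with \(g\) linear in the weights, so that \({g}\left( x,\mathbf {y},\hat{\mathbf {w}}\right) = N + \varDelta \! N\) and \({g}\left( x,\mathbbm {1}_{},\hat{\mathbf {w}}\right) = D + \varDelta \! D\)):
\[
\frac{N + \varDelta \! N}{D + \varDelta \! D} - \frac{N}{D} \;=\; \frac{1}{1 + \varDelta \! D / D} \left( \frac{\varDelta \! N}{D} - \frac{N}{D}\, \frac{\varDelta \! D}{D} \right) ,
\]
which follows by putting both fractions over the common denominator \(D \left( D + \varDelta \! D \right) \).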
The ratio \(\frac{\varDelta \! D}{D}\) is equal to \({P}\left( x, \hat{\mathbf {x}}, \mathbf {z}\right) \). Therefore, the denominator of (77) satisfies
The forward bound (38) follows from (77) and (78) and we are done. \(\square \)
Proof of Theorem 5
Expanding the polynomials \({P}\left( x;{\hat{\mathbf {x}}},\mathbf {yz}\right) \) in (7) in Lagrange’s basis we obtain
This inequality yields (47). In order to prove (48), note that, according to Theorem 16.5 on page 196 of [11] (Jackson's Theorem), there exists a polynomial \(P^*\) of degree n such that
Using the well known bound
and (79) we deduce that
and (48) follows from this last bound and (47). \(\square \)
Proof of Theorem 6
The definition of Lagrange polynomial (19) leads to
In the notation of [12], we have \({\lambda _k}\left( \mathbf {x}^c\right) = 1/{\phi _{n+1}}'\!\left( x_k^c\right) \) and from its equations (5) and (6) we obtain \({\lambda _k}\left( \mathbf {x}^c\right) \le 2^{n-1}/n\). At the top of the second column on page 156 of [12] we learn that \(\left| \prod _{k = 0}^n \left( x - x^c_k \right) \right| \le 2^{1 - n}\) for \(x \in [-1,1]\), and combining these bounds we obtain that
The bounds (57)–(59) and (80) and Lemma 7 lead to
The bounds (57) and (26) show that, in Salzer’s case, for \(A = 0.67667\) and \(B = 1.0236\),
because \(\left( x + \frac{B}{A} \right) \left( x + \frac{B + 1}{A} \right) \le \left( 2.9 + x \right) ^2\) for all \(x \ge 0\). The last two bounds, combined with Theorem 5, yield
and Lemma 3 and Theorem 4 lead to
and this proves (50). Equation (51) follows from this bound, \(\left\| \hat{\mathbf {x}} - \mathbf {x} \right\| _\infty \le 4.6\times 10^{-6}\), \(n \ge 10\) and (31). \(\square \)
Proof of Theorem 7
The hypothesis of this theorem asks for \(n \le 10^{6}\). Therefore, \(2 n \le 2 \times 10^{6}\) and the bounds (44) and (26) for \(2 n + 1\) nodes yield
for \(A = 0.67667\) and \(B = 1.0236\). Theorem 4, (57) and (26) for n nodes lead to an upper bound of
and this leads to (52). Finally, (53) follows from (82), \(\left\| \hat{\mathbf {x}} - \mathbf {x} \right\| _\infty \le 4.6\times 10^{-6}\) and Theorem 1. \(\square \)
Appendix 2: Experimental details
The details regarding the software and hardware used in the experiments are described in the online version. Here we describe the sets of points and bins used in the experiments. The sets \(X_{-1,n}\) and \(X_{0,n}\) in Tables 1, 4 and 5 contain \(10^5\) points each. These points are distributed in 100 intervals \(\left( x_k,x_{k+1} \right) \): \(X_{-1,n}\) uses \(0 \le k < 100\) and \(X_{0,n}\) uses \(n/2 - 100 \le k < n/2\). In each interval \(\left( x_k,x_{k+1} \right) \) we picked the 200 floating point numbers to the right of \(x_k\) and the 200 floating point numbers to the left of \(x_{k+1}\); the remaining 600 points were equally spaced in \(\left( x_k,x_{k+1} \right) \). The step II errors in Tables 1 and 5 were estimated by performing step II in double precision, evaluating step III with gcc 4.8.1's __float128 arithmetic, and comparing the result with the interpolated function evaluated in __float128 arithmetic.
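For concreteness, here is a minimal C++ sketch of the point selection in one interval; the function name and structure are ours and are not taken from the original test driver.

#include <cmath>
#include <vector>

// Trial points for one interval (x_left, x_right): the 200 doubles
// immediately to the right of x_left, the 200 doubles immediately to
// the left of x_right, and 600 equally spaced interior points.
std::vector<double> trial_points(double x_left, double x_right) {
  std::vector<double> pts;
  double t = x_left;
  for (int i = 0; i < 200; ++i) {          // nearest neighbors of x_left
    t = std::nextafter(t, x_right);
    pts.push_back(t);
  }
  t = x_right;
  for (int i = 0; i < 200; ++i) {          // nearest neighbors of x_right
    t = std::nextafter(t, x_left);
    pts.push_back(t);
  }
  for (int i = 1; i <= 600; ++i)           // equally spaced interior points
    pts.push_back(x_left + i * (x_right - x_left) / 601.0);
  return pts;
}

Calling such a routine for the 100 intervals of \(X_{-1,n}\) or \(X_{0,n}\) yields the \(10^5\) points per set mentioned above.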
The trial points in Fig. 2 were chosen as follows: for each n we considered the relative errors \(z_k^s\) and \(z_k^r\) for Salzer's weights and the numerical weights. We then picked 4 groups of ten indexes: the indexes corresponding to the ten largest values of \(y_k z_k^s\), the indexes corresponding to the ten smallest values of \(y_k z_k^s\), and the analogous 20 indexes for \(z_k^r\). We then removed the repeated indexes and obtained a vector with \(n_i\) indexes. For each index \(k > 0\) we picked the 2000 floating point numbers to the left of \(\hat{x}_k\), and for each index \(k < n\) we picked the 2000 floating point numbers to the right of \(\hat{x}_k\). We then considered the \(n_i\) intervals of the form \([x_{k-1},x_{k}]\) for \(k = 1, n/n_i, 2 n / n_i, \dots \) and picked 2000 equally spaced points in each of these intervals. The first formula was evaluated at these points as in Subsection 3.1 of [8].
The \(b_n\) in Table 3 were computed using __float128 arithmetic, from nodes \(\hat{x}_k\) obtained from the formula \(x_k = {\sin }\left( \frac{2k - n}{2n} \pi \right) \) using IEEE-754 double arithmetic. The versions with 3 bins in Tables 4, 5 and 6 are as in Fig. 6. The versions with 39 bins consider the central bin \([-2^{-10},2^{-10}]\), with base \(b_{20} = 0\), the bins \([-1,2^{-10} - 1)\), \([2^{-k} - 1, 2^{1 - k} - 1)\) for \(2 \le k \le 10\) and \([- 2^{-k}, -2^{-k - 1})\) for \(1 \le k \le 9\), with bases at their left endpoints. The remaining 19 bins and bases were obtained by reflection around 0. The versions with 79 bins consider the central bin \([-2^{-20},2^{-20}]\), with base \(b_{40} = 0\), the bins \([-1,2^{-20} - 1)\), \([2^{-k} -1, 2^{1 - k}-1)\) for \(2 \le k \le 20\) and \([-2^{-k}, -2^{-k-1})\) for \(1 \le k \le 19\), with bases at their left endpoints. The remaining 39 bins and bases were obtained by reflection.
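Returning to the node computation mentioned at the beginning of this paragraph, the following is a minimal C++ sketch of the formula \(x_k = {\sin }\left( \frac{2k - n}{2n} \pi \right) \) in IEEE-754 double arithmetic; the function name is ours.

#include <cmath>
#include <vector>

// Chebyshev points of the second kind, computed as x_k = sin((2k - n) pi / (2n))
// for k = 0, ..., n, so that x_0 = -1 and x_n = 1.
std::vector<double> chebyshev_second_kind(int n) {
  const double pi = 3.14159265358979323846;
  std::vector<double> x(n + 1);
  for (int k = 0; k <= n; ++k)
    x[k] = std::sin((2.0 * k - n) * pi / (2.0 * n));
  return x;
}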
Cite this article
Mascarenhas, W.F., de Camargo, A.P. The effects of rounding errors in the nodes on barycentric interpolation. Numer. Math. 135, 113–141 (2017). https://doi.org/10.1007/s00211-016-0798-x