
Stochastic analysis of the least mean fourth algorithm for non-stationary white Gaussian inputs

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

This paper studies the stochastic behavior of the least mean fourth (LMF) algorithm for a system identification framework when the input signal is a non-stationary white Gaussian process. The unknown system is modeled by the standard random-walk model. A theory is developed which is based upon the instantaneous average power and the instantaneous average squared power in the adaptive filter taps. A recursion is derived for the instantaneous mean square deviation of the LMF algorithm. This recursion yields interesting results about the transient and steady-state behaviors of the algorithm with time-varying input power. The theory is supported by Monte Carlo simulations for sinusoidal input power variations.
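
A minimal simulation sketch of the setting studied in the paper is given below. All numerical values (filter length, step size, random-walk increment level, noise level, and the sinusoidal power profile) are assumptions chosen for illustration, not the values used in the paper: an LMF adaptive filter identifies a random-walk plant driven by a white Gaussian input whose power varies sinusoidally, and the mean-square deviation is estimated over an ensemble of runs.

```python
# Minimal LMF system-identification sketch with a random-walk plant and a
# sinusoidally varying input power. All parameter values are assumed for
# illustration and are not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
N, mu, sig_q, noise_std = 10, 5e-4, 1e-4, 0.05   # taps, step size, walk std, noise std (assumed)
runs, iters = 200, 4000                          # ensemble size and run length (assumed)

def input_std(n):
    # sinusoidal input power sigma_X^2(n) = 1 + 0.5*sin(2*pi*n/1000)  (assumed profile)
    return np.sqrt(1.0 + 0.5 * np.sin(2.0 * np.pi * n / 1000.0))

msd = np.zeros(iters)
for _ in range(runs):
    w_opt = rng.standard_normal(N) / np.sqrt(N)   # unknown system, initial value
    w = np.zeros(N)                               # adaptive filter weights
    x = np.zeros(N)                               # tap-input vector X(n)
    for n in range(iters):
        x = np.roll(x, 1)
        x[0] = input_std(n) * rng.standard_normal()          # non-stationary white Gaussian input
        d = w_opt @ x + noise_std * rng.standard_normal()    # plant output plus measurement noise
        e = d - w @ x                                         # estimation error
        w = w + mu * e**3 * x                                 # LMF weight update
        w_opt = w_opt + sig_q * rng.standard_normal(N)        # random-walk plant model
        v = w - w_opt                                         # weight-error vector V(n)
        msd[n] += v @ v
msd /= runs   # ensemble estimate of the mean-square deviation E[V^T(n)V(n)]
```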


Notes

  1. The independence of \(V(n)\) and \(X(n)\) and the Gaussian assumption for \(X(n)\) imply that \({X}^{T}(n)V(n)\), conditioned on \(V(n)\), is Gaussian. With the typically small step size, the recursion of the algorithm produces only small fluctuations in \({V}^{T}(n)V(n)\). Consequently, \({E}[\{{V}^{T}{(n)V(n)}\}^{p}] \approx \{{E}[{V}^{T}{(n)V(n)}]\}^{p}\) for \(p=1, 2\) and 3. This is the same approximation used in [10] to evaluate even-order moments of \(V(n)\) for the case of stationary \(X(n)\). As a consequence, the resulting theoretical recursions for the second-moment analysis in [10, Section III] are the same as those obtained using the Gaussian assumption for \({X}^{T}(n)V(n)\) in this paper.
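
The small-fluctuation approximation above can be probed numerically. The following minimal Monte Carlo sketch (all parameter values are assumptions for illustration, not values from the paper) runs an ensemble of LMF identification runs with a stationary unit-power white Gaussian input and a small step size, and compares \(E[\{V^{T}V\}^{p}]\) with \(\{E[V^{T}V]\}^{p}\) for \(p = 1, 2, 3\) at the final iteration.

```python
# Monte Carlo sketch (assumed parameters) probing E[(V^T V)^p] ~ (E[V^T V])^p,
# p = 1, 2, 3, over an ensemble of LMF runs with a small step size.
import numpy as np

rng = np.random.default_rng(0)
N, mu, noise_std = 8, 1e-3, 0.1       # filter length, step size, noise level (assumed)
runs, iters = 1000, 5000              # ensemble size and number of iterations (assumed)

w_opt = rng.standard_normal(N) / np.sqrt(N)   # one fixed unknown system
W = np.zeros((runs, N))                       # adaptive weights, one row per run
X = np.zeros((runs, N))                       # tap-input vectors, one row per run
for n in range(iters):
    X = np.roll(X, 1, axis=1)
    X[:, 0] = rng.standard_normal(runs)       # stationary white Gaussian input, unit power
    d = X @ w_opt + noise_std * rng.standard_normal(runs)   # plant output plus noise
    e = d - np.einsum('ij,ij->i', W, X)       # per-run estimation errors
    W += mu * (e ** 3)[:, None] * X           # LMF update applied to every run at once
V = W - w_opt                                 # weight-error vectors V(n)
vtv = np.einsum('ij,ij->i', V, V)             # V^T V for each run
for p in (1, 2, 3):
    print(p, np.mean(vtv ** p), np.mean(vtv) ** p)   # the two sides should be close
```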

References

  1. Haykin, S.: Adaptive Filter Theory, 4th edn. Prentice Hall, New Jersey (2002)

  2. Sayed, A.H.: Fundamentals of Adaptive Filtering. Wiley-Interscience, New York (2003)


  3. Walach, E., Widrow, B.: The least mean fourth (LMF) adaptive algorithm and its family. IEEE Trans. Inf. Theory IT–30, 275–283 (1984)


  4. Cho, S.H., Kim, S.D., Jen, K.Y.: Statistical convergence of the adaptive least mean fourth algorithm. In: Proceedings of the ICSP’96, pp. 610–613 (1996)

  5. Koike, S.: Stability conditions for adaptive algorithms with non-quadratic error criteria. In: Proceedings of EUSIPCO-2000, pp. 131–134, Tampere, Finland, September 5–8 (2000)

  6. Al-Naffouri, T., Sayed, A.: Transient analysis of adaptive filters. In: Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 6, pp. 3869–3872 (2001)

  7. Hübscher, P.I., Bermudez, J.C.M.: An improved statistical analysis of the least mean fourth (LMF) adaptive algorithm. IEEE Trans. Signal Process. 51(3), 664–671 (2003)


  8. Chan, M.K., Cowan, C.F.N.: Using a normalised LMF algorithm for channel equalisation with co-channel interference. In: Proceedings of EUSIPCO-2002, pp. 48–51, Toulouse, France, September 3–6 (2002)

  9. Nascimento, V.H., Bermudez, J.C.M.: Probability of divergence for the least-mean fourth algorithm. IEEE Trans. Signal Process. 54(4), 1376–1385 (2006)


  10. Hübscher, P.I., Bermudez, J.C.M., Nascimento, V.H.: A mean-square stability analysis of the least mean fourth adaptive algorithm. IEEE Trans. Signal Process. 55(8), 4018–4028 (2007)


  11. Zerguine, A., Chan, M.K., Al-Naffouri, T.Y., Moinuddin, M., Cowan, C.F.N.: Convergence and tracking analysis of a variable normalised LMF (XE-NLMF) algorithm. Signal Process. 89(5), 778–790 (2009)


  12. Eweda, E., Zerguine, A.: New insights into the normalization of the least mean fourth algorithm. Signal Image Video Process. doi:10.1007/s11760-011-0231-y (2011)

  13. Eweda, E., Zerguine, A.: A normalized least mean fourth algorithm with improved stability. In: Proceedings of the 44th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, pp. 1002–1005, November 7–10 (2010)

  14. Bershad, N., Bermudez, J.C.M.: Mean-square stability of the normalized least mean fourth algorithm for white Gaussian inputs. Digit. Signal Process. 21(6), 694–700 (2011)


  15. Eweda, E.: Global stabilization of the least mean fourth algorithm. IEEE Trans. Signal Process. 60(3), 1473–1477 (2012)


  16. Eweda, E.: Comparison of LMS and NLMS adaptive filters with a non-stationary input. In: Proceedings of the Asilomar Conference on Signals, Systems and Computers, November 7–10 (2010)

  17. Eweda, E.: Behavior of the least mean square algorithm with a periodically time-varying input power. Int. J. Adapt. Control Signal Process. doi:10.1002/acs.2286 (2012)

  18. Bershad, N.J., Bermudez, J.C.M.: Stochastic analysis of the LMS algorithm for non-stationary white Gaussian inputs. In: Proceedings of the 2011 IEEE Statistical Signal Processing Workshop (SSP), pp. 57–60, Nice, France, June 28–30 (2011)

  19. Michalowicz, J.V., Nichols, J.M., Bucholtz, F., Olson, C.C.: An Isserlis theorem for mixed Gaussian variables: application to the auto-bispectral density. J. Stat. Phys. 136, 89–102 (2009). doi:10.1007/s10955-009-9768-3


  20. Eweda, E., Bershad, N.J.: Stochastic analysis of a stable normalized least mean fourth algorithm for adaptive noise canceling with a white Gaussian reference. IEEE Trans. Signal Process. 60(12), 6235–6244 (2012)



Author information


Corresponding author

Correspondence to Eweda Eweda.

Additional information

This work was partially supported by CNPq under Grants No. 305377/2009-4 and 474735/2012-5.

Appendices

Appendix 1

Justification of (6) and (7)

Justification of (6):

We have

$$\begin{aligned} \hbox {Tr}\left\{ {{R}_{X} ( {n} ){K}_{VV} ( {n} )} \right\} =\sum _{{i=1}}^{N} {{k}_{ i} ( {n} ){r}_{i} ( {n} )} \end{aligned}$$
(45)

where \({k}_{i} ( {n} )\) and \({r}_{i} ( {n} )\) denote the diagonal elements of \({K}_{VV} ( {n} )\) and \({R}_{ X} ( {n} )\), respectively. From (4), \({r}_{i} ( {n} )=\sigma _{X}^2 \left( {{n-i+1}} \right) \).

Now the cross-correlation between two deterministic sequences \({a}_{ i}, {b}_{ i}\) is given by

$$\begin{aligned} {C}_{{a,b}} =\overline{{ab}} -\overline{a} \overline{b} \end{aligned}$$
(46)

where the long-term averages \(\overline{a} , \overline{b} \) and \(\overline{ab}\) are given by

$$\begin{aligned} \overline{a} =\lim _{{N}\rightarrow \infty } \frac{1}{{N}}\sum _{{i}=1}^{N} {a}_{i},\quad \overline{b} =\lim _{{N}\rightarrow \infty } \frac{1}{{N}}\sum _{{i}=1}^{N} {b}_{i},\quad \hbox {and}\quad \overline{{ab}} =\lim _{{N}\rightarrow \infty } \frac{1}{{N}}\sum _{{i}=1}^{N} {a}_{i}{b}_{i}. \end{aligned}$$
(47)

Consider now the hint given by (46) and (47) for the sequences \({k}_{i}(n)\) and \({r}_{ i} ( {n} )\) in (45). For a fixed time n, \({k}_{i}({n})\) is the mean-square value of the \(i\)th component of \(V(n)\), while \({r}_{i} ({n})\) is the variance of the input signal at time \(n-i+1\). Loosely speaking, this implies that \({k}_{ i} ( {n} )\) and \({r}_{ i} ( {n} )\) are independent, so that the covariance \({C}_{{a,b}}\) in (46) is equal to zero. Thus, \(\overline{ab} = \overline{a}\, \overline{b} \), and therefore, for sufficiently large N,

$$\begin{aligned} \frac{1}{{N}}\sum _{{ i=1}}^{N} {{k}_{ i} ( {n} ){r}_{ i} ( {n} )} \approx \frac{1}{{N}}\sum _{{ i=1}}^{N} {{k}_{ i} ( {n} )\frac{1}{{N}}\sum _{{ i=1}}^{N} {{r}_{ i} ( {n} )} }. \end{aligned}$$
(48)

To illustrate (48), consider two extreme cases of variation in the input power. For slow variations in the input power, the individual terms \({r}_{ i} \left( {n} \right) \) in (48) are almost equal, and (48) follows immediately. For fast variations in the input power, \({k}_{ i} ( {n} )\) has only a weak algebraic dependence upon \({r}_{ i} ( {n} )\), because \({K}_{VV} ( {n} )\) is a weighted time average of \({R}_{X} \left( {j} \right) \), \(j <n\). The weak algebraic dependence of \({k}_{ i} ( {n})\) on \({r}_{ i} ( {n} )\) implies weak coupling of their time averages, which yields (48).

Finally, Eqs. (45), (48) and (8) imply that

$$\begin{aligned} \hbox {Tr}\left[ {{R}_{X} ( {n} ){K}_{VV} \left( {n} \right) } \right] \approx \varphi ({n})\hbox {Tr}\left[ {{K}_{VV} \left( {n} \right) } \right] . \end{aligned}$$
(49)

Justification of (7):

We have

$$\begin{aligned} \hbox {Tr}\left\{ {{R}_{X}^2 {(n)K}_{VV} {(n)}} \right\} =\sum _{{ i}=1}^{N} {{k}_{ i} {(n)r}_{ i}^2 {(n)}}. \end{aligned}$$
(50)

Using the same reasoning above (48), we obtain

$$\begin{aligned} \frac{1}{{N}}\sum _{{ i}=1}^{N} {{k}_{ i} {(n)r}_{ i}^2 {(n)}} \approx \left[ {\frac{1}{{N}}\sum _{{ i}=1}^{N} {{k}_{ i} {(n)}} } \right] \left[ {\frac{1}{{N}}\sum _{{ i}=1}^{N} {{r}_{ i}^2 {(n)}} } \right] . \end{aligned}$$
(51)

Then, (7) follows from (50) and (51).
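
The separation approximations (48) and (51) can be illustrated with a small numerical experiment. In the sketch below, the sequences standing in for \(k_i(n)\) and \(r_i(n)\) are hypothetical (a random positive sequence and a sinusoidal power profile generated independently of each other, not quantities produced by the paper's analysis); the averaged products are then compared with the products of the averages.

```python
# Numerical illustration (hypothetical sequences, not from the paper) of the
# separation approximations (48) and (51): when k_i and r_i are only weakly
# coupled, the average of their product is close to the product of their
# averages, and likewise with r_i^2 in place of r_i.
import numpy as np

rng = np.random.default_rng(2)
N = 64                                           # number of taps (assumed)
i = np.arange(1, N + 1)
r = 1.0 + 0.5 * np.sin(2.0 * np.pi * i / 16.0)   # stands in for r_i(n) = sigma_X^2(n-i+1)
k = rng.uniform(0.5, 1.5, N)                     # stands in for k_i(n), generated independently

print(np.mean(k * r),      np.mean(k) * np.mean(r))        # the two sides of (48)
print(np.mean(k * r ** 2), np.mean(k) * np.mean(r ** 2))   # the two sides of (51)
# The agreement is within a few percent here and improves as N grows.
```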

Appendix 2

Proof of (19).

For notational convenience, the time index n is dropped from \(X(n)\) and \(V(n)\) in this appendix. We have

$$\begin{aligned} {E}\left[ {\left( {{X}^{T}{V}} \right) ^{6}{X}^{T}{X}} \right] =\sum _{{ i}=1}^{N} {{F}_{ i} } \end{aligned}$$
(52)

where

$$\begin{aligned} {F}_{ i} ={ E}\left[ {\left( {{X}^{T}{V}} \right) ^{6}{x}_{ i} ^{2}} \right] \end{aligned}$$
(53)

with \({x}_{i}\) being the \(i\)th element of the vector X. From Isserlis’ theorem [19], for zero-mean jointly Gaussian random variables \({a}_{1},{a}_{2},\ldots , {a}_{7}, {a}_{8}\), we have

$$\begin{aligned} {E}\left[ {{a}_1 {a}_2 {a}_3 {a}_4 {a}_5 {a}_6 {a}_7 {a}_8 } \right] =\sum {\prod {{E[a}_{ i} {a}_{j} ]} } \end{aligned}$$
(54)

over all distinct ways of partitioning \({a}_{1},{a}_{2},\ldots , {a}_{7}, {a}_{8}\) into pairs. In our case, \({a}_{1}= {a}_{2}= \cdots = {a}_{6} = {a} = {X}^{T}{V}\) and \({a}_{7}= {a}_{8}= {x}_{i}\). The total number of terms in the summation in (54) is 105: 15 terms that include E[\({a}_{7}{a}_{8}\)] and 90 terms that do not. Thus,

$$\begin{aligned} {F}_{ i} ={ 15 T}_1 \left( {i} \right) +90{T}_2 \left( {i} \right) \end{aligned}$$
(55)

where

$$\begin{aligned}&\!\!\!\!{T}_1 \left( {i} \right) ={ E}[ {{a}^{2}} ]{ E}[ {{a}^{2}} ]{ E}[ {{a}^{2}} ]{ E}[ {{a}_7 {a}_8} ]\end{aligned}$$
(56)
$$\begin{aligned}&\!\!\!\! {T}_2 \left( {i} \right) ={ E}[ {{a}^{2}} ]{ E}[ {{a}^{2}} ]{ E}[ {{aa}_7 } ]{ E}\left[ {{aa}_7 } \right] . \end{aligned}$$
(57)

From (56),

$$\begin{aligned} {T}_1 \left( {i} \right) =\{{E}[ {( {{X}^{T}{V}} )^{2}} ]\}^{3}{E}[ {{x}_{ i} ^{2}} ]=\{\hbox {Tr}\left( {{K}_{VV} {R}} \right) \}^{3}{E}[ {{x}_{ i} ^{2}} ]. \end{aligned}$$
(58)

Summing (58) over i, we get

$$\begin{aligned} \sum _{{i}=1}^{N} {{T}_1 {(i)}} =\{\hbox {Tr}\left( {{K}_{VV} {R}} \right) \}^{3}\hbox {Tr}\left( {R} \right) \approx {N}\varphi ^{4}({n}){y}^{3}( {n} ) \end{aligned}$$
(59)

where the approximation follows from (6) and the fact that (5) implies that Tr\((R)\) \(\approx {N}\varphi ({n})\). From Isserlis’ theorem [19],

$$\begin{aligned} {E}\left[ {{aaa}_7 {a}_7 } \right] ={ E}\left[ {{aa}} \right] { E}\left[ {{a}_7 {a}_7 } \right] +{ 2 E}\left[ {{aa}_7 }\right] { E}\left[ {{aa}_7 } \right] . \end{aligned}$$
(60)

Summing over \(i\), the LHS of this equation becomes the LHS of (21), while the first term on the RHS sums to \(E[aa]\hbox {Tr}(R)\). Then, (21) and (60) imply that

$$\begin{aligned} 2\psi ({n}){y}( {n} ) + {N}\varphi ^{2}({n}){y}( {n} ) = {N}\varphi ^{2}({n}){y}( {n} ) + 2\sum _{{i}=1}^{N} {E}[{aa}_7]{E}[{aa}_7]. \end{aligned}$$
(61)

This implies that

$$\begin{aligned} \sum _{{i}={1}}^{N} {{E[aa}_7 {]E[aa}_7 {]}} =\psi ({n}){y(n)}. \end{aligned}$$
(62)

From (57) and (62),

$$\begin{aligned} \sum _{{i}= 1}^{N} {{T}_2 {(i)}}&= \psi ({n}){y}( {n} )\{{E}[{({{X}^{T}{V}} )^{2}} ]\}^{2}\!=\psi ({n}){y}( {n} )\{\hbox {Tr}\left( {{K}_{VV}{R}} \right) \}^{2} \nonumber \\&= \psi ({n})\varphi ^{2}({n}){y}^{3}\left( n\right) . \end{aligned}$$
(63)

Equations (52), (55), (59) and (63) imply (19).
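
As an aside, the counting argument above (55) and the resulting moment decomposition can be checked numerically. The sketch below is illustrative only (the covariance values are arbitrary assumptions): it first enumerates all pairings of eight factors to confirm the 105 = 15 + 90 split, and then verifies by Monte Carlo that \(E[a^{6}x_i^{2}] = 15\{E[a^{2}]\}^{3}E[x_i^{2}] + 90\{E[a^{2}]\}^{2}\{E[ax_i]\}^{2}\) for a zero-mean jointly Gaussian pair \((a, x_i)\), which is exactly \(F_i = 15T_1(i) + 90T_2(i)\).

```python
# Illustrative check of the pairing count used above (55) and of the moment
# identity F_i = 15*T1(i) + 90*T2(i) for a zero-mean jointly Gaussian pair (a, x).
# The covariance values below are arbitrary assumptions.
import numpy as np

def pairings(items):
    # enumerate all ways to partition the index tuple `items` into pairs
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for j in range(len(rest)):
        for tail in pairings(rest[:j] + rest[j + 1:]):
            yield [(first, rest[j])] + tail

all_p = list(pairings(tuple(range(8))))            # indices of a1, ..., a6, a7, a8
with_78 = [p for p in all_p if (6, 7) in p]        # pairings containing the pair {a7, a8}
print(len(all_p), len(with_78), len(all_p) - len(with_78))   # prints: 105 15 90

rng = np.random.default_rng(3)
cov = np.array([[2.0, 0.7],                        # Var(a),    Cov(a, x)  (assumed)
                [0.7, 1.0]])                       # Cov(a, x), Var(x)     (assumed)
a, x = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000).T
lhs = np.mean(a ** 6 * x ** 2)                     # Monte Carlo estimate of E[a^6 x^2]
rhs = 15 * cov[0, 0] ** 3 * cov[1, 1] + 90 * cov[0, 0] ** 2 * cov[0, 1] ** 2
print(lhs, rhs)                                    # the two values agree up to Monte Carlo error
```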


About this article

Cite this article

Eweda, E., Bershad, N.J. & Bermudez, J.C.M. Stochastic analysis of the least mean fourth algorithm for non-stationary white Gaussian inputs. SIViP 8, 133–142 (2014). https://doi.org/10.1007/s11760-013-0519-1
