Abstract
The Internet of Things (IoT) is helping to create a smart world by connecting sensors in a seamless fashion. With the forthcoming fifth-generation (5G) wireless communication systems, the IoT is becoming increasingly important, since 5G will be a key enabler for it. Sensor networks for the IoT are used in increasingly diverse areas, e.g., situational and location awareness, leading to a proliferation of sensors at the edge of the physical world. Several variable step-size strategies exist in the literature to improve the performance of the diffusion-based Least Mean Square (LMS) algorithm for estimation in wireless sensor networks. A major drawback, however, is the complexity of the theoretical analysis of the resulting algorithms, which forces researchers to adopt several assumptions in order to find closed-form analytical solutions. This work presents a unified analytical framework for distributed variable step-size LMS algorithms. The analysis is then extended to the case of diffusion-based wireless sensor networks estimating a compressible system, and a steady-state analysis is carried out. The approach is applied to several variable step-size strategies for compressible systems. Theoretical and simulation results are presented and compared with existing algorithms to show the superiority of the proposed work.
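As a minimal illustration of the setting described above, the sketch below implements an adapt-then-combine (ATC) diffusion LMS update in which every node adapts its step size with a generic energy-based rule. This is only an assumed, illustrative member of the algorithm family the paper analyzes: the combination matrix `A`, the VSS constants `alpha` and `gamma`, and the step-size bounds are placeholders, not the paper's exact choices.

```python
import numpy as np

def atc_diffusion_vss_lms(U, d, A, M, alpha=0.97, gamma=1e-3,
                          mu_min=1e-4, mu_max=0.1, n_iter=1000):
    """ATC diffusion LMS with a generic variable step-size (VSS) rule.

    U[k][i] : length-M regressor of node k at time i
    d[k][i] : desired response of node k at time i
    A       : N x N combination matrix whose columns sum to one
    The update mu_k(i+1) = alpha*mu_k(i) + gamma*e_k(i)^2 is one common
    VSS choice; the paper treats such rules within a unified analysis.
    """
    N = A.shape[0]
    w = np.zeros((N, M))                     # local estimates w_k(i)
    mu = np.full(N, mu_max)                  # initial step sizes mu_k(0)
    for i in range(n_iter):
        psi = np.zeros((N, M))
        for k in range(N):
            u = np.asarray(U[k][i])
            e = d[k][i] - u @ w[k]           # local estimation error
            psi[k] = w[k] + mu[k] * e * u    # adaptation step
            mu[k] = np.clip(alpha * mu[k] + gamma * e**2, mu_min, mu_max)
        for k in range(N):                   # combination step
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w
```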
References
Atzori, L., Iera, A., & Morabito, G. (2010). The Internet of Things: A survey. Computer Networks, 54(15), 2787–2805.
Stankovic, J. (2014). Research directions for the Internet of Things. IEEE Internet Things Journal, 1(1), 3–9.
Ejaz, W., & Ibnkahla, M. (2015). Machine-to-machine communications in cognitive cellular systems. In Proceedings of 15th IEEE international conference on ubiquitous wireless broadband (ICUWB), Montreal, Canada (pp. 1–5).
Palattella, M. R., Dohler, M., Grieco, A., Rizzo, G., Torsner, J., Engel, T., et al. (2016). Internet of Things in the 5G era: Enablers, architecture, and business models. IEEE Journal on Selected Areas in Communications, 34(3), 510–527.
ul Hasan, N., Ejaz, W., Baig, I., Zghaibeh, M., & Anpalagan, A. (2016). QoS-aware channel assignment for IoT-enabled smart building in 5G systems. In Proceedings of 8th IEEE international conference on ubiquitous and future networks (ICUFN), Vienna, Austria (pp. 924–928).
Ejaz, W., Naeem, M., Basharat, M., Anpalagan, A., & Kandeepan, S. (2016). Efficient wireless power transfer in software-defined wireless sensor networks. IEEE Sensors Journal, 16(20), 7409–7420.
Culler, D., Estrin, D., & Srivastava, M. (2004). Overview of sensor networks. IEEE Computer, 37(8), 41–49.
Olfati-Saber, R., Fax, J. A., & Murray, R. M. (2007). Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1), 215–233.
Lopes, C. G., & Sayed, A. H. (2007). Incremental adaptive strategies over distributed networks. IEEE Transactions on Signal Processing, 55(8), 4064–4077.
Lopes, C. G., & Sayed, A. H. (2008). Diffusion least-mean squares over adaptive networks: Formulation and performance analysis. IEEE Transactions on Signal Processing, 56(7), 3122–3136.
Schizas, I. D., Mateos, G., & Giannakis, G. B. (2009). Distributed LMS for consensus-based in-network adaptive processing. IEEE Transactions on Signal Processing, 57(6), 2365–2382.
Cattivelli, F., & Sayed, A. H. (2010). Diffusion LMS strategies for distributed estimation. IEEE Transactions on Signal Processing, 58(3), 1035–1048.
Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2010). Variable step-size least mean square algorithms over adaptive networks. In Proceedings of 10th international conference on information sciences signal processing and their applications (ISSPA), Kuala Lumpur, Malaysia (pp. 381–384).
Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2013). A variable step-size strategy for distributed estimation over adaptive networks. EURASIP Journal on Advances in Signal Processing, 2013(2013), 135.
Almohammedi, A., & Deriche, M. (2015). Variable step-size transform domain ILMS and DLMS algorithms with system identification over adaptive networks. In Proceedings of IEEE Jordan conference on applied electrical engineering and computing technologies (AEECT), Amman, Jordan (pp. 1–6).
Bin Saeed, M. O., Zerguine, A., Sohail, M. S., Rehman, S., Ejaz, W., & Anpalagan, A. (2016). A variable step-size strategy for distributed estimation of compressible systems in wireless sensor networks. In Proceedings of IEEE CAMAD, Toronto, Canada (pp. 1–5).
Bin Saeed, M. O., & Zerguine, A. (2011). A new variable step-size strategy for adaptive networks. In Proceedings of the forty fifth asilomar conference on signals, systems and computers (ASILOMAR), Pacific Grove, CA (pp. 312–315).
Bin Saeed, M. O., Zerguine, A., & Zummo, S. A. (2013). A noise-constrained algorithm for estimation over distributed networks. International Journal of Adaptive Control and Signal Processing, 27(10), 827–845.
Ghazanfari-Rad, S., & Labeau, F. (2014). Optimal variable step-size diffusion LMS algorithms. In Proceedings of IEEE workshop on statistical signal processing (SSP), Gold Coast, VIC (pp. 464–467).
Jung, S. M., Seo, J.-H., & Park, P. G. (2015). A variable step-size diffusion normalized least-mean-square algorithm with a combination method based on mean-square deviation. Circuits, Systems, and Signal Processing, 34(10), 3291–3304.
Lee, H. S., Kim, S. E., Lee, J. W., & Song, W. J. (2015). A variable step-size diffusion LMS algorithm for distributed estimation. IEEE Transactions on Signal Processing, 63(7), 1808–1820.
Sayed, A. H. (2014). Adaptive networks. Proceedings of the IEEE, 102(4), 460–497.
Sayed, A. H. (2003). Fundamentals of adaptive filtering. New York: Wiley.
Nagumo, J., & Noda, A. (1967). A learning method for system identification. IEEE Transactions on Automatic Control, 12(3), 282–287.
Kwong, R. H., & Johnston, E. W. (1992). A variable step-size LMS algorithm. IEEE Transactions on Signal Processing, 40(7), 1633–1642.
Aboulnasr, T., & Mayyas, K. (1997). A robust variable step size LMS-type algorithm: Analysis and simulations. IEEE Transactions on Signal Processing, 45(3), 631–639.
Costa, M. H., & Bermudez, J. C. M. (2008). A noise resilient variable step-size LMS algorithm. Signal Processing, 88(3), 733–748.
Wei, Y., Gelfand, S. B., & Krogmeier, J. V. (2001). Noise-constrained least mean squares algorithm. IEEE Transactions on Signal Processing, 49(9), 1961–1970.
Sulyman, A. I., & Zerguine, A. (2003). Convergence and steady-state analysis of a variable step-size NLMS algorithm. Signal Processing, 83(6), 1255–1273.
Bin Saeed, M. O., & Zerguine, A. (2013). A variable step-size strategy for sparse system identification. In Proceedings of 10th international multi-conference on systems, signals & devices (SSD), Hammamet (pp. 1–4).
Al-Naffouri, T. Y., & Moinuddin, M. (2010). Exact performance analysis of the \(\epsilon \)-NLMS algorithm for colored circular Gaussian inputs. IEEE Transactions on Signal Processing, 58(10), 5080–5090.
Al-Naffouri, T. Y., Moinuddin, M., & Sohail, M. S. (2011). Mean weight behavior of the NLMS algorithm for correlated Gaussian inputs. IEEE Signal Processing Letters, 18(1), 7–10.
Bin Saeed, M. O. (2017). LMS-based variable step-size algorithms: A unified analysis approach. Arabian Journal for Science and Engineering, 42(7), 2809–2816.
Koning, R. H., Neudecker, H., & Wansbeek, T. (1990). Block Kronecker products and the vecb operator. Economics Department, Institute of Economics Research, University of Groningen, Groningen, The Netherlands, Research Memo No. 351.
Appendix A
Here, we present the detailed mean-square analysis. Applying the expectation operator to the weighting matrix of (14) gives
For ease of notation, we denote \({\mathbb {E}}\left[ \hat{\varvec{\Sigma }} \right] = \varvec{\Sigma }'\) for the remaining analysis.
Next, using the Gaussian transformed variables as given in Sect. 3.2, (14) and (24) are rewritten, respectively, as
and
where \(\bar{\mathbf{Y}}(i) = \bar{\mathbf{G}}\, \mathbf{D}(i)\, \bar{\mathbf{U}}^T(i)\) and \({\mathbb {E}}\left[ \bar{\mathbf{U}}^T(i) \bar{\mathbf{U}}(i) \right] = \varvec{\Lambda }\).
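For concreteness, the block-structured network quantities appearing in these expressions can be assembled as in the short sketch below. This is an illustrative helper, not code from the paper: it assumes independent, zero-mean Gaussian regressors, so that each node's eigenvalue matrix \(\varvec{\Lambda }_k\) comes from the eigendecomposition of its regressor covariance \(\mathbf{R}_k\).

```python
import numpy as np

def step_size_matrix(mu, M):
    """D(i) = diag{ mu_1(i) I_M, ..., mu_N(i) I_M }: the block-diagonal
    step-size matrix used throughout the analysis (mu has length N)."""
    return np.kron(np.diag(mu), np.eye(M))

def network_eigenvalue_matrix(R_list):
    """Lambda = E[Ubar^T(i) Ubar(i)]: block-diagonal collection of the
    per-node eigenvalue matrices Lambda_k of the regressor covariances."""
    M = R_list[0].shape[0]
    N = len(R_list)
    Lam = np.zeros((N * M, N * M))
    for k, R_k in enumerate(R_list):
        Lam[k*M:(k+1)*M, k*M:(k+1)*M] = np.diag(np.linalg.eigvalsh(R_k))
    return Lam
```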
The two terms that need to be solved are \({\mathbb {E}}\big [ \mathbf{v}^T(i) \bar{\mathbf{Y}}^T(i) \bar{\varvec{\Sigma }} \bar{\mathbf{Y}}(i) \mathbf{v}(i) \big ]\) and \({\mathbb {E}}\left[ \bar{\mathbf{U}}^T(i) \bar{\mathbf{Y}}(i) \bar{\varvec{\Sigma }} \bar{\mathbf{Y}}(i) \bar{\mathbf{U}}(i) \right] \). Using the \(\text{bvec}\{\cdot \}\) operator and the block Kronecker product, denoted by \(\odot \) [34], the two terms are simplified as
and
where \(\bar{\varvec{\sigma }} = \text{bvec}\left\{ \bar{\varvec{\Sigma }} \right\} \), \(\mathbf{b}(i) = \text{bvec}\left\{ \mathbf{R}_{\mathbf{v}}\, {\mathbb {E}}\left[ \mathbf{D}^2(i) \right] \varvec{\Lambda } \right\} \), \(\mathbf{R}_{\mathbf{v}} = \varvec{\Lambda }_{\mathbf{v}} \odot \mathbf{I}_M\), \(\varvec{\Lambda }_{\mathbf{v}}\) is the diagonal noise variance matrix for the network, and \(\mathbf{A} = \text{diag}\left\{ \mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_N \right\} \) [10], with each matrix \(\mathbf{A}_k\) defined as
where \(\varvec{\Lambda }_k\) is the diagonal eigenvalue matrix and \(\lambda _k\) is the corresponding eigenvalue vector for node k. Applying the \(\text{bvec}\{\cdot \}\) operator to (26) and simplifying gives
where \({\mathbf{F}}(i)\) is given by (18). Thus, (14) is rewritten as
which characterizes the transient behavior of the network. Although not explicitly visible from (31), (18) clearly shows, through the presence of the diagonal step-size matrix \(\mathbf{D}(i)\), how the VSS strategy affects the performance of the algorithm.
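Because the displayed equations rely on these block operators, a small numerical sketch of them may help. The conventions below (stacking the vec of each \(M \times M\) block, column-block by column-block, and a Tracy–Singh-style arrangement of the block Kronecker product) are assumptions consistent with [34]; the paper's own definitions should be taken as authoritative.

```python
import numpy as np

def bvec(S, M):
    """bvec{S}: stack vec of each M x M block of the NM x NM matrix S,
    running down the block rows within each block column."""
    N = S.shape[0] // M
    vecs = []
    for j in range(N):                    # block columns
        for i in range(N):                # block rows
            blk = S[i*M:(i+1)*M, j*M:(j+1)*M]
            vecs.append(blk.reshape(-1, order="F"))   # vec of the block
    return np.concatenate(vecs)

def block_kron(A, B, M):
    """Block Kronecker product (Tracy-Singh arrangement) of two NM x NM
    matrices partitioned into M x M blocks; the result is N^2 M^2 square."""
    N = A.shape[0] // M
    out = np.zeros((N*N*M*M, N*N*M*M))
    for i in range(N):
        for j in range(N):
            Aij = A[i*M:(i+1)*M, j*M:(j+1)*M]
            for k in range(N):
                for l in range(N):
                    Bkl = B[k*M:(k+1)*M, l*M:(l+1)*M]
                    r, c = (i*N + k)*M*M, (j*N + l)*M*M
                    out[r:r+M*M, c:c+M*M] = np.kron(Aij, Bkl)
    return out
```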
Now, using (31) and (18), the analysis iterates as
where \({\mathbb {E}}\left[ \mathbf{D}(0) \right] = \text{diag}\left\{ \mu _{1}(0) \mathbf{I}_M, \ldots, \mu _{N}(0) \mathbf{I}_M \right\} \), as these are the initial step-size values. The first iterative update is given by
where \(\mathbf{b}(0) = \text{bvec}\left\{ \mathbf{R}_{\mathbf{v}}\, {\mathbb {E}}\left[ \mathbf{D}^2(0) \right] \varvec{\Lambda } \right\} \) and \({\mathbb {E}}\left[ \mathbf{D}(1) \right] \) is the first step-size update. The matrix \(\mathbf{F}(i)\) is updated through (18) using the ith update of the step-size matrix \({\mathbb {E}}\left[ \mathbf{D}(i) \right] \), which in turn is updated according to the VSS strategy being applied to the algorithm. The second iterative update is given by
Continuing, the third iterative update is given by
where the weighting matrix \({{\mathcal {A}}}(2) = \mathbf{F}(0)\mathbf{F}(1)\). Similarly, the fourth iterative update is given by
where the weighting matrix \({{\mathcal {A}}}(3) = {{\mathcal {A}}}(2) \mathbf{F}(2)\). Now, from the third and fourth iterative updates, we generalize the recursion for the ith update as
where \({{\mathcal {A}}}(i-1) = {{\mathcal {A}}}(i-2) \mathbf{F}(i-2)\). The recursion for the \((i+1)\)th update is given by
where \({{\mathcal {A}}}(i) = {{\mathcal {A}}}(i-1) \mathbf{F}(i-1)\). Subtracting (32) from (33) and simplifying gives the overall recursive update equation
Simplifying (34) and rearranging the terms gives the final recursive update equation (17), where
The final set of iterative equations for the mean-square learning curve are given by (17), (18), (19) and (20).
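To make the iterated recursion concrete, the sketch below evaluates a learning curve numerically by building up the weighting matrix \({\mathcal {A}}(i) = {\mathcal {A}}(i-1)\mathbf{F}(i-1)\) and accumulating the noise terms \(\mathbf{b}(j)\) through the remaining \(\mathbf{F}\) products. The callables `F_of(i)` and `b_of(i)` are placeholders for (18) and for \(\mathbf{b}(i)\); their closed forms depend on the particular VSS strategy and are not reproduced here.

```python
import numpy as np

def theoretical_learning_curve(F_of, b_of, q0, sigma, n_iter):
    """Iterate E||w~(i+1)||^2_sigma = E||w~(i)||^2_{F(i) sigma} + b(i)^T sigma.

    q0    : bvec of the initial weight-error covariance, so that
            E||w~(0)||^2_{Sigma} = q0^T bvec{Sigma}
    sigma : weighting vector selecting the performance measure (e.g., MSD)
    """
    L = sigma.size
    A = np.eye(L)                    # weighting matrix, A(0) = I
    acc = np.zeros(L)                # accumulated, F-propagated noise terms
    curve = []
    for i in range(n_iter):
        curve.append(q0 @ (A @ sigma) + acc @ sigma)   # value at iteration i
        Fi = F_of(i)                 # F(i), built from (18) via E[D(i)]
        bi = b_of(i)                 # b(i) = bvec{ R_v E[D^2(i)] Lambda }
        acc = Fi.T @ acc + bi        # fold the new noise term in
        A = A @ Fi                   # A(i+1) = A(i) F(i)
    return np.array(curve)
```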