Regularization Based Incremental Learning in TCNN for Robust Speech Enhancement Targeting Effective Human Machine Interaction

  • Conference paper
Speech and Computer (SPECOM 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14338)

Abstract

In general, the performance of deep-learning-based speech enhancement degrades in unseen noisy environments, regardless of the signal-to-noise ratio (SNR) condition. Although model adaptation techniques can improve performance, they lead to catastrophic forgetting of previously learned knowledge. Under such conditions, incremental learning (or life-long learning) has been reported to help a model gradually learn new tasks while retaining previously acquired knowledge. In this work, we propose a regularization-based incremental learning strategy for adapting a temporal convolutional neural network (TCNN) based speech enhancement framework, which we name RIL-TCN. We investigate the effect of incorporating various weight-regularization strategies, such as curvature and path regularization, into the time-domain scale-invariant SNR (SI-SNR) loss function of the TCNN enhancement framework. We evaluate and compare our proposed model against a state-of-the-art frequency-domain incremental learning model using objective measures such as SI-SNR and PESQ (Perceptual Evaluation of Speech Quality), and show that it outperforms a competitive TCNN baseline on unseen noises from the standard CHiME-3 corpus.
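
The abstract combines two ingredients that are easy to make concrete: a time-domain SI-SNR training loss and a quadratic weight-regularization penalty that ties adapted weights to those learned on earlier noise conditions. The PyTorch sketch below is illustrative only, not the authors' released code; si_snr_loss, regularization_penalty, old_params, importance, and lam are hypothetical names, and the penalty stands in generically for curvature- or path-based importance weighting.

# Minimal sketch (assumptions noted above): SI-SNR loss plus a generic
# incremental-learning weight penalty of the EWC / synaptic-intelligence kind.
import torch


def si_snr_loss(estimate: torch.Tensor, target: torch.Tensor,
                eps: float = 1e-8) -> torch.Tensor:
    """Negative scale-invariant SNR, averaged over the batch.

    estimate, target: (batch, samples) time-domain waveforms.
    """
    # Zero-mean both signals so the measure is invariant to DC offset.
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    target = target - target.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target to get the scaled target component.
    dot = torch.sum(estimate * target, dim=-1, keepdim=True)
    energy = torch.sum(target ** 2, dim=-1, keepdim=True) + eps
    s_target = dot / energy * target
    e_noise = estimate - s_target
    si_snr = 10 * torch.log10(
        torch.sum(s_target ** 2, dim=-1)
        / (torch.sum(e_noise ** 2, dim=-1) + eps) + eps)
    return -si_snr.mean()  # minimize negative SI-SNR


def regularization_penalty(model: torch.nn.Module,
                           old_params: dict,
                           importance: dict,
                           lam: float = 100.0) -> torch.Tensor:
    """Quadratic penalty tying weights to their previous-task values.

    importance[name] holds a per-parameter importance estimate (e.g. a
    diagonal curvature/Fisher term or a path-integral term); old_params[name]
    holds the weights learned on earlier noise conditions.
    """
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in importance:
            penalty = penalty + (importance[name]
                                 * (param - old_params[name]) ** 2).sum()
    return lam * penalty


# Adaptation step on a new noise condition (illustrative):
# total_loss = si_snr_loss(enhanced, clean) + regularization_penalty(
#     model, old_params, importance)

During adaptation to a new noise type, the total objective would then be the SI-SNR loss on the new data plus this penalty, so the network can fit the new condition without drifting far from the weights that mattered for earlier ones.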

Author information

Corresponding author

Correspondence to Kamini Sabu.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Sabu, K., Sharma, M., Tiwari, N., Shaik, M. (2023). Regularization Based Incremental Learning in TCNN for Robust Speech Enhancement Targeting Effective Human Machine Interaction. In: Karpov, A., Samudravijaya, K., Deepak, K.T., Hegde, R.M., Agrawal, S.S., Prasanna, S.R.M. (eds) Speech and Computer. SPECOM 2023. Lecture Notes in Computer Science, vol. 14338. Springer, Cham. https://doi.org/10.1007/978-3-031-48309-7_17

  • DOI: https://doi.org/10.1007/978-3-031-48309-7_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-48308-0

  • Online ISBN: 978-3-031-48309-7

  • eBook Packages: Computer Science, Computer Science (R0)
