Abstract
The transition towards Industry 4.0 relies heavily on manufacturing digitalisation. Among the pool of relevant technologies, the digital twin plays a significant role as a powerful tool that is expected to provide digital access to detailed real-time monitoring of physical processes and to enable significant optimisation through utilisation of the big data acquired from them. Over the past years, a significant number of works have produced conceptual frameworks of digital twins and discussed their requirements and benefits, whereas application examples and proofs of concept remain comparatively scarce in the research literature. This paper presents a generative model based on generative adversarial networks (GAN) for machining vibration data, discusses its performance and analyses its drawbacks. The proposed model includes process parameter inputs used to condition the features of the generated signals. The control over the generator, together with a neural network architecture utilising techniques from style-transfer research, provides the means to analyse the signal building blocks learned by the model and to explore their relationships. The quality of the learned process representation is demonstrated using a dataset obtained from a machining time-domain simulation. These novel results constitute a critical component of a machining digital twin and open new research directions towards the development of comprehensive manufacturing digital twins.
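As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a minimal conditional GAN generator for 1-D vibration signals in PyTorch, in which process-parameter inputs modulate intermediate feature maps via adaptive instance normalisation (AdaIN), in the spirit of the style-transfer techniques referred to above. All layer choices, sizes and names here are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch only: a minimal conditional GAN generator for 1-D
# vibration signals. Hypothetical process parameters (e.g. spindle speed,
# feed, depth of cut) condition the generated signal through AdaIN-style
# modulation of intermediate feature maps. Not the paper's actual model.
import torch
import torch.nn as nn


class AdaIN1d(nn.Module):
    """Scale and shift instance-normalised 1-D features using a learned
    affine map of the conditioning vector (process parameters)."""
    def __init__(self, channels: int, cond_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm1d(channels, affine=False)
        self.affine = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):
        gamma, beta = self.affine(cond).chunk(2, dim=1)
        return gamma.unsqueeze(-1) * self.norm(x) + beta.unsqueeze(-1)


class Generator(nn.Module):
    """Maps a latent vector to a vibration signal, conditioned on process parameters."""
    def __init__(self, latent_dim=64, cond_dim=3, channels=32, signal_len=1024):
        super().__init__()
        self.channels, self.base_len = channels, signal_len // 4
        self.fc = nn.Linear(latent_dim, channels * self.base_len)
        self.up1 = nn.ConvTranspose1d(channels, channels, 4, stride=2, padding=1)
        self.ada1 = AdaIN1d(channels, cond_dim)
        self.up2 = nn.ConvTranspose1d(channels, 1, 4, stride=2, padding=1)

    def forward(self, z, cond):
        x = self.fc(z).view(-1, self.channels, self.base_len)
        x = torch.relu(self.ada1(self.up1(x), cond))
        return torch.tanh(self.up2(x))


# Usage: generate 8 signals from random latents, each conditioned on 3 process parameters.
g = Generator()
signals = g(torch.randn(8, 64), torch.randn(8, 3))
print(signals.shape)  # torch.Size([8, 1, 1024])
```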
Acknowledgments
Professor Ashutosh Tiwari acknowledges the support of the Royal Academy of Engineering under the Research Chairs and Senior Research Fellowships scheme (RCSRF1718\5\41).
Copyright information
© 2020 The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Zotov, E., Tiwari, A., Kadirkamanathan, V. (2020). Towards a Digital Twin with Generative Adversarial Network Modelling of Machining Vibration. In: Iliadis, L., Angelov, P., Jayne, C., Pimenidis, E. (eds) Proceedings of the 21st EANN (Engineering Applications of Neural Networks) 2020 Conference. EANN 2020. Proceedings of the International Neural Networks Society, vol 2. Springer, Cham. https://doi.org/10.1007/978-3-030-48791-1_14
DOI: https://doi.org/10.1007/978-3-030-48791-1_14
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48790-4
Online ISBN: 978-3-030-48791-1
eBook Packages: Computer Science (R0)