Abstract
As a deep generative model, the variational autoencoder (VAE) is widely applied to problems of insufficient samples and imbalanced labels. In a VAE, the distribution of the latent variables determines the quality of the generated samples. To obtain discriminative latent variables and generated samples, this study proposes a Fisher variational autoencoder (FVAE) based on the Fisher criterion. The FVAE introduces the Fisher criterion into the VAE by adding a Fisher regularization term to the loss function, which maximizes the between-class distance and minimizes the within-class distance of the latent variables. Unlike the unsupervised VAE, the FVAE requires class labels to compute the Fisher regularization loss, so the learned latent variables and generated samples carry sufficient category information for classification tasks. Experiments on benchmark datasets show that the latent variables learned by the FVAE are more discriminative, and that its generated samples improve the performance of various classifiers more effectively than those of the VAE, β-variational autoencoder (β-VAE), conditional variational autoencoder (CVAE), denoising variational autoencoder (DVAE) and information maximizing variational autoencoder (IM-VAE).
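To make the idea concrete, below is a minimal sketch (not the published implementation) of a Fisher-style regularizer added to a standard VAE loss in PyTorch. The function names, the use of the latent means mu as the penalized representation, the ratio form of the penalty, and the weight lambda_fisher are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def fisher_regularizer(mu, labels, eps=1e-8):
    """Fisher-criterion penalty on latent means (illustrative sketch).

    Encourages small within-class scatter and large between-class scatter.
    mu: (batch, latent_dim) latent means; labels: (batch,) integer class ids.
    """
    overall_mean = mu.mean(dim=0)
    within, between = 0.0, 0.0
    for c in labels.unique():
        mu_c = mu[labels == c]                     # latent means of class c
        class_mean = mu_c.mean(dim=0)
        within = within + ((mu_c - class_mean) ** 2).sum()
        between = between + mu_c.size(0) * ((class_mean - overall_mean) ** 2).sum()
    # Minimizing this ratio shrinks within-class distance and
    # enlarges between-class distance of the latent variables.
    return within / (between + eps)

def fvae_loss(recon_x, x, mu, logvar, labels, lambda_fisher=1.0):
    """Standard VAE reconstruction + KL terms plus the assumed Fisher term."""
    recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kld + lambda_fisher * fisher_regularizer(mu, labels)
```

Because the regularizer needs the class labels of each mini-batch, training is supervised, in line with the paper's description; the actual form and weighting of the term should be taken from the published method.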










Data availability statement
The data used to support the findings of this study are available from the corresponding author upon request.
Funding
The research leading to these results has received funding from the National Natural Science Foundation of China (61876189, 61273275, 61806219 and 61703426) and the Natural Science Basic Research Plan in Shaanxi Province (No. 2021JM-226).
Author information
Contributions
Conceptualization, J.L. and X.W.; Methodology, J.L.; Investigation, J.L. and Q.X.; Writing—original draft preparation, J.L.; Writing—review and editing, R.L. and Y.S. All authors have read and agreed to the published version of the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflicts of interest.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Lai, J., Wang, X., Xiang, Q. et al. FVAE: a regularized variational autoencoder using the Fisher criterion. Appl Intell 52, 16869–16885 (2022). https://doi.org/10.1007/s10489-022-03422-6