On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps
Abstract
1. Introduction
RQ: What are the optimal window length and window shift for segmenting continuous EEG signals such that the latent space formed in a person-specific Convolutional Autoencoder achieves maximum reconstruction capacity and maximum utility in classification tasks?
2. Related Work
3. Materials and Methods
IF a sliding window technique is used to segment multichannel EEG signals into windows, AND topographic head-maps formed from each window are used to train a Convolutional Autoencoder (ConvAE) for reducing their dimensionality, THEN there exists an optimal combination of window length (WL) AND window shift (WS) that leads to the formation of a minimal latent space (LS) for the ConvAE that maximizes the mean reconstruction capacity of the input topographic maps AND has maximal mean utility in a classification task.
3.1. DEAP Dataset
3.2. Data Pre-Processing
3.3. Convolutional Autoencoders (ConvAE)
3.4. Reconstruction Evaluation Metrics
3.5. Classification
3.6. ConvAE and DNN Hyperparameter Tuning
- Three convolutional layers in both the encoder and the decoder led to the best reconstruction capacity of the ConvAE; no significant improvement was observed beyond three layers, so the network was not expanded further. The learning-rate (LR) scheduler selected 3 × 10 for the Adam optimizer, and the optimal batch size was 32. For the encoder, performance was optimal when the number of kernels was doubled at each convolutional layer while the image dimensions were halved; symmetrically, in the decoder the number of kernels was halved while the dimensions were doubled until the output layer, where the image size equaled that of the original input.
- For the DNN, five dense layers gave the optimal performance. The LR scheduler selected 3 × 10 for the Adam optimizer, and the optimal batch size was 32. The Kullback–Leibler (KL) divergence outperformed other loss metrics, including categorical cross-entropy, for multiclass classification with a softmax activation in the output layer and a one-hot encoded target variable (video ID). KL divergence is given by D_KL(P ∥ Q) = Σ_x P(x) log(P(x)/Q(x)), where P is the target distribution and Q the predicted distribution.
- As mentioned previously, more data were generated by overlapping EEG signal windows, using window shifts of 125 ms, 250 ms, and 500 ms.
- L2 regularization was added to the convolutional layers of the Autoencoder, with the regularization factor tuned to 0.01.
- Dropout regularization was introduced in the DNN with a rate of 0.1.
- Early stopping monitored the training and validation losses in both the ConvAE and the DNN; training was stopped when no significant decrease in the validation loss was observed over 10 epochs.
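The KL-divergence loss described above can be illustrated numerically. A minimal NumPy sketch (an illustration, not the authors' implementation): for a one-hot target such as the video ID, the KL divergence reduces to the negative log-probability the softmax assigns to the true class.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) = sum_i p_i * log(p_i / q_i); eps guards against log(0)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# One-hot target (e.g., the third of four video IDs) and a softmax-like prediction.
p = np.array([0.0, 0.0, 1.0, 0.0])
q = np.array([0.1, 0.2, 0.6, 0.1])

# With a one-hot target, KL(P || Q) equals -log(q_true).
assert abs(kl_divergence(p, q) - (-np.log(0.6))) < 1e-6
```

This equivalence is why KL divergence and categorical cross-entropy differ only by the (constant) entropy of the target distribution, which is zero for one-hot targets.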
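The early-stopping criterion in the last bullet (stop after 10 epochs with no improvement in validation loss) can be sketched as a simple patience counter; this is a hypothetical helper, assuming "significant decrease" is taken as any decrease below the best value seen so far.

```python
def early_stop_epoch(val_losses, patience=10):
    """Return the 0-based epoch at which training would stop, or None.

    Training stops once the validation loss has not improved on its best
    value for `patience` consecutive epochs.
    """
    best = float("inf")
    since_improvement = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:
                return epoch
    return None  # ran to completion without triggering early stopping

# Loss plateaus after epoch 2, so patience runs out 10 epochs later.
losses = [1.0, 0.8, 0.7] + [0.7] * 12
assert early_stop_epoch(losses, patience=10) == 12
```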
3.7. Implementation Details
3.8. Statistics
4. Results
4.1. Convolutional Autoencoders (ConvAE)
4.2. Dense Neural Network (DNN)
4.3. Statistical Inferences
5. Discussion
- (I) the larger the latent space, the higher the reconstruction ability;
- (II) the smaller the window shift, the higher the reconstruction ability;
- (III) window length did not play an important role and did not influence the reconstruction ability;
- (IV) on average, the utility of all the latent spaces learned in each ConvAE outperformed that associated with the original topographic maps;
- (V) the best utility of the latent space was obtained when the input was of shape (8,8,5), with a window shift of 125 ms and a window length of at least 1 s.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Meaning
---|---
EEG | Electroencephalogram
CNN | Convolutional Neural Network
PCA | Principal Component Analysis
ICA | Independent Component Analysis
WL | Window Length
LS | Latent Space
WS | Window Shift
TPHM | Topology-Preserved Head Maps
ConvAE | Convolutional Autoencoder
SSIM | Structural Similarity Index Measure
MSE | Mean Square Error
NRMSE | Normalized Root-Mean-Square Error
PSNR | Peak Signal-to-Noise Ratio
DNN | Dense Neural Network
LDA | Linear Discriminant Analysis
CSP | Common Spatial Pattern
KNN | K-Nearest Neighbors
SVM | Support Vector Machine
RF | Random Forest
GMM | Gaussian Mixture Model
DEAP | A dataset for emotion analysis using EEG, physiological and video signals
EOG | Electrooculogram
FFT | Fast Fourier Transform
References
- Mars, R.B.; Sotiropoulos, S.N.; Passingham, R.E.; Sallet, J.; Verhagen, L.; Khrapitchev, A.A.; Sibson, N.; Jbabdi, S. Whole brain comparative anatomy using connectivity blueprints. eLife 2018, 7, e35237.
- Cohen, M.X. Analyzing Neural Time Series Data: Theory and Practice; MIT Press: Cambridge, MA, USA, 2014.
- Alçin, Ö.F.; Siuly, S.; Bajaj, V.; Guo, Y.; Şengu, A.; Zhang, Y. Multi-Category EEG Signal Classification Developing Time-Frequency Texture Features Based Fisher Vector Encoding Method. Neurocomputing 2016, 218, 251–258.
- Stober, S.; Sternin, A.; Owen, A.M.; Grahn, J.A. Deep Feature Learning for EEG Recordings. arXiv 2015, arXiv:1511.04306.
- Férat, V.; Seeber, M.; Michel, C.M.; Ros, T. Beyond broadband: Towards a spectral decomposition of electroencephalography microstates. Hum. Brain Mapp. 2022, 43, 3047–3061.
- Abdeljaber, O.; Avcı, O.; Kiranyaz, M.S.; Boashash, B.; Sodano, H.A.; Inman, D.J. 1-D CNNs for structural damage detection: Verification on a structural health monitoring benchmark data. Neurocomputing 2018, 275, 1308–1317.
- Abdi, H.; Williams, L.J. Principal component analysis. Wires Comput. Stat. 2010, 2, 433–459.
- Bro, R.; Smilde, A.K. Principal component analysis. Anal. Methods 2014, 6, 2812–2831.
- Acharya, U.R.; Sree, S.V.; Swapna, G.; Martis, R.J.; Suri, J.S. Automated EEG analysis of epilepsy: A review. Knowl. Based Syst. 2013, 45, 147–165.
- Oosugi, N.; Kitajo, K.; Hasegawa, N.; Nagasaka, Y.; Okanoya, K.; Fujii, N. A New Method for Quantifying the Performance of EEG Blind Source Separation Algorithms by Referencing a Simultaneously Recorded ECoG Signal. Neural Netw. 2017, 93, 1–6.
- Korats, G.; Cam, S.L.; Ranta, R.; Hamid, M.R. Applying ICA in EEG: Choice of the Window Length and of the Decorrelation Method. In Proceedings of the International Joint Conference on Biomedical Engineering Systems and Technologies—BIOSTEC, Vilamoura, Portugal, 1–4 February 2012.
- Brunner, C.; Naeem, M.; Leeb, R.; Graimann, B.; Pfurtscheller, G. Spatial Filtering and Selection of Optimized Components in Four Class Motor Imagery EEG Data Using Independent Components Analysis. Pattern Recogn. Lett. 2007, 28, 957–964.
- Xing, X.; Li, Z.; Xu, T.; Shu, L.; Hu, B.; Xu, X. SAE+LSTM: A New Framework for Emotion Recognition From Multi-Channel EEG. Front. Neurorobot. 2019, 13, 37.
- Zhang, S.; You, B.; Lang, X.; Zhou, Y.; An, F.; Dai, Y.; Liu, Y. Efficient Rejection of Artifacts for Short-Term Few-Channel EEG Based on Fast Adaptive Multidimensional Sub-Bands Blind Source Separation. IEEE Trans. Instrum. Meas. 2021, 70, 1–16.
- Hsu, S.H.; Mullen, T.; Jung, T.P.; Cauwenberghs, G. Real-Time Adaptive EEG Source Separation Using Online Recursive Independent Component Analysis. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 24, 1.
- You, S.D.; Li, Y.C. Predicting Viewer’s Preference for Music Videos Using EEG Dataset. In Proceedings of the 2020 IEEE International Conference on Consumer Electronics—Asia (ICCE-Asia), Seoul, Republic of Korea, 1–3 November 2020; pp. 1–2.
- Arabshahi, R.; Rouhani, M. A convolutional neural network and stacked autoencoders approach for motor imagery based brain-computer interface. In Proceedings of the 2020 10th International Conference on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 29–30 October 2020; pp. 295–300.
- Zhang, P.; Wang, X.; Zhang, W.; Chen, J. Learning Spatial–Spectral–Temporal EEG Features with Recurrent 3D Convolutional Neural Networks for Cross-Task Mental Workload Assessment. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 31–42.
- Yao, Y.; Plested, J.; Gedeon, T. Deep Feature Learning and Visualization for EEG Recording Using Autoencoders. In Proceedings of the 25th International Conference, ICONIP 2018, Siem Reap, Cambodia, 13–16 December 2018; Proceedings, Part VII.
- Gaur, P.; Gupta, H.; Chowdhury, A.; McCreadie, K.; Pachori, R.B.; Wang, H. A sliding window common spatial pattern for enhancing motor imagery classification in EEG-BCI. IEEE Trans. Instrum. Meas. 2021, 70, 1–9.
- Wilaiprasitporn, T.; Ditthapron, A.; Matchaparn, K.; Tongbuasirilai, T.; Banluesombatkul, N.; Chuangsuwanich, E. Affective EEG-based person identification using the deep learning approach. IEEE Trans. Cogn. Dev. Syst. 2019, 12, 486–496.
- Wang, X.; Wang, X.; Liu, W.; Chang, Z.; Kärkkäinen, T.J.; Cong, F. One dimensional convolutional neural networks for seizure onset detection using long-term scalp and intracranial EEG. Neurocomputing 2021, 459, 212–222.
- Huang, L.; Zhao, Y.; Zeng, Y.; Lin, Z. BHCR: RSVP target retrieval BCI framework coupling with CNN by a Bayesian method. Neurocomputing 2017, 238, 255–268.
- Qiu, Z.; Jin, J.; Lam, H.K.; Zhang, Y.; Wang, X.; Cichocki, A. Improved SFFS method for channel selection in motor imagery based BCI. Neurocomputing 2016, 207, 519–527.
- Sadatnejad, K.; Ghidary, S.S. Kernel learning over the manifold of symmetric positive definite matrices for dimensionality reduction in a BCI application. Neurocomputing 2016, 179, 152–160.
- Fei, Z.; Yang, E.; Li, D.D.U.; Butler, S.; Ijomah, W.; Li, X.; Zhou, H. Deep convolution network based emotion analysis towards mental health care. Neurocomputing 2020, 388, 212–227.
- Kurup, A.R.; Ajith, M.; Ramón, M.M. Semi-supervised facial expression recognition using reduced spatial features and Deep Belief Networks. Neurocomputing 2019, 367, 188–197.
- Xin Zhang, Y.; Chen, Y.; Gao, C. Deep unsupervised multi-modal fusion network for detecting driver distraction. Neurocomputing 2021, 421, 26–38.
- Yin, Z.; Zhao, M.; Zhang, W.; Wang, Y.; Wang, Y.; Zhang, J. Physiological-signal-based mental workload estimation via transfer dynamical autoencoders in a deep learning framework. Neurocomputing 2019, 347, 212–229.
- Ieracitano, C.; Mammone, N.; Bramanti, A.; Hussain, A.; Morabito, F.C. A Convolutional Neural Network approach for classification of dementia stages based on 2D-spectral representation of EEG recordings. Neurocomputing 2019, 323, 96–107.
- Su, R.; Liu, T.; Sun, C.; Jin, Q.; Jennane, R.; Wei, L. Fusing convolutional neural network features with hand-crafted features for osteoporosis diagnoses. Neurocomputing 2020, 385, 300–309.
- Chambon, S.; Galtier, M.N.; Arnal, P.J.; Wainrib, G.; Gramfort, A. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 758–769.
- Lee, S.B.; Kim, H.J.; Kim, H.; Jeong, J.H.; Lee, S.W.; Kim, D.J. Comparative analysis of features extracted from EEG spatial, spectral and temporal domains for binary and multiclass motor imagery classification. Inf. Sci. 2019, 502, 190–200.
- Li, Z.; Wang, J.; Jia, Z.; Lin, Y. Learning Space-Time-Frequency Representation with Two-Stream Attention Based 3D Network for Motor Imagery Classification. In Proceedings of the 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, Italy, 17–20 November 2020; pp. 1124–1129.
- Ang, K.K.; Chin, Z.Y.; Wang, C.; Guan, C.; Zhang, H. Filter Bank Common Spatial Pattern Algorithm on BCI Competition IV Datasets 2a and 2b. Front. Neurosci. 2012, 6.
- Subasi, A.; Gursoy, M.I. EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Syst. Appl. 2010, 37, 8659–8666.
- Jirayucharoensak, S.; Pan-Ngum, S.; Israsena, P. EEG-Based Emotion Recognition Using Deep Learning Network with Principal Component Based Covariate Shift Adaptation. Sci. World J. 2014, 2014, 627892.
- Viola, F.C.; Debener, S.; Thorne, J.; Schneider, T.R. Using ICA for the analysis of multi-channel EEG data. In Simultaneous EEG and fMRI: Recording, Analysis, and Application; Oxford Academic: New York, NY, USA, 2010; pp. 121–133.
- Lemm, S.; Blankertz, B.; Curio, G.; Muller, K.R. Spatio-spectral filters for improving the classification of single trial EEG. IEEE Trans. Biomed. Eng. 2005, 52, 1541–1548.
- Wu, W.; Chen, Z.; Gao, X.; Li, Y.; Brown, E.N.; Gao, S. Probabilistic common spatial patterns for multichannel EEG analysis. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 37, 639–653.
- Qi, Y.; Luo, F.; Zhang, W.; Wang, Y.; Chang, J.; Woodward, D.; Chen, A.; Han, J. Sliding-window technique for the analysis of cerebral evoked potentials. Health Sci. 2003, 35, 231–235.
- Alickovic, E.; Kevric, J.; Subasi, A. Performance evaluation of empirical mode decomposition, discrete wavelet transform, and wavelet packed decomposition for automated epileptic seizure detection and prediction. Biomed. Signal Process. Control. 2018, 39, 94–102.
- Atkinson, J.; Campos, D. Improving BCI-based emotion recognition by combining EEG feature selection and kernel classifiers. Expert Syst. Appl. 2016, 47, 35–41.
- Edelman, B.; Baxter, B.; He, B. EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans. Biomed. Eng. 2016, 63, 4–14.
- Faust, O.; Acharya, U.R.; Adeli, H.; Adeli, A. Wavelet-based EEG processing for computer-aided seizure detection and epilepsy diagnosis. Seizure 2015, 26, 56–64.
- Katsigiannis, S.; Ramzan, N. DREAMER: A Database for Emotion Recognition Through EEG and ECG Signals From Wireless Low-cost Off-the-Shelf Devices. IEEE J. Biomed. Health Inform. 2018, 22, 98–107.
- Dargan, S.; Kumar, M.; Ayyagari, M.R.; Kumar, G. A survey of deep learning and its applications: A new paradigm to machine learning. Arch. Comput. Methods Eng. 2020, 27, 1071–1092.
- Zhang, L.; Tan, J.; Han, D.; Zhu, H. From machine learning to deep learning: Progress in machine intelligence for rational drug discovery. Drug Discov. Today 2017, 22, 1680–1685.
- Bank, D.; Koenigstein, N.; Giryes, R. Autoencoders. arXiv 2020, arXiv:2003.05991.
- Li, J.; Struzik, Z.R.; Zhang, L.; Cichocki, A. Feature learning from incomplete EEG with denoising autoencoder. arXiv 2015, arXiv:1410.0818.
- Koelstra, S.; Muhl, C.; Soleymani, M.; Jong-Seok, L.; Yazdani, A.; Ebrahimi, T.; Pun, T.; Nijholt, A.; Patras, I. DEAP: A Database for Emotion Analysis; Using Physiological Signals. IEEE Trans. Affect. Comput. 2012, 3, 18–31.
- Ng, A. Sparse autoencoder. CS294A Lect. Notes 2011, 72, 1–19.
- Ahlawat, S.; Choudhary, A.; Nayyar, A.; Singh, S.; Yoon, B. Improved handwritten digit recognition using convolutional neural networks (CNN). Sensors 2020, 20, 3344.
- Sara, U.; Akter, M.; Uddin, M.S. Image quality assessment through FSIM, SSIM, MSE and PSNR—A comparative study. J. Comput. Commun. 2019, 7, 8–18.
- Daoud, H.; Bayoumi, M. Deep Learning Approach for Epileptic Focus Localization. IEEE Trans. Biomed. Circuits Syst. 2020, 14, 209–220.
- Abdelhameed, A.M.; Daoud, H.G.; Bayoumi, M. Epileptic Seizure Detection using Deep Convolutional Autoencoder. In Proceedings of the 2018 IEEE International Workshop on Signal Processing Systems (SiPS), Cape Town, South Africa, 21–24 October 2018; pp. 223–228.
- Hussain, Z.; Gimenez, F.; Yi, D.; Rubin, D. Differential data augmentation techniques for medical imaging classification tasks. In Proceedings of the AMIA Annual Symposium, Washington, DC, USA, 4 November 2017; Volume 2017, p. 979.
- Ahmed, T.; Longo, L. Examining the Size of the Latent Space of Convolutional Variational Autoencoders Trained With Spectral Topographic Maps of EEG Frequency Bands. IEEE Access 2022, 10, 107575–107586.
- Vilone, G.; Longo, L. Notions of explainability and evaluation approaches for explainable artificial intelligence. Inf. Fusion 2021, 76, 89–106.
- Vilone, G.; Longo, L. Classification of Explainable Artificial Intelligence Methods through Their Output Formats. Mach. Learn. Knowl. Extr. 2021, 3, 32.
Window Length (s) | Window Shift (ms) | Amount (in 1 Video) | Amount (Total)
---|---|---|---
0.5 | 125 | 477 | 19,080 |
0.5 | 250 | 239 | 9560 |
0.5 | 500 | 120 | 4800 |
1.0 | 125 | 473 | 18,920 |
1.0 | 250 | 237 | 9480 |
1.0 | 500 | 119 | 4760 |
1.5 | 125 | 469 | 18,760 |
1.5 | 250 | 235 | 9400 |
1.5 | 500 | 118 | 4720 |
2.0 | 125 | 465 | 18,600 |
2.0 | 250 | 233 | 9320 |
2.0 | 500 | 117 | 4680 |
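The window counts in the table above are consistent with the standard sliding-window formula n = ⌊(T − WL)/WS⌋ + 1, with T = 60 s of EEG per video; the totals column equals the per-video count times 40 trials (an inference from the table ratios, not stated explicitly here). A minimal sketch:

```python
import math

def n_windows(duration_s, wl_s, ws_s):
    """Number of windows of length wl_s, shifted by ws_s, over duration_s seconds."""
    return math.floor((duration_s - wl_s) / ws_s) + 1

# Reproduce rows of the table (60 s per video, 40 videos per participant).
assert n_windows(60, 0.5, 0.125) == 477
assert n_windows(60, 1.0, 0.125) == 473
assert n_windows(60, 2.0, 0.500) == 117
assert n_windows(60, 0.5, 0.125) * 40 == 19080
```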
Model configuration (WL, WS, LS) and reconstruction metric scores:

WL (s) | WS (ms) | LS | SSIM | MSE | NRMSE | PSNR
---|---|---|---|---|---|---
0.5 | 125 | (2,2,5) | 0.9985 | 0.0000069 | 0.0361 | 55.75 |
0.5 | 125 | (4,4,5) | 0.9992 | 0.0000032 | 0.0279 | 58.14 |
0.5 | 125 | (8,8,5) | 0.9996 | 0.0000015 | 0.0206 | 60.78 |
0.5 | 125 | (16,16,5) | 0.9997 | 0.0000009 | 0.0158 | 63.32 |
0.5 | 250 | (2,2,5) | 0.9980 | 0.0000095 | 0.0461 | 53.56 |
0.5 | 250 | (4,4,5) | 0.9990 | 0.0000044 | 0.0345 | 56.27 |
0.5 | 250 | (8,8,5) | 0.9991 | 0.0000039 | 0.0312 | 57.48 |
0.5 | 250 | (16,16,5) | 0.9994 | 0.0000017 | 0.0220 | 60.60 |
0.5 | 500 | (2,2,5) | 0.9975 | 0.0000115 | 0.0510 | 52.89 |
0.5 | 500 | (4,4,5) | 0.9987 | 0.0000049 | 0.0369 | 55.42 |
0.5 | 500 | (8,8,5) | 0.9992 | 0.0000034 | 0.0303 | 57.25 |
0.5 | 500 | (16,16,5) | 0.9993 | 0.0000028 | 0.0257 | 59.20 |
1 | 125 | (2,2,5) | 0.9980 | 0.0000104 | 0.0384 | 54.27 |
1 | 125 | (4,4,5) | 0.9993 | 0.0000033 | 0.0251 | 57.98 |
1 | 125 | (8,8,5) | 0.9997 | 0.0000013 | 0.0169 | 61.23 |
1 | 125 | (16,16,5) | 0.9997 | 0.0000016 | 0.0182 | 61.39 |
1 | 250 | (2,2,5) | 0.9976 | 0.0000129 | 0.0442 | 52.97 |
1 | 250 | (4,4,5) | 0.9986 | 0.0000064 | 0.0360 | 55.35 |
1 | 250 | (8,8,5) | 0.9993 | 0.0000042 | 0.0276 | 57.17 |
1 | 250 | (16,16,5) | 0.9996 | 0.0000022 | 0.0211 | 59.36 |
1 | 500 | (2,2,5) | 0.9971 | 0.0000152 | 0.0514 | 51.74 |
1 | 500 | (4,4,5) | 0.9985 | 0.0000075 | 0.0391 | 53.92 |
1 | 500 | (8,8,5) | 0.9991 | 0.0000043 | 0.0308 | 56.31 |
1 | 500 | (16,16,5) | 0.9989 | 0.0000067 | 0.0326 | 56.47 |
1.5 | 125 | (2,2,5) | 0.9978 | 0.0000128 | 0.0385 | 53.36 |
1.5 | 125 | (4,4,5) | 0.9992 | 0.0000035 | 0.0241 | 57.09 |
1.5 | 125 | (8,8,5) | 0.9996 | 0.0000017 | 0.0169 | 60.48 |
1.5 | 125 | (16,16,5) | 0.9997 | 0.0000013 | 0.0149 | 61.67 |
1.5 | 250 | (2,2,5) | 0.9972 | 0.0000163 | 0.0455 | 52.01 |
1.5 | 250 | (4,4,5) | 0.9987 | 0.0000065 | 0.0335 | 54.38 |
1.5 | 250 | (8,8,5) | 0.9994 | 0.0000032 | 0.0228 | 57.99 |
1.5 | 250 | (16,16,5) | 0.9995 | 0.0000022 | 0.0200 | 59.28 |
1.5 | 500 | (2,2,5) | 0.9963 | 0.0000193 | 0.0542 | 50.66 |
1.5 | 500 | (4,4,5) | 0.9980 | 0.0000100 | 0.0412 | 52.69 |
1.5 | 500 | (8,8,5) | 0.9991 | 0.0000042 | 0.0276 | 56.16 |
1.5 | 500 | (16,16,5) | 0.9992 | 0.0000038 | 0.0257 | 56.95 |
2 | 125 | (2,2,5) | 0.9984 | 0.0000085 | 0.0357 | 55.16 |
2 | 125 | (4,4,5) | 0.9992 | 0.0000033 | 0.0251 | 58.27 |
2 | 125 | (8,8,5) | 0.9997 | 0.0000014 | 0.0164 | 61.59 |
2 | 125 | (16,16,5) | 0.9998 | 0.0000010 | 0.0146 | 62.57 |
2 | 250 | (2,2,5) | 0.9979 | 0.0000101 | 0.0427 | 53.52 |
2 | 250 | (4,4,5) | 0.9988 | 0.0000047 | 0.0321 | 56.07 |
2 | 250 | (8,8,5) | 0.9994 | 0.0000023 | 0.0234 | 58.69 |
2 | 250 | (16,16,5) | 0.9991 | 0.0000037 | 0.0241 | 59.60 |
2 | 500 | (2,2,5) | 0.9973 | 0.0000127 | 0.0511 | 51.89 |
2 | 500 | (4,4,5) | 0.9980 | 0.0000078 | 0.0415 | 53.86 |
2 | 500 | (8,8,5) | 0.9987 | 0.0000051 | 0.0330 | 55.83 |
2 | 500 | (16,16,5) | 0.9990 | 0.0000033 | 0.0265 | 57.98 |
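The metrics in the table above are standard image-quality measures. A minimal NumPy sketch of MSE and PSNR for images scaled to [0,1] (an illustration only; the paper's exact per-map averaging may differ, so the table values are not reproduced here):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((np.asarray(x, float) - np.asarray(y, float)) ** 2))

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means better reconstruction."""
    m = mse(x, y)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

# A uniform 0.01 error on a (32,32,5) head-map gives MSE = 1e-4 and PSNR = 40 dB.
a = np.zeros((32, 32, 5))
b = np.full((32, 32, 5), 0.01)
assert abs(mse(a, b) - 1e-4) < 1e-12
assert abs(psnr(a, b) - 40.0) < 1e-9
```

The inverse-log relationship explains why the PSNR column rises as the MSE column falls across the table.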
WL (s) | LS | Avg. Acc. (LS) | TPHM | Avg. Acc. (TPHM)
---|---|---|---|---
0.5 | (2,2,5) | 29.9% | (32,32,5) | 22.4%
0.5 | (4,4,5) | 41.5% | |
0.5 | (8,8,5) | 49.9% | |
0.5 | (16,16,5) | 50.6% | |
1 | (2,2,5) | 59.0% | (32,32,5) | 44.8%
1 | (4,4,5) | 72.0% | |
1 | (8,8,5) | 79.1% | |
1 | (16,16,5) | 79.9% | |
1.5 | (2,2,5) | 69.1% | (32,32,5) | 49.0%
1.5 | (4,4,5) | 79.8% | |
1.5 | (8,8,5) | 83.9% | |
1.5 | (16,16,5) | 84.2% | |
2 | (2,2,5) | 73.3% | (32,32,5) | 54.3%
2 | (4,4,5) | 82.4% | |
2 | (8,8,5) | 86.7% | |
2 | (16,16,5) | 86.6% | |
LS | TPHM | Acc. p (WS 125 ms) | Acc. p (WS 250 ms) | Acc. p (WS 500 ms) | F1 p (WS 125 ms) | F1 p (WS 250 ms) | F1 p (WS 500 ms)
---|---|---|---|---|---|---|---
WL 0.5 s | | | | | | |
(2,2,5) | (32,32,5) | 0.039 * | 0.839 | 0.999 | 2.637 × 10 *** | 0.064 | 0.683 |
(4,4,5) | (32,32,5) | 5.076 × 10 *** | 0.017 * | 0.982 | 1.626 × 10 *** | 9.219 × 10 *** | 0.051 |
(8,8,5) | (32,32,5) | 4.787 × 10 *** | 1.287 × 10 *** | 0.102 | 5.406 × 10 *** | 2.290 × 10 *** | 0.0001 *** |
(16,16,5) | (32,32,5) | 5.264 × 10 *** | 2.103 × 10 *** | 0.009 ** | 6.301 × 10 *** | 6.863 × 10 *** | 2.355 × 10 *** |
WL 1 s | | | | | | |
(2,2,5) | (32,32,5) | 0.001 ** | 0.492 | 0.994 | 0.0001 *** | 0.071 | 0.752 |
(4,4,5) | (32,32,5) | 5.533 × 10 *** | 0.004 ** | 0.537 | 2.614 × 10 *** | 0.0003 *** | 0.035 * |
(8,8,5) | (32,32,5) | 5.264 × 10 *** | 3.281 × 10 *** | 0.0005 *** | 1.246 × 10 *** | 4.817 × 10 *** | 2.096 × 10 *** |
(16,16,5) | (32,32,5) | 4.352 × 10 *** | 4.787 × 10 *** | 4.120 × 10 *** | 8.376 × 10 *** | 1.226 × 10 *** | 2.591 × 10 *** |
WL 1.5 s | | | | | | |
(2,2,5) | (32,32,5) | 0.0004 *** | 0.239 | 0.969 | 4.675 × 10 *** | 0.112 | 0.322 |
(4,4,5) | (32,32,5) | 1.112 × 10 *** | 0.0002 *** | 0.281 | 8.254 × 10 *** | 0.0001 *** | 0.003 ** |
(8,8,5) | (32,32,5) | 3.954 × 10 *** | 5.533 × 10 *** | 0.0003 *** | 8.281 × 10 *** | 3.642 × 10 *** | 2.555 × 10 *** |
(16,16,5) | (32,32,5) | 4.352 × 10 *** | 1.219 × 10 *** | 3.392 × 10 *** | 8.445 × 10 *** | 1.797 × 10 *** | 1.929 × 10 *** |
WL 2 s | | | | | | |
(2,2,5) | (32,32,5) | 0.001 ** | 0.043 * | 0.631 | 0.0002 *** | 0.010 * | 0.050 |
(4,4,5) | (32,32,5) | 3.582 × 10 *** | 0.001 ** | 0.139 | 4.083 × 10 *** | 0.0006 *** | 0.008 ** |
(8,8,5) | (32,32,5) | 6.443 × 10 *** | 1.756 × 10 *** | 0.001 ** | 1.211 × 10 *** | 1.843 × 10 *** | 0.0001 *** |
(16,16,5) | (32,32,5) | 3.954 × 10 *** | 5.264 × 10 *** | 0.0001 *** | 1.241 × 10 *** | 1.226 × 10 *** | 2.363 × 10 *** |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chikkankod, A.V.; Longo, L. On the Dimensionality and Utility of Convolutional Autoencoder’s Latent Space Trained with Topology-Preserving Spectral EEG Head-Maps. Mach. Learn. Knowl. Extr. 2022, 4, 1042-1064. https://doi.org/10.3390/make4040053