Abstract
The recently proposed CapsNet has attracted the attention of many researchers. It is a potential alternative to convolutional neural networks (CNNs) and achieves a significant increase in performance on simple datasets such as MNIST. However, CapsNet performs poorly on more complex datasets such as CIFAR-10. To address this problem, we focus on improving the original CapsNet in both its network structure and its dynamic routing mechanism. We propose a new CapsNet architecture aimed at complex data, called Capsule Network based on Deep Dynamic Routing Mechanism (DDRM-CapsNet). To extract better features, we increase the number of convolutional layers before the capsule layer in the encoder. We also improve the dynamic routing mechanism of the original CapsNet by expanding it into two stages and increasing the dimensionality of the final output vector. To verify the efficacy of the proposed network on complex data, we conduct experiments with a single model, without any ensemble methods or data augmentation, on five real-world complex datasets. The experimental results demonstrate that our method achieves better accuracy than the baseline and also improves reconstruction performance while using the same decoder structure as the original CapsNet.
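Since the abstract only summarizes the architecture, the following is a minimal, illustrative PyTorch sketch (not the authors' code) of the two ideas it mentions: a deeper convolutional stem before the primary-capsule layer, and routing-by-agreement applied in two successive capsule layers with a higher-dimensional final capsule. All layer widths, capsule dimensions, routing iteration counts, and the assumed 32×32 RGB input are placeholders chosen for illustration, not the paper's settings.

```python
# Illustrative sketch only: layer widths, capsule dimensions (8 -> 16 -> 32),
# and the number of routing iterations are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Squashing non-linearity from the original CapsNet."""
    norm2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

def dynamic_routing(u_hat, iters=3):
    """Routing-by-agreement; u_hat has shape (B, n_in, n_out, d_out)."""
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)        # logits (B, n_in, n_out)
    for _ in range(iters):
        c = F.softmax(b, dim=2)                                   # coupling coefficients
        v = squash((c.unsqueeze(-1) * u_hat).sum(dim=1))          # output capsules (B, n_out, d_out)
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)              # agreement update
    return v

class CapsLayer(nn.Module):
    """One fully connected capsule layer with its own routing stage."""
    def __init__(self, n_in, d_in, n_out, d_out, iters=3):
        super().__init__()
        self.W = nn.Parameter(0.01 * torch.randn(n_in, n_out, d_in, d_out))
        self.iters = iters

    def forward(self, u):                                         # u: (B, n_in, d_in)
        u_hat = torch.einsum('bid,iodk->biok', u, self.W)         # prediction vectors
        return dynamic_routing(u_hat, self.iters)

class DDRMCapsNetSketch(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Deeper convolutional stem than the original single conv layer (assumed sizes).
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2), nn.ReLU(),
        )
        self.primary = nn.Conv2d(256, 32 * 8, 9, stride=2)        # 32 primary capsule maps, 8-D each
        # Two routing stages instead of one; the final capsules are higher-dimensional (32-D here).
        self.caps1 = CapsLayer(n_in=512, d_in=8, n_out=32, d_out=16)   # 512 = 4*4 positions * 32 maps for 32x32 input
        self.caps2 = CapsLayer(n_in=32, d_in=16, n_out=n_classes, d_out=32)

    def forward(self, x):
        h = self.primary(self.stem(x))                            # (B, 256, 4, 4)
        u = squash(h.permute(0, 2, 3, 1).reshape(h.size(0), -1, 8))   # primary capsules (B, 512, 8)
        v = self.caps2(self.caps1(u))                             # two-stage routing (B, n_classes, 32)
        return v.norm(dim=-1)                                     # capsule lengths used as class scores
```

As a quick check of the shapes, `DDRMCapsNetSketch()(torch.randn(2, 3, 32, 32))` returns a `(2, 10)` tensor of capsule lengths, one score per class.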
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Liu, J.-w., et al. (2019). DDRM-CapsNet: Capsule Network Based on Deep Dynamic Routing Mechanism for Complex Data. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation. ICANN 2019. Lecture Notes in Computer Science, vol. 11727. Springer, Cham. https://doi.org/10.1007/978-3-030-30487-4_15
DOI: https://doi.org/10.1007/978-3-030-30487-4_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30486-7
Online ISBN: 978-3-030-30487-4
eBook Packages: Computer Science (R0)