Abstract
This paper uses datasets from diverse domains to analyze the impact of image complexity and diversity on transfer learning in deep neural networks. Because labeled, high-quality instances remain scarce in many domains, it is often necessary to reuse knowledge acquired on related problems by transferring learned parameters to improve classifier performance. We performed a statistical analysis over several experiments in which convolutional neural networks (LeNet-5, AlexNet, VGG-11 and VGG-16) were trained and then transferred, layer by layer, to different target tasks. We show that when working with complex, low-quality images and small datasets, fine-tuning transferred features learned from a low-complexity source dataset gives the best results.
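To make the layer-by-layer transfer protocol concrete, the sketch below shows one way such an experiment could be set up in PyTorch/torchvision: the first n feature layers of a source-trained network are copied into a freshly initialized target network and either frozen or fine-tuned. This is a minimal illustration under assumed tooling (torchvision's VGG-11, a hypothetical `build_target_model` helper, and placeholder hyperparameters), not the authors' exact experimental code.

```python
# Minimal sketch of layer-by-layer transfer with optional fine-tuning.
# Assumes PyTorch + torchvision; not the authors' actual implementation.
import torch
import torch.nn as nn
from torchvision import models


def build_target_model(source_model, n_transfer_layers, num_classes, fine_tune=True):
    """Copy the first n_transfer_layers of the source feature extractor into a
    fresh target network; the remaining layers keep their random initialization
    and are trained from scratch on the target task."""
    # Hypothetical choice of target architecture (VGG-11, as in the paper's setup).
    target = models.vgg11(weights=None, num_classes=num_classes)
    src_layers = list(source_model.features.children())
    tgt_layers = list(target.features.children())
    for i in range(min(n_transfer_layers, len(src_layers))):
        if isinstance(src_layers[i], nn.Conv2d):
            # Transfer the learned weights of this convolutional layer.
            tgt_layers[i].load_state_dict(src_layers[i].state_dict())
            # Freeze the copied layer unless fine-tuning is requested.
            for p in tgt_layers[i].parameters():
                p.requires_grad = fine_tune
    return target


# Example usage: pretend `source` was trained on the (low-complexity) source
# dataset, then transfer its first three feature layers and fine-tune them.
source = models.vgg11(weights=None, num_classes=10)
model = build_target_model(source, n_transfer_layers=3, num_classes=10, fine_tune=True)
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)
```

In this kind of setup, varying `n_transfer_layers` and toggling `fine_tune` reproduces the two regimes typically compared in layer-wise transfer studies: frozen transferred features versus transferred features that are further adapted to the target data.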
About this paper
Cite this paper
Wanderley, M.D.d.S., Bueno, L.d.A.e., Zanchettin, C., Oliveira, A.L.I. (2017). The Impact of Dataset Complexity on Transfer Learning over Convolutional Neural Networks. In: Lintas, A., Rovetta, S., Verschure, P., Villa, A. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2017. Lecture Notes in Computer Science, vol. 10614. Springer, Cham. https://doi.org/10.1007/978-3-319-68612-7_66