Abstract
Traditional medical visual question answering (VQA) approaches require large amounts of labeled training data, yet still fail to jointly exploit image and text information. To address this issue, we propose a novel framework called Knowledge Embedded Meta-Learning. In particular, we present a deep relation network that captures and memorizes the relations among different samples. First, we introduce an embedding approach to learn fused image-text feature representations. Then, we describe the construction of a knowledge graph that relates images with text and serves as guidance for our meta-learner. We design a knowledge embedding mechanism to incorporate this knowledge representation into our network. The final answer is derived from the relation network, which learns to compare the features of samples. Experimental results demonstrate that the proposed approach achieves significantly higher performance than other state-of-the-art methods.
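To make the two core ideas in the abstract concrete, the sketch below illustrates (1) fusing an image feature and a question feature into a joint embedding and (2) a relation module that scores a query sample against support samples, in the style of metric-based meta-learning. This is a minimal illustration under assumed dimensions and layer sizes, not the authors' implementation, and it omits the knowledge-graph guidance described in the paper.

```python
import torch
import torch.nn as nn

class FusionEmbedding(nn.Module):
    """Fuses an image feature and a question feature into one joint embedding."""
    def __init__(self, img_dim=2048, txt_dim=768, joint_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)
        self.txt_proj = nn.Linear(txt_dim, joint_dim)

    def forward(self, img_feat, txt_feat):
        # Element-wise product of projected features is one common fusion choice (an assumption here).
        return torch.relu(self.img_proj(img_feat)) * torch.relu(self.txt_proj(txt_feat))

class RelationModule(nn.Module):
    """Predicts a relation score for a (query, support) embedding pair."""
    def __init__(self, joint_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * joint_dim, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, query_emb, support_emb):
        pair = torch.cat([query_emb, support_emb], dim=-1)
        return self.net(pair)  # score in [0, 1]: how related the two samples are

# Usage: compare one fused query against a small support set and pick the best match.
fuse, relate = FusionEmbedding(), RelationModule()
query = fuse(torch.randn(1, 2048), torch.randn(1, 768))      # fused query sample
support = fuse(torch.randn(5, 2048), torch.randn(5, 768))    # 5 fused support samples
scores = relate(query.expand(5, -1), support)                # relation score per support sample
predicted = scores.argmax(dim=0)                             # index of the best-matching support sample
```

In this reading, "learning to compare" means the relation module is trained on episodes so that query-support pairs from the same answer class receive high scores; how the knowledge embedding conditions this comparison is specified in the paper itself.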
Acknowledgment
This work is supported in part by the Key Research and Development Program of Guangzhou (202007050002), in part by the National Natural Science Foundation of China (61806198, 61533019, U1811463), and in part by the National Key Research and Development Program of China (No. 2018AAA0101502).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Zheng, W., Yan, L., Wang, F.Y., Gou, C. (2020). Learning from the Guidance: Knowledge Embedded Meta-learning for Medical Visual Question Answering. In: Yang, H., Pasupa, K., Leung, A.C.S., Kwok, J.T., Chan, J.H., King, I. (eds.) Neural Information Processing. ICONIP 2020. Communications in Computer and Information Science, vol. 1332. Springer, Cham. https://doi.org/10.1007/978-3-030-63820-7_22
DOI: https://doi.org/10.1007/978-3-030-63820-7_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-63819-1
Online ISBN: 978-3-030-63820-7