
Semantic-aware entity alignment for low resource language knowledge graph

  • Research Article
Frontiers of Computer Science

Abstract

Entity alignment (EA) is an important technique that aims to identify entities in two different knowledge graphs (KGs) that refer to the same real-world object. Current methods typically learn entity embeddings for EA from the structure of the KGs. Most EA models are designed for rich-resource languages and require resources such as parallel corpora and pre-trained language models. Low-resource language KGs, however, have received less attention, and current models perform poorly on them. Recently, researchers have fused relation information and attributes into entity representations to enhance alignment performance, but relation semantics are often ignored. To address these issues, we propose a novel Semantic-aware Graph Neural Network (SGNN) for entity alignment. First, we generate pseudo sentences from the relation triples and produce their representations using pre-trained models. Second, our approach exploits semantic information from the connected relations via a graph neural network, capturing richer feature information from the KGs. Experimental results on three low-resource languages demonstrate that the proposed SGNN outperforms state-of-the-art alignment methods on three newly proposed datasets and three public datasets.
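The two steps summarized in the abstract (linearizing relation triples into pseudo sentences, then aggregating their encodings over an entity's neighborhood) can be sketched as follows. This is a minimal illustrative sketch only, not the authors' implementation: the function names are hypothetical, the hash-based `toy_encode` merely stands in for a real multilingual pre-trained encoder, and the mean-pooling step stands in for the paper's graph neural network.

```python
# Hypothetical sketch of the pseudo-sentence idea: each relation triple
# (head, relation, tail) is linearized into a short sentence, encoded, and
# an entity's representation is aggregated from the triples that mention it.
import hashlib


def triple_to_sentence(head, relation, tail):
    """Linearize a relation triple into a pseudo sentence."""
    return f"{head} {relation} {tail}"


def toy_encode(sentence, dim=8):
    """Deterministic stand-in for a pre-trained sentence encoder.

    A real system would use a multilingual PLM here; this hash-based
    version only produces a fixed-length vector for illustration.
    """
    digest = hashlib.sha256(sentence.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dim]]


def aggregate_neighbors(entity, triples, dim=8):
    """One GNN-style pass: mean-pool the pseudo-sentence embeddings of
    all triples in which the entity appears as head or tail."""
    vecs = [toy_encode(triple_to_sentence(*t), dim)
            for t in triples if entity in (t[0], t[2])]
    if not vecs:
        return [0.0] * dim
    return [sum(col) / len(vecs) for col in zip(*vecs)]


triples = [("Paris", "capital_of", "France"),
           ("France", "member_of", "EU")]
vec = aggregate_neighbors("France", triples)
print(len(vec))  # 8
```

In the full model, the aggregated neighborhood vectors from both KGs would then be compared (e.g., by nearest-neighbor search in the shared embedding space) to produce alignment candidates.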



Acknowledgements

The work was supported by the National Natural Science Foundation of China (Nos. U21B2027, 61972186, 61732005), and Major Science and Technology Projects of Yunnan Province (Nos. 202202AD080003, 202203AA080004).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Zhengtao Yu.

Additional information

Junfei Tang received the BS degree from Kunming University of Science and Technology, China in 2018. He is currently pursuing the MS degree in pattern recognition and intelligent systems at Kunming University of Science and Technology, China. His current research interests include knowledge graphs and entity alignment.

Ran Song is currently working toward the PhD degree in the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, China. His main research interests include natural language processing and knowledge graphs.

Yuxin Huang received his PhD degree from the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, China. His main research interests include natural language processing, machine learning, and machine translation.

Shengxiang Gao is an MS tutor in the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, China. Her main research interests include natural language processing, machine learning, and machine translation.

Zhengtao Yu is a Professor of the Faculty of Information Engineering and Automation, Kunming University of Science and Technology, China. His main research interests include natural language processing, machine learning and machine translation.


About this article


Cite this article

Tang, J., Song, R., Huang, Y. et al. Semantic-aware entity alignment for low resource language knowledge graph. Front. Comput. Sci. 18, 184319 (2024). https://doi.org/10.1007/s11704-023-2542-x
