Abstract
Transcreators extract crucial information from text written in one language for a specific media type and translate that text into a different language and a different media type. Multiple factors drive changes in narrative structure across languages and media platforms. AI-based approaches can extract critical information elements from text and augment human analysis and insight to facilitate transcreation. In this study, we apply self-supervised learning and active few-shot learning based on generative pretrained transformer models (e.g., GPT-N) to perform information extraction. We also use Wikifier (https://wikifier.org/) to annotate the text with links to relevant Wikipedia concepts, providing human users with additional explanations. Performance statistics were collected on four news stories. The results show that the self-supervised approach is error-prone because the GPT-3 pretrained language model can generate synthetic information based on patterns learned from its huge training corpus rather than reflecting only the relevant facts in the prompted text. In contrast, active few-shot learning worked very well, reaching 87.5% accuracy on the experimental examples. Wikifier also provides a large number of correct and useful links to named entities such as person names, locations, organizations, and concepts. Transcreators can leverage these AI tools to perform their tasks more effectively.
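The two components summarized above can be illustrated with short sketches. First, a minimal sketch of few-shot information extraction with a GPT-3-family completion model, assuming the OpenAI completions REST endpoint; the prompt template, the WHO/WHERE/WHEN fields, the example passage, and the model name are illustrative assumptions, not the authors' exact setup.

```python
# Minimal sketch: few-shot information extraction with a GPT-3-style completion API.
# The prompt format, field names, and model are assumptions for illustration only.
import os
import requests

FEW_SHOT_PROMPT = """Extract the WHO, WHERE, and WHEN elements from each news passage.

Passage: The mayor of Lyon opened the new tram line on Monday.
WHO: the mayor of Lyon
WHERE: Lyon
WHEN: Monday

Passage: {passage}
WHO:"""

def extract_elements(passage: str) -> str:
    """Send a few-shot prompt to the completions endpoint and return the raw
    completion text (the model continues the pattern after 'WHO:')."""
    response = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "text-davinci-002",   # any GPT-3-family completion model
            "prompt": FEW_SHOT_PROMPT.format(passage=passage),
            "max_tokens": 100,
            "temperature": 0.0,            # deterministic output for extraction
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["text"].strip()
```

Second, a minimal sketch of annotating text with links to Wikipedia concepts through the Wikifier service (https://wikifier.org/), based on its public annotate-article endpoint; the user key and the threshold value are placeholders.

```python
# Minimal sketch: annotate text with Wikipedia concept links via Wikifier.
import requests

def wikify(text: str, lang: str = "en", user_key: str = "YOUR_WIKIFIER_KEY"):
    """Return (title, url) pairs for Wikipedia concepts found in the text."""
    response = requests.post(
        "https://www.wikifier.org/annotate-article",
        data={
            "userKey": user_key,
            "text": text,
            "lang": lang,
            "pageRankSqThreshold": "0.8",   # prune low-confidence annotations
        },
        timeout=60,
    )
    response.raise_for_status()
    # Each annotation links a mention in the text to a Wikipedia concept.
    return [(a["title"], a["url"]) for a in response.json().get("annotations", [])]
```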
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Qian, M., Zhu, E. (2022). Extracting and Re-mapping Narrative Text Structure Elements Between Languages Using Self-supervised and Active Few-Shot Learning. In: Degen, H., Ntoa, S. (eds) Artificial Intelligence in HCI. HCII 2022. Lecture Notes in Computer Science, vol. 13336. Springer, Cham. https://doi.org/10.1007/978-3-031-05643-7_38
DOI: https://doi.org/10.1007/978-3-031-05643-7_38
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-05642-0
Online ISBN: 978-3-031-05643-7
eBook Packages: Computer Science, Computer Science (R0)