Abstract
Transfer learning through language modeling has achieved state-of-the-art results in several natural language processing tasks, such as named entity recognition, question answering, and sentiment analysis. Despite these advances, however, some tasks still require more task-specific solutions. This paper explores different approaches to enhance the performance of Named Entity Recognition (NER) in transformer-based models pre-trained for language modeling. We investigate model soups and domain adaptation methods for Portuguese entity recognition, providing insights into the effectiveness of these methods on NER performance and contributing to the development of more accurate models. We also evaluate NER performance in few- and zero-shot learning settings with a causal language model. In particular, we evaluate diverse BERT-based models trained on different datasets covering both general and specific domains. Our results show significant improvements from model soup techniques and from in-domain pretraining compared to within-task pretraining.
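As background for the weight-averaging idea evaluated here, the following is a minimal sketch of a uniform model soup in the spirit of Wortsman et al. [16], assuming several independently fine-tuned checkpoints of the same BERT-based NER architecture; the checkpoint names and the Hugging Face transformers usage are illustrative assumptions, not the authors' exact pipeline.

```python
# Uniform model soup: average the parameters of several fine-tuned checkpoints
# of the same architecture into a single model. Checkpoint names are
# hypothetical placeholders for independently fine-tuned NER runs.
import torch
from transformers import AutoModelForTokenClassification

checkpoint_paths = ["ner-run-1", "ner-run-2", "ner-run-3"]  # hypothetical runs
state_dicts = [
    AutoModelForTokenClassification.from_pretrained(p).state_dict()
    for p in checkpoint_paths
]

# Average every parameter tensor element-wise across the checkpoints,
# keeping the original dtype of each tensor.
soup_state = {
    key: torch.stack([sd[key].float() for sd in state_dicts])
    .mean(dim=0)
    .to(state_dicts[0][key].dtype)
    for key in state_dicts[0]
}

# Load the averaged weights back into one model and evaluate it as usual.
soup = AutoModelForTokenClassification.from_pretrained(checkpoint_paths[0])
soup.load_state_dict(soup_state)
```

The averaged model keeps the inference cost of a single checkpoint, which is the main appeal of the technique.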
Notes
1. The adapted version refers to a setting called "selective" by the authors, in which only five classes are used (PERSON, ORGANIZATION, LOCAL, VALUE, and DATE).
2. Here, we used label-wise token replacement (LwTR) [3]; a minimal sketch is given after these notes.
3. We used the hyperparameters for learning rate and batch size suggested by Silva et al. [11].
4. This dataset contains non-public data and cannot be made publicly available.
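Note 2 names label-wise token replacement; the sketch below illustrates the idea under the assumption of a CoNLL-style (tokens, tags) representation, with hypothetical function names, following the per-token binomial replacement described by Dai and Adel [3].

```python
# Label-wise token replacement (LwTR): each token may be replaced by another
# token observed with the same NER tag in the training data. The data format
# and helper names below are illustrative assumptions.
import random
from collections import defaultdict

def build_label_vocab(sentences):
    """Collect, for every tag, the tokens seen with that tag in training."""
    vocab = defaultdict(list)
    for tokens, tags in sentences:
        for tok, tag in zip(tokens, tags):
            vocab[tag].append(tok)
    return vocab

def lwtr_augment(tokens, tags, vocab, p=0.3):
    """Replace each token with probability p by a random same-label token."""
    new_tokens = [
        random.choice(vocab[tag]) if random.random() < p else tok
        for tok, tag in zip(tokens, tags)
    ]
    return new_tokens, list(tags)

# Toy example: the label sequence is kept, only the surface tokens change.
train = [
    (["Maria", "mora", "em", "Lisboa"], ["B-PER", "O", "O", "B-LOC"]),
    (["João", "visitou", "o", "Porto"], ["B-PER", "O", "O", "B-LOC"]),
]
vocab = build_label_vocab(train)
print(lwtr_augment(*train[0], vocab))
```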
References
Brown, T., et al.: Language models are few-shot learners. In: Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., Lin, H. (eds.) Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901. Curran Associates, Inc. (2020), https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
Chen, Y., Mikkelsen, J., Binder, A., Alt, C., Hennig, L.: A comparative study of pre-trained encoders for low-resource named entity recognition. In: Gella, S., et al (eds.) Proceedings of the 7th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2022, Dublin, Ireland, 26 May 2022, pp. 46–59. Association for Computational Linguistics (2022). https://doi.org/10.18653/v1/2022.repl4nlp-1.6
Dai, X., Adel, H.: An analysis of simple data augmentation for named entity recognition. In: Scott, D., Bel, N., Zong, C. (eds.) Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), 8-13 December 2020, pp. 3861–3867. International Committee on Computational Linguistics (2020). https://doi.org/10.18653/v1/2020.coling-main.343
Devlin, J., Chang, M., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. CoRR abs/1810.04805 (2018)
Fu, J., Liu, P., Neubig, G.: Interpretable multi-dataset evaluation for named entity recognition. In: Webber, B., Cohn, T., He, Y., Liu, Y. (eds.) Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, 16-20 November 2020, pp. 6058–6069. Association for Computational Linguistics (2020). https://doi.org/10.18653/v1/2020.emnlp-main.489
Houlsby, N., et al.: Parameter-efficient transfer learning for NLP. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA. Proceedings of Machine Learning Research, vol. 97, pp. 2790–2799. PMLR (2019), http://proceedings.mlr.press/v97/houlsby19a.html
Monteiro, M.: Extrator de entidades mencionadas em notícias da mídia (2021). https://github.com/SecexSaudeTCU/noticias_ner. Accessed 21 May 2022
Monteiro, M.: RiskData Brazilian Portuguese NER (2021). https://huggingface.co/monilouise/ner_news_portuguese. Accessed 21 May 2022
Rodrigues, J., et al.: Advancing neural encoding of Portuguese with transformer Albertina PT-*. CoRR abs/2305.06721 (2023). https://doi.org/10.48550/arXiv.2305.06721
Santos, D., Seco, N., Cardoso, N., Vilela, R.: HAREM: an advanced NER evaluation contest for portuguese. In: Calzolari, N., et al. (eds.) Proceedings of the Fifth International Conference on Language Resources and Evaluation, LREC 2006, Genoa, Italy, 22-28 May 2006, pp. 1986–1991. European Language Resources Association (ELRA) (2006), http://www.lrec-conf.org/proceedings/lrec2006/summaries/59.html
Silva, E.H.M.D., Laterza, J., Silva, M.P.P.D., Ladeira, M.: A proposal to identify stakeholders from news for the institutional relationship management activities of an institution based on named entity recognition using BERT. In: Wani, M.A., Sethi, I.K., Shi, W., Qu, G., Raicu, D.S., Jin, R. (eds.) 20th IEEE International Conference on Machine Learning and Applications, ICMLA 2021, Pasadena, CA, USA, 13–16 December 2021, pp. 1569–1575. IEEE (2021). https://doi.org/10.1109/ICMLA52953.2021.00251
Souza, F., Nogueira, R., Lotufo, R.: BERTimbau: pretrained BERT models for Brazilian Portuguese. In: Cerri, R., Prati, R.C. (eds.) BRACIS 2020. LNCS (LNAI), vol. 12319, pp. 403–417. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61377-8_28
Sun, C., Qiu, X., Xu, Y., Huang, X.: How to fine-tune BERT for text classification? In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds.) CCL 2019. LNCS (LNAI), vol. 11856, pp. 194–206. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32381-3_16
Tänzer, M., Ruder, S., Rei, M.: Memorisation versus generalisation in pre-trained language models. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 7564–7578. Association for Computational Linguistics, Dublin, Ireland (May 2022). https://doi.org/10.18653/v1/2022.acl-long.521
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Guyon, I., et al. (eds.) Advances in Neural Information Processing Systems, vol. 30. Curran Associates, Inc. (2017). https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf
Wagner Filho, J.A., Wilkens, R., Idiart, M., Villavicencio, A.: The brWaC corpus: a new open resource for Brazilian Portuguese. In: Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association (ELRA), Miyazaki, Japan (May 2018). https://aclanthology.org/L18-1686
Wortsman, M., et al.: Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. CoRR abs/2203.05482 (2022)