{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,9,10]],"date-time":"2024-09-10T17:48:37Z","timestamp":1725990517443},"publisher-location":"New York, NY, USA","reference-count":43,"publisher":"ACM","funder":[{"name":"Industry Alignment Fund - Pre-positioning (IAF-PP) Funding Initiative","award":["SDSC-2020-001"]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,5,16]]},"DOI":"10.1145\/3524610.3527897","type":"proceedings-article","created":{"date-parts":[[2022,10,20]],"date-time":"2022-10-20T15:19:30Z","timestamp":1666279170000},"update-policy":"http:\/\/dx.doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":26,"title":["PTM4Tag"],"prefix":"10.1145","author":[{"given":"Junda","family":"He","sequence":"first","affiliation":[{"name":"Singapore Management University"}]},{"given":"Bowen","family":"Xu","sequence":"additional","affiliation":[{"name":"Singapore Management University"}]},{"given":"Zhou","family":"Yang","sequence":"additional","affiliation":[{"name":"Singapore Management University"}]},{"given":"DongGyun","family":"Han","sequence":"additional","affiliation":[{"name":"Singapore Management University"}]},{"given":"Chengran","family":"Yang","sequence":"additional","affiliation":[{"name":"Singapore Management University"}]},{"given":"David","family":"Lo","sequence":"additional","affiliation":[{"name":"Singapore Management University"}]}],"member":"320","published-online":{"date-parts":[[2022,10,20]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-012-9231-y"},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. arXiv:1903.10676 [cs.CL]","DOI":"10.18653\/v1\/D19-1371"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D15-1075"},{"key":"e_1_3_2_1_4_1","unstructured":"Luca Buratti, Saurabh Pujar, Mihaela Bornea, Scott McCarley, Yunhui Zheng, Gaetano Rossiello, Alessandro Morari, Jim Laredo, Veronika Thost, Yufan Zhuang, et al. 2020. Exploring software naturalness through neural language models. arXiv preprint arXiv:2006.12641 (2020)."},{"key":"e_1_3_2_1_5_1","volume-title":"BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs\/1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR abs\/1810.04805 (2018). arXiv:1810.04805 http:\/\/arxiv.org\/abs\/1810.04805"},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2020.findings-emnlp.139"},{"key":"e_1_3_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.acl-long.442"},{"key":"e_1_3_2_1_8_1","unstructured":"Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2020. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission. arXiv:1904.05342 [cs.CL]"},{"key":"e_1_3_2_1_9_1","unstructured":"Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2020. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. arXiv:1909.09436 [cs.LG]"},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i05.6311"},{"key":"e_1_3_2_1_11_1","volume-title":"Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research","author":"Kanade Aditya","year":"2020","unstructured":"Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Learning and Evaluating Contextual Embedding of Source Code. In Proceedings of the 37th International Conference on Machine Learning (Proceedings of Machine Learning Research, Vol. 119), Hal Daum\u00e9 III and Aarti Singh (Eds.). PMLR, 5110--5121. https:\/\/proceedings.mlr.press\/v119\/kanade20a.html"},{"key":"e_1_3_2_1_12_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv:1909.11942 [cs.CL]","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv:1909.11942 [cs.CL]"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/btz682"},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jss.2020.110783"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSE43902.2021.00040"},{"key":"e_1_3_2_1_16_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019)."},{"key":"e_1_3_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR52588.2021.00063"},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"crossref","unstructured":"Marco Ortu, Giuseppe Destefanis, Alessandro Murgia, Roberto Tonelli, Michele Marchesi, and Bram Adams. 2015. The JIRA Repository Dataset: Understanding Social Aspects of Software Development.","DOI":"10.1145\/2810146.2810147"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-020-1647-3"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331341"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"crossref","unstructured":"Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv:1908.10084 [cs.CL]","DOI":"10.18653\/v1\/D19-1410"},{"key":"e_1_3_2_1_22_1","volume-title":"Deep learning in neural networks: An overview. Neural networks 61","author":"Schmidhuber J\u00fcrgen","year":"2015","unstructured":"J\u00fcrgen Schmidhuber. 2015. Deep learning in neural networks: An overview. Neural networks 61 (2015), 85--117."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00130"},{"key":"e_1_3_2_1_24_1","unstructured":"Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2020. How to Fine-Tune BERT for Text Classification? arXiv:1905.05583 [cs.CL]"},{"key":"e_1_3_2_1_25_1","volume-title":"Shengyu Fu, and Neel Sundaresan.","author":"Svyatkovskiy Alexey","year":"2020","unstructured":"Alexey Svyatkovskiy, Shao Kun Deng, Shengyu Fu, and Neel Sundaresan. 2020. IntelliCode Compose: Code Generation Using Transformer. arXiv:2005.08025 [cs.CL]"},{"key":"e_1_3_2_1_26_1","doi-asserted-by":"crossref","unstructured":"Jeniya Tabassum, Mounica Maddela, Wei Xu, and Alan Ritter. 2020. Code and Named Entity Recognition in StackOverflow. arXiv:2005.01634 [cs.CL]","DOI":"10.18653\/v1\/2020.acl-main.443"},{"key":"e_1_3_2_1_27_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems. 5998--6008."},{"key":"e_1_3_2_1_28_1","volume-title":"On the validity of pre-trained transformers for natural language processing in the software engineering domain. CoRR abs\/2109.04738","author":"von der Mosel Julian","year":"2021","unstructured":"Julian von der Mosel, Alexander Trautsch, and Steffen Herbold. 2021. On the validity of pre-trained transformers for natural language processing in the software engineering domain. CoRR abs\/2109.04738 (2021). arXiv:2109.04738"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSME.2014.51"},{"key":"e_1_3_2_1_30_1","volume-title":"An enhanced tag recommendation system for software information sites. Empirical Software Engineering 23 (04","author":"Wang Shaowei","year":"2018","unstructured":"Shaowei Wang, David Lo, Bogdan Vasilescu, and Alexander Serebrenik. 2018. EnTagRec++: An enhanced tag recommendation system for software information sites. Empirical Software Engineering 23 (04 2018)."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10664-017-9533-1"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-015-1578-2"},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11390-015-1578-2"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N18-1101"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.5555\/2487085.2487140"},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSR.2013.6624040"},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSE.2021.3093761"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER53432.2022.00054"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSME46990.2020.00017"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.infsof.2019.01.002"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/SANER.2017.7884628"},{"key":"e_1_3_2_1_42_1","volume-title":"Assessing Generalizability of CodeBERT. In 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 425--436","author":"Zhou Xin","year":"2021","unstructured":"Xin Zhou, DongGyun Han, and David Lo. 2021. Assessing Generalizability of CodeBERT. In 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME). IEEE, 425--436."},{"key":"e_1_3_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.11"}],"event":{"name":"ICPC '22: 30th International Conference on Program Comprehension","location":"Virtual Event","acronym":"ICPC '22","sponsor":["SIGSOFT ACM Special Interest Group on Software Engineering","IEEE CS"]},"container-title":["Proceedings of the 30th IEEE\/ACM International Conference on Program Comprehension"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3524610.3527897","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,5,16]],"date-time":"2023-05-16T10:35:38Z","timestamp":1684233338000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3524610.3527897"}},"subtitle":["sharpening tag recommendation of stack overflow posts with pre-trained models"],"short-title":[],"issued":{"date-parts":[[2022,5,16]]},"references-count":43,"alternative-id":["10.1145\/3524610.3527897","10.1145\/3524610"],"URL":"https:\/\/doi.org\/10.1145\/3524610.3527897","relation":{},"subject":[],"published":{"date-parts":[[2022,5,16]]},"assertion":[{"value":"2022-10-20","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}