4th SustaiNLP 2023: Toronto, Canada (Hybrid)
- Nafise Sadat Moosavi, Iryna Gurevych, Yufang Hou, Gyuwan Kim, Young Jin Kim, Tal Schuster, Ameeta Agrawal: Proceedings of The Fourth Workshop on Simple and Efficient Natural Language Processing, SustaiNLP 2023, Toronto, Canada (Hybrid), July 13, 2023. Association for Computational Linguistics 2023, ISBN 978-1-959429-79-1
- Sandeep Silwal, Sara Ahmadian, Andrew Nystrom, Andrew McCallum, Deepak Ramachandran, Seyed Mehran Kazemi: KwikBucks: Correlation Clustering with Cheap-Weak and Expensive-Strong Signals. 1-31
- Yanchen Liu, Timo Schick, Hinrich Schütze: Semantic-Oriented Unlabeled Priming for Large-Scale Language Models. 32-38
- Daniel Campos, Alexandre Marques, Mark Kurtz, ChengXiang Zhai: oBERTa: Improving Sparse Transfer Learning via improved initialization, distillation, and pruning regimes. 39-58
- Daniel Campos, Alessandro Magnani, Chengxiang Zhai: Quick Dense Retrievers Consume KALE: Post Training Kullback-Leibler Alignment of Embeddings for Asymmetrical dual encoders. 59-77
- Sho Takase, Shun Kiyono: Lessons on Parameter Sharing across Layers in Transformers. 78-90
- Daniel Campos, Chengxiang Zhai: To Asymmetry and Beyond: Structured Pruning of Sequence to Sequence Models for Improved Inference Efficiency. 91-109
- Dantong Liu, Kaushik Pavani, Sunny Dasgupta: Small is the New Big: Pre-finetuned compact models are better for Asynchronous Active Learning. 110-120
- Aditya Shah, Surendrabikram Thapa, Aneesh Jain, Lifu Huang: ADEPT: Adapter-based Efficient Prompt Tuning Approach for Language Models. 121-128
- Jean-Michel Attendu, Jean-Philippe Corbeil: NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification Tasks. 129-146
- Zhisong Zhang, Emma Strubell, Eduard H. Hovy: On the Interactions of Structural Constraints and Data Resources for Structured Prediction. 147-157
- Joel Niklaus, Daniele Giofré: Can we Pretrain a SotA Legal Language Model on a Budget From Scratch? 158-182
- Chenyang Lyu, Tianbo Ji, Yvette Graham, Jennifer Foster: Is a Video worth n × n Images? A Highly Efficient Approach to Transformer-based Video Question Answering. 183-189
- Xin Xu, Yuqi Zhu, Xiaohan Wang, Ningyu Zhang: How to Unleash the Power of Large Language Models for Few-shot Relation Extraction? 190-200
- Jay Mohta: Prompting language models improves performance in imbalanced setting. 201-211
- Nick McKenna, Priyanka Sen: KGQA Without Retraining. 212-218
- Shashank Sonkar, Zichao Wang, Richard G. Baraniuk: MANER: Mask Augmented Named Entity Recognition for Extreme Low-Resource Languages. 219-226
- Peggy Tang, Junbin Gao, Lei Zhang, Zhiyong Wang: Efficient and Interpretable Compressive Text Summarisation with Unsupervised Dual-Agent Reinforcement Learning. 227-238
- Gregory Szumel, Ghazal Khalighinejad, Rickard Stureborg, Sam Wiseman: Exploring the Effect of Frequency Resolution in FNet. 239-244
- Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag: Towards Adaptable and Interactive Image Captioning with Data Augmentation and Episodic Memory. 245-256
- Ameeta Agrawal, Suresh Singh: Corpus Complexity Matters in Pretraining Language Models. 257-263
- Xu Han, Bin Guo, Yoon Jung, Benjamin Yao, Yu Zhang, Xiaohu Liu, Chenlei Guo: PersonaPKT: Building Personalized Dialogue Agents via Parameter-efficient Knowledge Transfer. 264-273
- Ganesh Jawahar, Subhabrata Mukherjee, Debadeepta Dey, Muhammad Abdul-Mageed, Laks V. S. Lakshmanan, Caio C. T. Mendes, Gustavo Henrique de Rosa, Shital Shah: Small Character Models Match Large Word Models for Autocomplete Under Memory Constraints. 274-289
- Yuxuan Wang, Hong Lyu: Query Encoder Distillation via Embedding Alignment is a Strong Baseline Method to Boost Dense Retriever Online Efficiency. 290-298
- Benno Kruit: Minimalist Entity Disambiguation for Mid-Resource Languages. 299-306