Luke Zettlemoyer
Person information
- affiliation: University of Washington, School of Computer Science & Engineering, Seattle, WA, USA
- award (2016): Presidential Early Career Award for Scientists and Engineers
2020 – today
- 2024
- [c222]Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, Madian Khabsa:
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants. ACL (1) 2024: 749-775 - [c221]Tomasz Limisiewicz, Terra Blevins, Hila Gonen, Orevaoghene Ahia, Luke Zettlemoyer:
MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling. ACL (1) 2024: 15059-15076 - [c220]Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Raghavi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, Kyle Lo:
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. ACL (1) 2024: 15725-15788 - [c219]Dirk Groeneveld, Iz Beltagy, Evan Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi:
OLMo: Accelerating the Science of Language Models. ACL (1) 2024: 15789-15809 - [c218]Jiawei Ma, Po-Yao Huang, Saining Xie, Shang-Wen Li, Luke Zettlemoyer, Shih-Fu Chang, Wen-Tau Yih, Hu Xu:
MoDE: CLIP Data Experts via Clustering. CVPR 2024: 26344-26353 - [c217]Haoqiang Kang, Terra Blevins, Luke Zettlemoyer:
Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models. EACL (1) 2024: 1562-1575 - [c216]Terra Blevins, Tomasz Limisiewicz, Suchin Gururangan, Margaret Li, Hila Gonen, Noah A. Smith, Luke Zettlemoyer:
Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models. EMNLP 2024: 10822-10837 - [c215]Thao Nguyen, Jeffrey Li, Sewoong Oh, Ludwig Schmidt, Jason Weston, Luke Zettlemoyer, Xian Li:
Better Alignment with Instruction Back-and-Forth Translation. EMNLP (Findings) 2024: 13289-13308 - [c214]Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh:
CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation. EMNLP 2024: 15134-15158 - [c213]Hu Xu, Po-Yao Huang, Xiaoqing Ellen Tan, Ching-Feng Yeh, Jacob Kahn, Christine Jou, Gargi Ghosh, Omer Levy, Luke Zettlemoyer, Wen-tau Yih, Shang-Wen Li, Saining Xie, Christoph Feichtenhofer:
Altogether: Image Captioning via Re-aligning Alt-text. EMNLP 2024: 19302-19318 - [c212]Yu Meng, Jitin Krishnan, Sinong Wang, Qifan Wang, Yuning Mao, Han Fang, Marjan Ghazvininejad, Jiawei Han, Luke Zettlemoyer:
Representation Deficiency in Masked Language Modeling. ICLR 2024 - [c211]Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer:
Demystifying CLIP Data. ICLR 2024 - [c210]Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, Mike Lewis:
Self-Alignment with Instruction Backtranslation. ICLR 2024 - [c209]Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih:
RA-DIT: Retrieval-Augmented Dual Instruction Tuning. ICLR 2024 - [c208]Sewon Min, Suchin Gururangan, Eric Wallace, Weijia Shi, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer:
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore. ICLR 2024 - [c207]Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer:
Detecting Pretraining Data from Large Language Models. ICLR 2024 - [c206]Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis:
In-Context Pretraining: Language Modeling Beyond Document Boundaries. ICLR 2024 - [c205]Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Wen-tau Yih:
Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. NAACL (Short Papers) 2024: 783-791 - [c204]Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Richard James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih:
REPLUG: Retrieval-Augmented Black-Box Language Models. NAACL-HLT 2024: 8371-8384 - [i205]Terra Blevins, Tomasz Limisiewicz, Suchin Gururangan, Margaret Li, Hila Gonen, Noah A. Smith, Luke Zettlemoyer:
Breaking the Curse of Multilinguality with Cross-lingual Expert Language Models. CoRR abs/2401.10440 (2024) - [i204]Jiacheng Liu, Sewon Min, Luke Zettlemoyer, Yejin Choi, Hannaneh Hajishirzi:
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens. CoRR abs/2401.17377 (2024) - [i203]Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Raghavi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, Kyle Lo:
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. CoRR abs/2402.00159 (2024) - [i202]Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi:
OLMo: Accelerating the Science of Language Models. CoRR abs/2402.00838 (2024) - [i201]Michael Duan, Anshuman Suri, Niloofar Mireshghallah, Sewon Min, Weijia Shi, Luke Zettlemoyer, Yulia Tsvetkov, Yejin Choi, David Evans, Hannaneh Hajishirzi:
Do Membership Inference Attacks Work on Large Language Models? CoRR abs/2402.07841 (2024) - [i200]Haoqiang Kang, Terra Blevins, Luke Zettlemoyer:
Comparing Hallucination Detection Metrics for Multilingual Generation. CoRR abs/2402.10496 (2024) - [i199]Akari Asai, Zexuan Zhong, Danqi Chen, Pang Wei Koh, Luke Zettlemoyer, Hannaneh Hajishirzi, Wen-tau Yih:
Reliable, Adaptable, and Attributable Language Models with Retrieval. CoRR abs/2403.03187 (2024) - [i198]Tomasz Limisiewicz, Terra Blevins, Hila Gonen, Orevaoghene Ahia, Luke Zettlemoyer:
MYTE: Morphology-Driven Byte Encoding for Better and Fairer Multilingual Language Modeling. CoRR abs/2403.10691 (2024) - [i197]Xuezhe Ma, Xiaomeng Yang, Wenhan Xiong, Beidi Chen, Lili Yu, Hao Zhang, Jonathan May, Luke Zettlemoyer, Omer Levy, Chunting Zhou:
Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length. CoRR abs/2404.08801 (2024) - [i196]Jiawei Ma, Po-Yao Huang, Saining Xie, Shang-Wen Li, Luke Zettlemoyer, Shih-Fu Chang, Wen-Tau Yih, Hu Xu:
MoDE: CLIP Data Experts via Clustering. CoRR abs/2404.16030 (2024) - [i195]Vasu Sharma, Karthik Padthe, Newsha Ardalani, Kushal Tirumala, Russell Howes, Hu Xu, Po-Yao Huang, Shang-Wen Li, Armen Aghajanyan, Gargi Ghosh, Luke Zettlemoyer:
Text Quality-Based Pruning for Efficient Training of Language Models. CoRR abs/2405.01582 (2024) - [i194]Maciej Kilian, Varun Jampani, Luke Zettlemoyer:
Computational Tradeoffs in Image Synthesis: Diffusion, Masked-Token, and Next-Token Prediction. CoRR abs/2405.13218 (2024) - [i193]Yushi Hu, Weijia Shi, Xingyu Fu, Dan Roth, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Ranjay Krishna:
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models. CoRR abs/2406.09403 (2024) - [i192]Jeffrey Li, Alex Fang, Georgios Smyrnis, Maor Ivgi, Matt Jordan, Samir Yitzhak Gadre, Hritik Bansal, Etash Kumar Guha, Sedrick Keh, Kushal Arora, Saurabh Garg, Rui Xin, Niklas Muennighoff, Reinhard Heckel, Jean Mercat, Mayee Chen, Suchin Gururangan, Mitchell Wortsman, Alon Albalak, Yonatan Bitton, Marianna Nezhurina, Amro Abbas, Cheng-Yu Hsieh, Dhruba Ghosh, Josh Gardner, Maciej Kilian, Hanlin Zhang, Rulin Shao, Sarah M. Pratt, Sunny Sanyal, Gabriel Ilharco, Giannis Daras, Kalyani Marathe, Aaron Gokaslan, Jieyu Zhang, Khyathi Raghavi Chandu, Thao Nguyen, Igor Vasiljevic, Sham M. Kakade, Shuran Song, Sujay Sanghavi, Fartash Faghri, Sewoong Oh, Luke Zettlemoyer, Kyle Lo, Alaaeldin El-Nouby, Hadi Pouransari, Alexander Toshev, Stephanie Wang, Dirk Groeneveld, Luca Soldaini, Pang Wei Koh, Jenia Jitsev, Thomas Kollar, Alexandros G. Dimakis, Yair Carmon, Achal Dave, Ludwig Schmidt, Vaishaal Shankar:
DataComp-LM: In search of the next generation of training sets for language models. CoRR abs/2406.11794 (2024) - [i191]Luxi He, Yangsibo Huang, Weijia Shi, Tinghao Xie, Haotian Liu, Yue Wang, Luke Zettlemoyer, Chiyuan Zhang, Danqi Chen, Peter Henderson:
Fantastic Copyrighted Beasts and How (Not) to Generate Them. CoRR abs/2406.14526 (2024) - [i190]Boyi Wei, Weijia Shi, Yangsibo Huang, Noah A. Smith, Chiyuan Zhang, Luke Zettlemoyer, Kai Li, Peter Henderson:
Evaluating Copyright Takedown Methods for Language Models. CoRR abs/2406.18664 (2024) - [i189]Weijia Shi, Jaechan Lee, Yangsibo Huang, Sadhika Malladi, Jieyu Zhao, Ari Holtzman, Daogao Liu, Luke Zettlemoyer, Noah A. Smith, Chiyuan Zhang:
MUSE: Machine Unlearning Six-Way Evaluation for Language Models. CoRR abs/2407.06460 (2024) - [i188]Tong Chen, Akari Asai, Niloofar Mireshghallah, Sewon Min, James Grimmelmann, Yejin Choi, Hannaneh Hajishirzi, Luke Zettlemoyer, Pang Wei Koh:
CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation. CoRR abs/2407.07087 (2024) - [i187]Rulin Shao, Jacqueline He, Akari Asai, Weijia Shi, Tim Dettmers, Sewon Min, Luke Zettlemoyer, Pang Wei Koh:
Scaling Retrieval-Based Language Models with a Trillion-Token Datastore. CoRR abs/2407.12854 (2024) - [i186]Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, Armen Aghajanyan:
MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. CoRR abs/2407.21770 (2024) - [i185]Thao Nguyen, Jeffrey Li, Sewoong Oh, Ludwig Schmidt, Jason Weston, Luke Zettlemoyer, Xian Li:
Better Alignment with Instruction Back-and-Forth Translation. CoRR abs/2408.04614 (2024) - [i184]Hila Gonen, Terra Blevins, Alisa Liu, Luke Zettlemoyer, Noah A. Smith:
Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models. CoRR abs/2408.06518 (2024) - [i183]Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, Omer Levy:
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model. CoRR abs/2408.11039 (2024) - [i182]Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Se June Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo:
Latent Action Pretraining from Videos. CoRR abs/2410.11758 (2024) - [i181]Hu Xu, Po-Yao Huang, Xiaoqing Ellen Tan, Ching-Feng Yeh, Jacob Kahn, Christine Jou, Gargi Ghosh, Omer Levy, Luke Zettlemoyer, Wen-tau Yih, Shang-Wen Li, Saining Xie, Christoph Feichtenhofer:
Altogether: Image Captioning via Re-aligning Alt-text. CoRR abs/2410.17251 (2024)
- 2023
- [j12]Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer:
Questions Are All You Need to Train a Dense Passage Retriever. Trans. Assoc. Comput. Linguistics 11: 600-616 (2023) - [j11]Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed:
LegoNN: Building Modular Encoder-Decoder Models. IEEE ACM Trans. Audio Speech Lang. Process. 31: 3112-3126 (2023) - [c203]Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias:
Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI. AAAI 2023: 9837-9845 - [c202]Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu:
One Embedder, Any Task: Instruction-Finetuned Text Embeddings. ACL (Findings) 2023: 1102-1121 - [c201]Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer:
Nonparametric Masked Language Modeling. ACL (Findings) 2023: 2097-2118 - [c200]Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi:
Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations. ACL (1) 2023: 2304-2317 - [c199]Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun:
Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. ACL (1) 2023: 2717-2739 - [c198]Terra Blevins, Hila Gonen, Luke Zettlemoyer:
Prompting Language Models for Linguistic Structure. ACL (1) 2023: 6649-6663 - [c197]Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad:
In-context Examples Selection for Machine Translation. ACL (Findings) 2023: 8857-8873 - [c196]Xinyan Yu, Sewon Min, Luke Zettlemoyer, Hannaneh Hajishirzi:
CREPE: Open-Domain Question Answering with False Presuppositions. ACL (1) 2023: 10457-10480 - [c195]Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis:
Contrastive Decoding: Open-ended Text Generation as Optimization. ACL (1) 2023: 12286-12312 - [c194]Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, Veselin Stoyanov:
Training Trajectories of Language Models Across Scales. ACL (1) 2023: 13711-13738 - [c193]Mikel Artetxe, Vedanuj Goswami, Shruti Bhosale, Angela Fan, Luke Zettlemoyer:
Revisiting Machine Translation for Cross-lingual Classification. EMNLP 2023: 6489-6499 - [c192]Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer:
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering. EMNLP (Findings) 2023: 7055-7067 - [c191]Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, Jordan L. Boyd-Graber:
Getting MoRE out of Mixture of Language Model Reasoning Experts. EMNLP (Findings) 2023: 8234-8249 - [c190]Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer:
Demystifying Prompts in Language Models via Perplexity Estimation. EMNLP (Findings) 2023: 10136-10148 - [c189]Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer:
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? EMNLP (Findings) 2023: 10994-11005 - [c188]Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi:
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023: 12076-12100 - [c187]Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa:
XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models. EMNLP 2023: 13142-13152 - [c186]Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer:
CiT: Curation in Training for Effective Vision-Language Data. ICCV 2023: 15134-15143 - [c185]Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu:
Binding Language Models in Symbolic Languages. ICLR 2023 - [c184]Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, Mike Lewis:
InCoder: A Generative Model for Code Infilling and Synthesis. ICLR 2023 - [c183]Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz:
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning. ICLR 2023 - [c182]Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer:
Mega: Moving Average Equipped Gated Attention. ICLR 2023 - [c181]Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi:
AGRO: Adversarial discovery of error-prone Groups for Robust Optimization. ICLR 2023 - [c180]Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu:
Selective Annotation Makes Language Models Better Few-Shot Learners. ICLR 2023 - [c179]Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer:
Scaling Laws for Generative Mixed-Modal Language Models. ICML 2023: 265-279 - [c178]Tim Dettmers, Luke Zettlemoyer:
The case for 4-bit precision: k-bit Inference Scaling Laws. ICML 2023: 7750-7774 - [c177]Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-Tau Yih, Daniel Fried, Sida I. Wang, Tao Yu:
DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation. ICML 2023: 18319-18345 - [c176]Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih:
Retrieval-Augmented Multimodal Language Modeling. ICML 2023: 39755-39769 - [c175]Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer:
QLoRA: Efficient Finetuning of Quantized LLMs. NeurIPS 2023 - [c174]Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom:
Toolformer: Language Models Can Teach Themselves to Use Tools. NeurIPS 2023 - [c173]Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt:
Stable and low-precision training for large-scale vision-language models. NeurIPS 2023 - [c172]Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis:
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. NeurIPS 2023 - [c171]Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy:
LIMA: Less Is More for Alignment. NeurIPS 2023 - [c170]Benjamin Muller, Belen Alastruey, Prangthip Hansanti, Elahe Kalbassi, Christophe Ropers, Eric Michael Smith, Adina Williams, Luke Zettlemoyer, Pierre Andrews, Marta R. Costa-jussà:
The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages. WMT 2023: 536-550 - [i180]Hu Xu, Saining Xie, Po-Yao Huang, Licheng Yu, Russell Howes, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer:
CiT: Curation in Training for Effective Vision-Language Data. CoRR abs/2301.02241 (2023) - [i179]Armen Aghajanyan, Lili Yu, Alexis Conneau, Wei-Ning Hsu, Karen Hambardzumyan, Susan Zhang, Stephen Roller, Naman Goyal, Omer Levy, Luke Zettlemoyer:
Scaling Laws for Generative Mixed-Modal Language Models. CoRR abs/2301.03728 (2023) - [i178]Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, Madian Khabsa:
XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models. CoRR abs/2301.10472 (2023) - [i177]Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih:
REPLUG: Retrieval-Augmented Black-Box Language Models. CoRR abs/2301.12652 (2023) - [i176]Yu Meng, Jitin Krishnan, Sinong Wang, Qifan Wang, Yuning Mao, Han Fang, Marjan Ghazvininejad, Jiawei Han, Luke Zettlemoyer:
Representation Deficiency in Masked Language Modeling. CoRR abs/2302.02060 (2023) - [i175]Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom:
Toolformer: Language Models Can Teach Themselves to Use Tools. CoRR abs/2302.04761 (2023) - [i174]Marjan Ghazvininejad, Hila Gonen, Luke Zettlemoyer:
Dictionary-based Phrase-level Prompting of Large Language Models for Machine Translation. CoRR abs/2302.07856 (2023) - [i173]Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, Marco Túlio Ribeiro:
ART: Automatic multi-step reasoning and tool-use for large language models. CoRR abs/2303.09014 (2023) - [i172]Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer:
Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023) - [i171]Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt:
Stable and low-precision training for large-scale vision-language models. CoRR abs/2304.13013 (2023) - [i170]Haoqiang Kang, Terra Blevins, Luke Zettlemoyer:
Translate to Disambiguate: Zero-shot Multilingual Word Sense Disambiguation with Pretrained Language Models. CoRR abs/2304.13803 (2023) - [i169]Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis:
MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. CoRR abs/2305.07185 (2023) - [i168]Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy:
LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023) - [i167]Mikel Artetxe, Vedanuj Goswami, Shruti Bhosale, Angela Fan, Luke Zettlemoyer:
Revisiting Machine Translation for Cross-lingual Classification. CoRR abs/2305.14240 (2023) - [i166]Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi:
FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. CoRR abs/2305.14251 (2023) - [i165]Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer:
QLoRA: Efficient Finetuning of Quantized LLMs. CoRR abs/2305.14314 (2023) - [i164]Chenglei Si, Weijia Shi, Chen Zhao, Luke Zettlemoyer, Jordan L. Boyd-Graber:
Mixture of Prompt Experts for Generalizable and Interpretable Question Answering. CoRR abs/2305.14628 (2023) - [i163]Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih:
Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. CoRR abs/2305.14739 (2023) - [i162]Ari Holtzman, Peter West, Luke Zettlemoyer:
Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? CoRR abs/2308.00189 (2023) - [i161]Sewon Min, Suchin Gururangan, Eric Wallace, Hannaneh Hajishirzi, Noah A. Smith, Luke Zettlemoyer:
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore. CoRR abs/2308.04430 (2023) - [i160]Tianlu Wang, Ping Yu, Xiaoqing Ellen Tan, Sean O'Brien, Ramakanth Pasunuru, Jane Dwivedi-Yu, Olga Golovneva, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz:
Shepherd: A Critic for Language Model Generation. CoRR abs/2308.04592 (2023) - [i159]Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis:
Self-Alignment with Instruction Backtranslation. CoRR abs/2308.06259 (2023) - [i158]Benjamin Muller, Belen Alastruey, Prangthip Hansanti, Elahe Kalbassi, Christophe Ropers, Eric Michael Smith, Adina Williams, Luke Zettlemoyer, Pierre Andrews, Marta R. Costa-jussà:
The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages. CoRR abs/2308.16871 (2023) - [i157]Lucas Bandarkar, Davis Liang, Benjamin Muller, Mikel Artetxe, Satya Narayan Shukla, Donald Husa, Naman Goyal, Abhinandan Krishnan, Luke Zettlemoyer, Madian Khabsa:
The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants. CoRR abs/2308.16884 (2023) - [i156]Lili Yu, Bowen Shi, Ramakanth Pasunuru, Benjamin Muller, Olga Golovneva, Tianlu Wang, Arun Babu, Binh Tang, Brian Karrer, Shelly Sheynin, Candace Ross, Adam Polyak, Russell Howes, Vasu Sharma, Puxin Xu, Hovhannes Tamoyan, Oron Ashual, Uriel Singer, Shang-Wen Li, Susan Zhang, Richard James, Gargi Ghosh, Yaniv Taigman, Maryam Fazel-Zarandi, Asli Celikyilmaz, Luke Zettlemoyer, Armen Aghajanyan:
Scaling Autoregressive Multi-Modal Models: Pretraining and Instruction Tuning. CoRR abs/2309.02591 (2023) - [i155]Hu Xu, Saining Xie, Xiaoqing Ellen Tan, Po-Yao Huang, Russell Howes, Vasu Sharma, Shang-Wen Li, Gargi Ghosh, Luke Zettlemoyer, Christoph Feichtenhofer:
Demystifying CLIP Data. CoRR abs/2309.16671 (2023) - [i154]Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih:
RA-DIT: Retrieval-Augmented Dual Instruction Tuning. CoRR abs/2310.01352 (2023) - [i153]Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis:
In-Context Pretraining: Language Modeling Beyond Document Boundaries. CoRR abs/2310.10638 (2023) - [i152]Joel Jang, Seungone Kim, Bill Yuchen Lin, Yizhong Wang, Jack Hessel, Luke Zettlemoyer, Hannaneh Hajishirzi, Yejin Choi, Prithviraj Ammanabrolu:
Personalized Soups: Personalized Large Language Model Alignment via Post-hoc Parameter Merging. CoRR abs/2310.11564 (2023) - [i151]Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer:
Detecting Pretraining Data from Large Language Models. CoRR abs/2310.16789 (2023) - [i150]Olga Golovneva, Sean O'Brien, Ramakanth Pasunuru, Tianlu Wang, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz:
PathFinder: Guided Search over Multi-Step Reasoning Paths. CoRR abs/2312.05180 (2023)
- 2022
- [j10]Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni:
Multilingual Autoregressive Entity Linking. Trans. Assoc. Comput. Linguistics 10: 274-290 (2022) - [c169]Robin Jia, Mike Lewis, Luke Zettlemoyer:
Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728 - [c168]Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Lambert Mathias, Marzieh Saeidi, Veselin Stoyanov, Majid Yazdani:
Prompt-free and Efficient Few-shot Learning with Language Models. ACL (1) 2022: 3638-3652 - [c167]Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi:
FaVIQ: FAct Verification from Information-seeking Questions. ACL (1) 2022: 5154-5166 - [c166]Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer:
Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330 - [c165]Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu:
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. EMNLP 2022: 602-631 - [c164]Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer:
M2D2: A Massively Multi-Domain Language Modeling Dataset. EMNLP 2022: 964-975 - [c163]Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith:
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. EMNLP 2022: 2562-2580 - [c162]Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer:
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation. EMNLP (Findings) 2022: 2964-2984 - [c161]Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer:
Nearest Neighbor Zero-Shot Inference. EMNLP 2022: 3254-3265 - [c160]Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang:
Natural Language to Code Translation with Execution. EMNLP 2022: 3533-3546 - [c159]Terra Blevins, Luke Zettlemoyer:
Language Contamination Helps Explain the Cross-lingual Capabilities of English Pretrained Models. EMNLP 2022: 3563-3574 - [c158]Terra Blevins, Hila Gonen, Luke Zettlemoyer:
Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models. EMNLP 2022: 3575-3590 - [c157]Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer:
Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022: 3781-3797 - [c156]Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Veselin Stoyanov:
On the Role of Bidirectionality in Language Model Pre-Training. EMNLP (Findings) 2022: 3973-3985 - [c155]Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Veselin Stoyanov, Xian Li:
Few-shot Learning with Multilingual Generative Language Models. EMNLP 2022: 9019-9052 - [c154]Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer:
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP 2022: 11048-11064 - [c153]Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giridharan Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeffrey Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Veselin Stoyanov:
Efficient Large Scale Language Modeling with Mixtures of Experts. EMNLP 2022: 11699-11732 - [c152]Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer:
HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR 2022 - [c151]Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer:
8-bit Optimizers via Block-wise Quantization. ICLR 2022 - [c150]Eleftheria Briakou, Sida I. Wang, Luke Zettlemoyer, Marjan Ghazvininejad:
BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation. NAACL-HLT (Findings) 2022: 1469-1485 - [c149]Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi:
MetaICL: Learning to Learn In Context. NAACL-HLT 2022: 2791-2809 - [c148]Belinda Z. Li, Jane A. Yu, Madian Khabsa, Luke Zettlemoyer, Alon Y. Halevy, Jacob Andreas:
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. NAACL-HLT 2022: 4696-4715 - [c147]Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer:
DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576 - [c146]Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer:
GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022 - [c145]Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan:
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models. NeurIPS 2022 - [c144]Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel:
Improving Policy Learning via Language Dynamics Distillation. NeurIPS 2022 - [c143]Paden Tomasello, Akshat Shrivastava, Daniel Lazar, Po-Chun Hsu, Duc Le, Adithya Sagar, Ali Elkahky, Jade Copet, Wei-Ning Hsu, Yossi Adi, Robin Algayres, Tu Anh Nguyen, Emmanuel Dupoux, Luke Zettlemoyer, Abdelrahman Mohamed:
Stop: A Dataset for Spoken Task Oriented Semantic Parsing. SLT 2022: 991-998 - [i149]Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir R. Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu:
UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models. CoRR abs/2201.05966 (2022) - [i148]Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer:
CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022) - [i147]Suchin Gururangan, Dallas Card, Sarah K. Dreier, Emily K. Gade, Leroy Z. Wang, Zeyu Wang, Luke Zettlemoyer, Noah A. Smith:
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection. CoRR abs/2201.10474 (2022) - [i146]Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer:
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022) - [i145]Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani:
PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models. CoRR abs/2204.01172 (2022) - [i144]Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis:
InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022) - [i143]Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer:
Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022) - [i142]Terra Blevins, Luke Zettlemoyer:
Language Contamination Explains the Cross-lingual Capabilities of English Pretrained Models. CoRR abs/2204.08110 (2022) - [i141]Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang:
Natural Language to Code Translation with Execution. CoRR abs/2204.11454 (2022) - [i140]Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona T. Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer:
OPT: Open Pre-trained Transformer Language Models. CoRR abs/2205.01068 (2022) - [i139]Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer:
Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022) - [i138]Kushal Tirumala, Aram H. Markosyan, Luke Zettlemoyer, Armen Aghajanyan:
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models. CoRR abs/2205.10770 (2022) - [i137]Mikel Artetxe, Jingfei Du, Naman Goyal, Luke Zettlemoyer, Ves Stoyanov:
On the Role of Bidirectionality in Language Model Pre-Training. CoRR abs/2205.11726 (2022) - [i136]Terra Blevins, Hila Gonen, Luke Zettlemoyer:
Analyzing the Mono- and Cross-Lingual Pretraining Dynamics of Multilingual Language Models. CoRR abs/2205.11758 (2022) - [i135]Suzanna Sia, Anton Belyy, Amjad Almahairi, Madian Khabsa, Luke Zettlemoyer, Lambert Mathias:
Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI. CoRR abs/2205.12469 (2022) - [i134]Weijia Shi, Julian Michael, Suchin Gururangan, Luke Zettlemoyer:
Nearest Neighbor Zero-Shot Inference. CoRR abs/2205.13792 (2022) - [i133]Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed:
LegoNN: Building Modular Encoder-Decoder Models. CoRR abs/2206.03318 (2022) - [i132]Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer:
Questions Are All You Need to Train a Dense Passage Retriever. CoRR abs/2206.10658 (2022) - [i131]Paden Tomasello, Akshat Shrivastava, Daniel Lazar, Po-Chun Hsu, Duc Le, Adithya Sagar, Ali Elkahky, Jade Copet, Wei-Ning Hsu, Yossef Mordechay, Robin Algayres, Tu Anh Nguyen, Emmanuel Dupoux, Luke Zettlemoyer, Abdelrahman Mohamed:
STOP: A dataset for Spoken Task Oriented Semantic Parsing. CoRR abs/2207.10643 (2022) - [i130]Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer:
Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022) - [i129]Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer:
LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022) - [i128]Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu:
Selective Annotation Makes Language Models Better Few-Shot Learners. CoRR abs/2209.01975 (2022) - [i127]Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer:
Mega: Moving Average Equipped Gated Attention. CoRR abs/2209.10655 (2022) - [i126]Victor Zhong, Jesse Mu, Luke Zettlemoyer, Edward Grefenstette, Tim Rocktäschel:
Improving Policy Learning via Language Dynamics Distillation. CoRR abs/2210.00066 (2022) - [i125]Zhoujun Cheng, Tianbao Xie, Peng Shi, Chengzu Li, Rahul Nadkarni, Yushi Hu, Caiming Xiong, Dragomir Radev, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, Tao Yu:
Binding Language Models in Symbolic Languages. CoRR abs/2210.02875 (2022) - [i124]Tanay Dixit, Bhargavi Paranjape, Hannaneh Hajishirzi, Luke Zettlemoyer:
CORE: A Retrieve-then-Edit Framework for Counterfactual Data Generation. CoRR abs/2210.04873 (2022) - [i123]Machel Reid, Victor Zhong, Suchin Gururangan, Luke Zettlemoyer:
M2D2: A Massively Multi-domain Language Modeling Dataset. CoRR abs/2210.07370 (2022) - [i122]Victor Zhong, Weijia Shi, Wen-tau Yih, Luke Zettlemoyer:
RoMQA: A Benchmark for Robust, Multi-evidence, Multi-answer Question Answering. CoRR abs/2210.14353 (2022) - [i121]Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis:
Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022) - [i120]Terra Blevins, Hila Gonen, Luke Zettlemoyer:
Prompting Language Models for Linguistic Structure. CoRR abs/2211.07830 (2022) - [i119]Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Scott Wen-tau Yih, Daniel Fried, Sida I. Wang, Tao Yu:
DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation. CoRR abs/2211.11501 (2022) - [i118]Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih:
Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022) - [i117]Xinyan Velocity Yu, Sewon Min, Luke Zettlemoyer, Hannaneh Hajishirzi:
CREPE: Open-Domain Question Answering with False Presuppositions. CoRR abs/2211.17257 (2022) - [i116]Bhargavi Paranjape, Pradeep Dasigi, Vivek Srikumar, Luke Zettlemoyer, Hannaneh Hajishirzi:
AGRO: Adversarial Discovery of Error-prone groups for Robust Optimization. CoRR abs/2212.00921 (2022) - [i115]Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer:
Nonparametric Masked Language Modeling. CoRR abs/2212.01349 (2022) - [i114]Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad:
In-context Examples Selection for Machine Translation. CoRR abs/2212.02437 (2022) - [i113]Hila Gonen, Srini Iyer, Terra Blevins, Noah A. Smith, Luke Zettlemoyer:
Demystifying Prompts in Language Models via Perplexity Estimation. CoRR abs/2212.04037 (2022) - [i112]Olga Golovneva, Moya Chen, Spencer Poff, Martin Corredor, Luke Zettlemoyer, Maryam Fazel-Zarandi, Asli Celikyilmaz:
ROSCOE: A Suite of Metrics for Scoring Step-by-Step Reasoning. CoRR abs/2212.07919 (2022) - [i111]Tim Dettmers, Luke Zettlemoyer:
The case for 4-bit precision: k-bit Inference Scaling Laws. CoRR abs/2212.09720 (2022) - [i110]Hongjin Su, Weijia Shi, Jungo Kasai, Yizhong Wang, Yushi Hu, Mari Ostendorf, Wen-tau Yih, Noah A. Smith, Luke Zettlemoyer, Tao Yu:
One Embedder, Any Task: Instruction-Finetuned Text Embeddings. CoRR abs/2212.09741 (2022) - [i109]Mengzhou Xia, Mikel Artetxe, Chunting Zhou, Xi Victoria Lin, Ramakanth Pasunuru, Danqi Chen, Luke Zettlemoyer, Ves Stoyanov:
Training Trajectories of Language Models Across Scales. CoRR abs/2212.09803 (2022) - [i108]Xinxi Lyu, Sewon Min, Iz Beltagy, Luke Zettlemoyer, Hannaneh Hajishirzi:
Z-ICL: Zero-Shot In-Context Learning with Pseudo-Demonstrations. CoRR abs/2212.09865 (2022) - [i107]Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, Huan Sun:
Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters. CoRR abs/2212.10001 (2022) - [i106]Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer:
Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too? CoRR abs/2212.10539 (2022) - [i105]Srinivasan Iyer, Xi Victoria Lin, Ramakanth Pasunuru, Todor Mihaylov, Daniel Simig, Ping Yu, Kurt Shuster, Tianlu Wang, Qing Liu, Punit Singh Koura, Xian Li, Brian O'Horo, Gabriel Pereyra, Jeff Wang, Christopher Dewan, Asli Celikyilmaz, Luke Zettlemoyer, Ves Stoyanov:
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization. CoRR abs/2212.12017 (2022) - 2021
- [c142]Weijia Shi, Mandar Joshi, Luke Zettlemoyer:
DESCGEN: A Distantly Supervised Dataset for Generating Entity Descriptions. ACL/IJCNLP (1) 2021: 415-427 - [c141]Haoyue Shi, Luke Zettlemoyer, Sida I. Wang:
Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. ACL/IJCNLP (1) 2021: 813-826 - [c140]Chunting Zhou, Graham Neubig, Jiatao Gu, Mona T. Diab, Francisco Guzmán, Luke Zettlemoyer, Marjan Ghazvininejad:
Detecting Hallucinated Content in Conditional Neural Sequence Generation. ACL/IJCNLP (Findings) 2021: 1393-1404 - [c139]Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Hannaneh Hajishirzi, Luke Zettlemoyer:
Prompting Contrastive Explanations for Commonsense Reasoning Tasks. ACL/IJCNLP (Findings) 2021: 4179-4192 - [c138]Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer:
VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. ACL/IJCNLP (Findings) 2021: 4227-4239 - [c137]Julian Michael, Luke Zettlemoyer:
Inducing Semantic Roles Without Syntax. ACL/IJCNLP (Findings) 2021: 4427-4442 - [c136]Armen Aghajanyan, Sonal Gupta, Luke Zettlemoyer:
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. ACL/IJCNLP (1) 2021: 7319-7328 - [c135]Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer:
Language Grounding with 3D Objects. CoRL 2021: 1691-1701 - [c134]Terra Blevins, Mandar Joshi, Luke Zettlemoyer:
FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. EACL 2021: 455-465 - [c133]Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta:
Muppet: Massive Multi-task Representations with Pre-Finetuning. EMNLP (1) 2021: 5799-5811 - [c132]Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer:
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. EMNLP (1) 2021: 6787-6800 - [c131]Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer:
Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. EMNLP (1) 2021: 7038-7051 - [c130]Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta:
Better Fine-Tuning by Reducing Representational Collapse. ICLR 2021 - [c129]Asish Ghoshal, Xilun Chen, Sonal Gupta, Luke Zettlemoyer, Yashar Mehdad:
Learning Better Structured Representations Using Low-rank Adaptive Label Smoothing. ICLR 2021 - [c128]Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis:
Nearest Neighbor Machine Translation. ICLR 2021 - [c127]Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi:
DeLighT: Deep and Light-weight Transformer. ICLR 2021 - [c126]Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer:
BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274 - [c125]Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer:
Luna: Linear Unified Nested Attention. NeurIPS 2021: 2441-2453 - [c124]Victor Zhong, Austin W. Hanjie, Sida I. Wang, Karthik Narasimhan, Luke Zettlemoyer:
SILG: The Multi-domain Symbolic Interactive Language Grounding Benchmark. NeurIPS 2021: 21505-21519 - [e1]Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tür, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou:
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021. Association for Computational Linguistics 2021, ISBN 978-1-954085-46-6 [contents] - [i104]Haoyue Shi, Luke Zettlemoyer, Sida I. Wang:
Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment. CoRR abs/2101.00148 (2021) - [i103]Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, Sonal Gupta:
Muppet: Massive Multi-task Representations with Pre-Finetuning. CoRR abs/2101.11038 (2021) - [i102]Terra Blevins, Mandar Joshi, Luke Zettlemoyer:
FEWS: Large-Scale, Low-Shot Word Sense Disambiguation with the Dictionary. CoRR abs/2102.07983 (2021) - [i101]Nicola De Cao, Ledell Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, Fabio Petroni:
Multilingual Autoregressive Entity Linking. CoRR abs/2103.12528 (2021) - [i100]Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer:
BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021) - [i99]Ari Holtzman, Peter West, Vered Shwartz, Yejin Choi, Luke Zettlemoyer:
Surface Form Competition: Why the Highest Probability Answer Isn't Always Right. CoRR abs/2104.08315 (2021) - [i98]Hu Xu, Gargi Ghosh, Po-Yao Huang, Prahal Arora, Masoumeh Aminzadeh, Christoph Feichtenhofer, Florian Metze, Luke Zettlemoyer:
VLM: Task-agnostic Video-Language Model Pre-training for Video Understanding. CoRR abs/2105.09996 (2021) - [i97]Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, Luke Zettlemoyer:
Luna: Linear Unified Nested Attention. CoRR abs/2106.01540 (2021) - [i96]Weijia Shi, Mandar Joshi, Luke Zettlemoyer:
DESCGEN: A Distantly Supervised Dataset for Generating Abstractive Entity Descriptions. CoRR abs/2106.05365 (2021) - [i95]Bhargavi Paranjape, Julian Michael, Marjan Ghazvininejad, Luke Zettlemoyer, Hannaneh Hajishirzi:
Prompting Contrastive Explanations for Commonsense Reasoning Tasks. CoRR abs/2106.06823 (2021) - [i94]Robin Jia, Mike Lewis, Luke Zettlemoyer:
Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021) - [i93]Jungsoo Park, Sewon Min, Jaewoo Kang, Luke Zettlemoyer, Hannaneh Hajishirzi:
FaVIQ: FAct Verification from Information-seeking Questions. CoRR abs/2107.02153 (2021) - [i92]Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer:
HTLM: Hyper-Text Pre-Training and Prompting of Language Models. CoRR abs/2107.06955 (2021) - [i91]Jesse Thomason, Mohit Shridhar, Yonatan Bisk, Chris Paxton, Luke Zettlemoyer:
Language Grounding with 3D Objects. CoRR abs/2107.12514 (2021) - [i90]Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer:
Noisy Channel Language Model Prompting for Few-Shot Text Classification. CoRR abs/2108.04106 (2021) - [i89]Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer:
DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021) - [i88]Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, Christoph Feichtenhofer:
VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. CoRR abs/2109.14084 (2021) - [i87]Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer:
8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021) - [i86]Victor Zhong, Austin W. Hanjie, Sida I. Wang, Karthik Narasimhan, Luke Zettlemoyer:
SILG: The Multi-environment Symbolic Interactive Language Grounding Benchmark. CoRR abs/2110.10661 (2021) - [i85]Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi:
MetaICL: Learning to Learn In Context. CoRR abs/2110.15943 (2021) - [i84]Eleftheria Briakou, Sida I. Wang, Luke Zettlemoyer, Marjan Ghazvininejad:
BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation. CoRR abs/2111.06787 (2021) - [i83]Belinda Z. Li, Jane A. Yu, Madian Khabsa, Luke Zettlemoyer, Alon Y. Halevy, Jacob Andreas:
Quantifying Adaptability in Pre-trained Language Models with 500 Tasks. CoRR abs/2112.03204 (2021) - [i82]Darsh J. Shah, Sinong Wang, Han Fang, Hao Ma, Luke Zettlemoyer:
Reducing Target Group Bias in Hate Speech Detectors. CoRR abs/2112.03858 (2021) - [i81]Xi Victoria Lin, Todor Mihaylov, Mikel Artetxe, Tianlu Wang, Shuohui Chen, Daniel Simig, Myle Ott, Naman Goyal, Shruti Bhosale, Jingfei Du, Ramakanth Pasunuru, Sam Shleifer, Punit Singh Koura, Vishrav Chaudhary, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Zornitsa Kozareva, Mona T. Diab, Veselin Stoyanov, Xian Li:
Few-shot Learning with Multilingual Language Models. CoRR abs/2112.10668 (2021) - [i80]Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, Ves Stoyanov:
Efficient Large Scale Language Modeling with Mixtures of Experts. CoRR abs/2112.10684 (2021) - 2020
- [j9]Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy:
SpanBERT: Improving Pre-training by Representing and Predicting Spans. Trans. Assoc. Comput. Linguistics 8: 64-77 (2020) - [j8]Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer:
Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguistics 8: 726-742 (2020) - [c123]Terra Blevins, Luke Zettlemoyer:
Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders. ACL 2020: 1006-1017 - [c122]Nabil Hossain, Marjan Ghazvininejad, Luke Zettlemoyer:
Simple and Effective Retrieve-Edit-Rerank Text Generation. ACL 2020: 2532-2538 - [c121]Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov:
Emerging Cross-lingual Structure in Pretrained Language Models. ACL 2020: 6022-6034 - [c120]Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, Ido Dagan:
Controlled Crowdsourcing for High-Quality QA-SRL Annotation. ACL 2020: 7008-7013 - [c119]Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer:
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880 - [c118]Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer:
Active Learning for Coreference Resolution using Discrete Annotation. ACL 2020: 8320-8331 - [c117]Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov:
Unsupervised Cross-lingual Representation Learning at Scale. ACL 2020: 8440-8451 - [c116]Ayal Klein, Jonathan Mamou, Valentina Pyatkin, Daniela Stepanov, Hangfeng He, Dan Roth, Luke Zettlemoyer, Ido Dagan:
QANom: Question-Answer driven SRL for Nominalizations. COLING 2020: 3069-3083 - [c115]Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox:
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CVPR 2020: 10737-10746 - [c114]Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer:
An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. EMNLP (1) 2020: 1938-1952 - [c113]Christopher Clark, Mark Yatskar, Luke Zettlemoyer:
Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. EMNLP (Findings) 2020: 3031-3045 - [c112]Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, Sonal Gupta:
Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing. EMNLP (1) 2020: 5090-5100 - [c111]Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer:
AmbigQA: Answering Ambiguous Open-domain Questions. EMNLP (1) 2020: 5783-5797 - [c110]Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, Luke Zettlemoyer:
Scalable Zero-shot Entity Linking with Dense Entity Retrieval. EMNLP (1) 2020: 6397-6407 - [c109]Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer:
Grounded Adaptation for Zero-shot Executable Semantic Parsing. EMNLP (1) 2020: 6869-6882 - [c108]Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis:
Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020 - [c107]Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy:
Aligned Cross Entropy for Non-Autoregressive Machine Translation. ICML 2020: 3515-3523 - [c106]Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer:
Pre-training via Paraphrasing. NeurIPS 2020 - [i79]Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer:
Multilingual Denoising Pre-training for Neural Machine Translation. CoRR abs/2001.08210 (2020) - [i78]Marjan Ghazvininejad, Omer Levy, Luke Zettlemoyer:
Semi-Autoregressive Training Improves Mask-Predict Decoding. CoRR abs/2001.08785 (2020) - [i77]Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, Omer Levy:
Aligned Cross Entropy for Non-Autoregressive Machine Translation. CoRR abs/2004.01655 (2020) - [i76]Sewon Min, Julian Michael, Hannaneh Hajishirzi, Luke Zettlemoyer:
AmbigQA: Answering Ambiguous Open-domain Questions. CoRR abs/2004.10645 (2020) - [i75]Belinda Z. Li, Gabriel Stanovsky, Luke Zettlemoyer:
Active Learning for Coreference Resolution using Discrete Annotation. CoRR abs/2004.13671 (2020) - [i74]Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, Luke Zettlemoyer:
An Information Bottleneck Approach for Controlling Conciseness in Rationale Extraction. CoRR abs/2005.00652 (2020) - [i73]Terra Blevins, Luke Zettlemoyer:
Moving Down the Long Tail of Word Sense Disambiguation with Gloss-Informed Biencoders. CoRR abs/2005.02590 (2020) - [i72]Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer:
Pre-training via Paraphrasing. CoRR abs/2006.15020 (2020) - [i71]Sachin Mehta, Marjan Ghazvininejad, Srinivasan Iyer, Luke Zettlemoyer, Hannaneh Hajishirzi:
DeLighT: Very Deep and Light-weight Transformer. CoRR abs/2008.00623 (2020) - [i70]Armen Aghajanyan, Akshat Shrivastava, Anchit Gupta, Naman Goyal, Luke Zettlemoyer, Sonal Gupta:
Better Fine-Tuning by Reducing Representational Collapse. CoRR abs/2008.03156 (2020) - [i69]Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer:
Grounded Adaptation for Zero-shot Executable Semantic Parsing. CoRR abs/2009.07396 (2020) - [i68]Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis:
Nearest Neighbor Machine Translation. CoRR abs/2010.00710 (2020) - [i67]Xilun Chen, Asish Ghoshal, Yashar Mehdad, Luke Zettlemoyer, Sonal Gupta:
Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing. CoRR abs/2010.03546 (2020) - [i66]Chunting Zhou, Jiatao Gu, Mona T. Diab, Paco Guzman, Luke Zettlemoyer, Marjan Ghazvininejad:
Detecting Hallucinated Content in Conditional Neural Sequence Generation. CoRR abs/2011.02593 (2020) - [i65]Christopher Clark, Mark Yatskar, Luke Zettlemoyer:
Learning to Model and Ignore Dataset Bias with Mixed Capacity Ensembles. CoRR abs/2011.03856 (2020) - [i64]Armen Aghajanyan, Luke Zettlemoyer, Sonal Gupta:
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. CoRR abs/2012.13255 (2020)
2010 – 2019
- 2019
- [c105]Terra Blevins, Luke Zettlemoyer:
Better Character Language Modeling through Morphology. ACL (1) 2019: 1606-1613 - [c104]Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer:
Evaluating Gender Bias in Machine Translation. ACL (1) 2019: 1679-1684 - [c103]Victor Zhong, Luke Zettlemoyer:
E3: Entailment-driven Extracting and Editing for Conversational Machine Reading. ACL (1) 2019: 2310-2320 - [c102]Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer:
Compositional Questions Do Not Necessitate Multi-hop Reasoning. ACL (1) 2019: 4249-4257 - [c101]Fei Liu, Luke Zettlemoyer, Jacob Eisenstein:
The Referential Reader: A Recurrent Entity Network for Anaphora Resolution. ACL (1) 2019: 5918-5925 - [c100]Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi:
Multi-hop Reading Comprehension through Question Decomposition and Rescoring. ACL (1) 2019: 6097-6109 - [c99]Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer:
Vision-and-Dialog Navigation. CoRL 2019: 394-406 - [c98]Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer:
Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog. EMNLP/IJCNLP (1) 2019: 1520-1526 - [c97]Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer:
A Discrete Hard EM Approach for Weakly Supervised Question Answering. EMNLP/IJCNLP (1) 2019: 2851-2864 - [c96]Christopher Clark, Mark Yatskar, Luke Zettlemoyer:
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. EMNLP/IJCNLP (1) 2019: 4067-4080 - [c95]Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli:
Cloze-driven Pretraining of Self-attention Networks. EMNLP/IJCNLP (1) 2019: 5359-5368 - [c94]Srinivasan Iyer, Alvin Cheung, Luke Zettlemoyer:
Learning Programmatic Idioms for Scalable Semantic Parsing. EMNLP/IJCNLP (1) 2019: 5425-5434 - [c93]Rajas Agashe, Srinivasan Iyer, Luke Zettlemoyer:
JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation. EMNLP/IJCNLP (1) 2019: 5435-5445 - [c92]Mandar Joshi, Omer Levy, Luke Zettlemoyer, Daniel S. Weld:
BERT for Coreference Resolution: Baselines and Analysis. EMNLP/IJCNLP (1) 2019: 5802-5807 - [c91]Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer:
Mask-Predict: Parallel Decoding of Conditional Masked Language Models. EMNLP/IJCNLP (1) 2019: 6111-6120 - [c90]Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, Eduard H. Hovy:
Iterative Search for Weakly Supervised Semantic Parsing. NAACL-HLT (1) 2019: 2669-2680 - [c89]Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer:
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference. NAACL-HLT (1) 2019: 3597-3608 - [i63]Fei Liu, Luke Zettlemoyer, Jacob Eisenstein:
The Referential Reader: A Recurrent Entity Network for Anaphora Resolution. CoRR abs/1902.01541 (2019) - [i62]Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer:
Improving Semantic Parsing for Task Oriented Dialog. CoRR abs/1902.06000 (2019) - [i61]Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke Zettlemoyer, Michael Auli:
Cloze-driven Pretraining of Self-attention Networks. CoRR abs/1903.07785 (2019) - [i60]Srinivasan Iyer, Alvin Cheung, Luke Zettlemoyer:
Learning Programmatic Idioms for Scalable Semantic Parsing. CoRR abs/1904.09086 (2019) - [i59]Marjan Ghazvininejad, Omer Levy, Yinhan Liu, Luke Zettlemoyer:
Constant-Time Machine Translation with Conditional Masked Language Models. CoRR abs/1904.09324 (2019) - [i58]Abdelrahman Mohamed, Dmytro Okhonko, Luke Zettlemoyer:
Transformers with convolutional context for ASR. CoRR abs/1904.11660 (2019) - [i57]Gabriel Stanovsky, Noah A. Smith, Luke Zettlemoyer:
Evaluating Gender Bias in Machine Translation. CoRR abs/1906.00591 (2019) - [i56]Terra Blevins, Luke Zettlemoyer:
Better Character Language Modeling Through Morphology. CoRR abs/1906.01037 (2019) - [i55]Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, Luke Zettlemoyer:
Compositional Questions Do Not Necessitate Multi-hop Reasoning. CoRR abs/1906.02900 (2019) - [i54]Sewon Min, Victor Zhong, Luke Zettlemoyer, Hannaneh Hajishirzi:
Multi-hop Reading Comprehension through Question Decomposition and Rescoring. CoRR abs/1906.02916 (2019) - [i53]Victor Zhong, Luke Zettlemoyer:
E3: Entailment-driven Extracting and Editing for Conversational Machine Reading. CoRR abs/1906.05373 (2019) - [i52]Tim Dettmers, Luke Zettlemoyer:
Sparse Networks from Scratch: Faster Training without Losing Performance. CoRR abs/1907.04840 (2019) - [i51]Jesse Thomason, Michael Murray, Maya Cakmak, Luke Zettlemoyer:
Vision-and-Dialog Navigation. CoRR abs/1907.04957 (2019) - [i50]Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, Omer Levy:
SpanBERT: Improving Pre-training by Representing and Predicting Spans. CoRR abs/1907.10529 (2019) - [i49]Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov:
RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019) - [i48]Mandar Joshi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer:
BERT for Coreference Resolution: Baselines and Analysis. CoRR abs/1908.09091 (2019) - [i47]Christopher Clark, Mark Yatskar, Luke Zettlemoyer:
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases. CoRR abs/1909.03683 (2019) - [i46]Sewon Min, Danqi Chen, Hannaneh Hajishirzi, Luke Zettlemoyer:
A Discrete Hard EM Approach for Weakly Supervised Question Answering. CoRR abs/1909.04849 (2019) - [i45]Rajas Agashe, Srinivasan Iyer, Luke Zettlemoyer:
JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation. CoRR abs/1910.02216 (2019) - [i44]Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer:
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR abs/1910.13461 (2019) - [i43]Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis:
Generalization through Memorization: Nearest Neighbor Language Models. CoRR abs/1911.00172 (2019) - [i42]Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov:
Emerging Cross-lingual Structure in Pretrained Language Models. CoRR abs/1911.01464 (2019) - [i41]Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov:
Unsupervised Cross-lingual Representation Learning at Scale. CoRR abs/1911.02116 (2019) - [i40]Paul Roit, Ayal Klein, Daniela Stepanov, Jonathan Mamou, Julian Michael, Gabriel Stanovsky, Luke Zettlemoyer, Ido Dagan:
Crowdsourcing a High-Quality Gold Standard for QA-SRL. CoRR abs/1911.03243 (2019) - [i39]Siddharth Dalmia, Abdelrahman Mohamed, Mike Lewis, Florian Metze, Luke Zettlemoyer:
Enforcing Encoder-Decoder Modularity in Sequence-to-Sequence Models. CoRR abs/1911.03782 (2019) - [i38]Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, Luke Zettlemoyer:
Zero-shot Entity Linking with Dense Entity Retrieval. CoRR abs/1911.03814 (2019) - [i37]Sewon Min, Danqi Chen, Luke Zettlemoyer, Hannaneh Hajishirzi:
Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering. CoRR abs/1911.03868 (2019) - [i36]Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox:
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks. CoRR abs/1912.01734 (2019) - 2018
- [j7]Lucy Simko, Luke Zettlemoyer, Tadayoshi Kohno:
Recognizing and Imitating Programmer Style: Adversaries in Program Authorship Attribution. Proc. Priv. Enhancing Technol. 2018(1): 127-144 (2018) - [c88]Terra Blevins, Omer Levy, Luke Zettlemoyer:
Deep RNNs Encode Soft Hierarchical Syntax. ACL (2) 2018: 14-19 - [c87]Matt Gardner, Pradeep Dasigi, Srinivasan Iyer, Alane Suhr, Luke Zettlemoyer:
Neural Semantic Parsing. ACL (5) 2018: 17-18 - [c86]Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer:
Ultra-Fine Entity Typing. ACL (1) 2018: 87-96 - [c85]Luheng He, Kenton Lee, Omer Levy, Luke Zettlemoyer:
Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. ACL (2) 2018: 364-369 - [c84]Omer Levy, Kenton Lee, Nicholas FitzGerald, Luke Zettlemoyer:
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum. ACL (2) 2018: 732-739 - [c83]Nicholas FitzGerald, Julian Michael, Luheng He, Luke Zettlemoyer:
Large-Scale QA-SRL Parsing. ACL (1) 2018: 2051-2060 - [c82]Michael Petrochuk, Luke Zettlemoyer:
SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach. EMNLP 2018: 554-558 - [c81]Ge Gao, Eunsol Choi, Yejin Choi, Luke Zettlemoyer:
Neural Metaphor Detection in Context. EMNLP 2018: 607-613 - [c80]Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih:
Dissecting Contextual Word Embeddings: Architecture and Representation. EMNLP 2018: 1499-1509 - [c79]Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer:
Mapping Language to Code in Programmatic Context. EMNLP 2018: 1643-1652 - [c78]Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer:
QuAC: Question Answering in Context. EMNLP 2018: 2174-2184 - [c77]Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, Noah A. Smith:
Syntactic Scaffolds for Semantic Structures. EMNLP 2018: 3772-3782 - [c76]Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, Michael D. Ernst:
NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System. LREC 2018 - [c75]Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, Luke Zettlemoyer:
Crowdsourcing Question-Answer Meaning Representations. NAACL-HLT (2) 2018: 560-568 - [c74]Kenton Lee, Luheng He, Luke Zettlemoyer:
Higher-Order Coreference Resolution with Coarse-to-Fine Inference. NAACL-HLT (2) 2018: 687-692 - [c73]Gabriel Stanovsky, Julian Michael, Luke Zettlemoyer, Ido Dagan:
Supervised Open Information Extraction. NAACL-HLT 2018: 885-895 - [c72]Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer:
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. NAACL-HLT 2018: 1875-1885 - [c71]Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer:
Deep Contextualized Word Representations. NAACL-HLT 2018: 2227-2237 - [i35]Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer:
Deep contextualized word representations. CoRR abs/1802.05365 (2018) - [i34]Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, Michael D. Ernst:
NL2Bash: A Corpus and Semantic Parser for Natural Language Interface to the Linux Operating System. CoRR abs/1802.08979 (2018) - [i33]Matt Gardner, Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson F. Liu, Matthew E. Peters, Michael Schmitz, Luke Zettlemoyer:
AllenNLP: A Deep Semantic Natural Language Processing Platform. CoRR abs/1803.07640 (2018) - [i32]Kenton Lee, Luheng He, Luke Zettlemoyer:
Higher-order Coreference Resolution with Coarse-to-fine Inference. CoRR abs/1804.05392 (2018) - [i31]Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer:
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks. CoRR abs/1804.06059 (2018) - [i30]Michael Petrochuk, Luke Zettlemoyer:
SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach. CoRR abs/1804.08798 (2018) - [i29]Omer Levy, Kenton Lee, Nicholas FitzGerald, Luke Zettlemoyer:
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum. CoRR abs/1805.03716 (2018) - [i28]Terra Blevins, Omer Levy, Luke Zettlemoyer:
Deep RNNs Encode Soft Hierarchical Syntax. CoRR abs/1805.04218 (2018) - [i27]Luheng He, Kenton Lee, Omer Levy, Luke Zettlemoyer:
Jointly Predicting Predicates and Arguments in Neural Semantic Role Labeling. CoRR abs/1805.04787 (2018) - [i26]Nicholas FitzGerald, Julian Michael, Luheng He, Luke Zettlemoyer:
Large-Scale QA-SRL Parsing. CoRR abs/1805.05377 (2018) - [i25]Eunsol Choi, Omer Levy, Yejin Choi, Luke Zettlemoyer:
Ultra-Fine Entity Typing. CoRR abs/1807.04905 (2018) - [i24]Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, Luke Zettlemoyer:
QuAC : Question Answering in Context. CoRR abs/1808.07036 (2018) - [i23]Matthew E. Peters, Mark Neumann, Luke Zettlemoyer, Wen-tau Yih:
Dissecting Contextual Word Embeddings: Architecture and Representation. CoRR abs/1808.08949 (2018) - [i22]Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer:
Mapping Language to Code in Programmatic Context. CoRR abs/1808.09588 (2018) - [i21]Ge Gao, Eunsol Choi, Yejin Choi, Luke Zettlemoyer:
Neural Metaphor Detection in Context. CoRR abs/1808.09653 (2018) - [i20]Swabha Swayamdipta, Sam Thomson, Kenton Lee, Luke Zettlemoyer, Chris Dyer, Noah A. Smith:
Syntactic Scaffolds for Semantic Structures. CoRR abs/1808.10485 (2018) - [i19]Mandar Joshi, Eunsol Choi, Omer Levy, Daniel S. Weld, Luke Zettlemoyer:
pair2vec: Compositional Word-Pair Embeddings for Cross-Sentence Inference. CoRR abs/1810.08854 (2018) - 2017
- [c70]Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer:
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. ACL (1) 2017: 146-157 - [c69]Luheng He, Kenton Lee, Mike Lewis, Luke Zettlemoyer:
Deep Semantic Role Labeling: What Works and What's Next. ACL (1) 2017: 473-483 - [c68]Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer:
Learning a Neural Semantic Parser from User Feedback. ACL (1) 2017: 963-973 - [c67]Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer:
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. ACL (1) 2017: 1601-1611 - [c66]Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer:
Zero-Shot Relation Extraction via Reading Comprehension. CoNLL 2017: 333-342 - [c65]Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi:
Commonly Uncommon: Semantic Sparsity in Situation Recognition. CVPR 2017: 6335-6344 - [c64]Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer:
End-to-end Neural Coreference Resolution. EMNLP 2017: 188-197 - [i18]Ioannis Konstas, Srinivasan Iyer, Mark Yatskar, Yejin Choi, Luke Zettlemoyer:
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation. CoRR abs/1704.08381 (2017) - [i17]Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Jayant Krishnamurthy, Luke Zettlemoyer:
Learning a Neural Semantic Parser from User Feedback. CoRR abs/1704.08760 (2017) - [i16]Mandar Joshi, Eunsol Choi, Daniel S. Weld, Luke Zettlemoyer:
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension. CoRR abs/1705.03551 (2017) - [i15]Kenton Lee, Omer Levy, Luke Zettlemoyer:
Recurrent Additive Networks. CoRR abs/1705.07393 (2017) - [i14]Omer Levy, Minjoon Seo, Eunsol Choi, Luke Zettlemoyer:
Zero-Shot Relation Extraction via Reading Comprehension. CoRR abs/1706.04115 (2017) - [i13]Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer:
End-to-end Neural Coreference Resolution. CoRR abs/1707.07045 (2017) - [i12]Julian Michael, Gabriel Stanovsky, Luheng He, Ido Dagan, Luke Zettlemoyer:
Crowdsourcing Question-Answer Meaning Representations. CoRR abs/1711.05885 (2017) - 2016
- [c63]Eunsol Choi, Hannah Rashkin, Luke Zettlemoyer, Yejin Choi:
Document-level Sentiment Inference with Social, Faction, and Discourse Context. ACL (1) 2016 - [c62]Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, Luke Zettlemoyer:
Summarizing Source Code using a Neural Attention Model. ACL (1) 2016 - [c61]Mark Yatskar, Luke Zettlemoyer, Ali Farhadi:
Situation Recognition: Visual Semantic Role Labeling for Image Understanding. CVPR 2016: 5534-5542 - [c60]Chloé Kiddon, Luke Zettlemoyer, Yejin Choi:
Globally Coherent Text Generation with Neural Checklist Models. EMNLP 2016: 329-339 - [c59]Rik Koncel-Kedziorski, Ioannis Konstas, Luke Zettlemoyer, Hannaneh Hajishirzi:
A Theme-Rewriting Approach for Generating Algebra Word Problems. EMNLP 2016: 1617-1628 - [c58]Luheng He, Julian Michael, Mike Lewis, Luke Zettlemoyer:
Human-in-the-Loop Parsing. EMNLP 2016: 2337-2342 - [c57]Kenton Lee, Mike Lewis, Luke Zettlemoyer:
Global Neural CCG Parsing with Optimality Guarantees. EMNLP 2016: 2366-2376 - [c56]Mike Lewis, Kenton Lee, Luke Zettlemoyer:
LSTM CCG Parsing. HLT-NAACL 2016: 221-231 - [i11]Kenton Lee, Mike Lewis, Luke Zettlemoyer:
Global Neural CCG Parsing with Optimality Guarantees. CoRR abs/1607.01432 (2016) - [i10]Rik Koncel-Kedziorski, Ioannis Konstas, Luke Zettlemoyer, Hannaneh Hajishirzi:
A Theme-Rewriting Approach for Generating Algebra Word Problems. CoRR abs/1610.06210 (2016) - [i9]Mark Yatskar, Vicente Ordonez, Luke Zettlemoyer, Ali Farhadi:
Commonly Uncommon: Semantic Sparsity in Situation Recognition. CoRR abs/1612.00901 (2016) - 2015
- [c55]Eunsol Choi, Tom Kwiatkowski, Luke Zettlemoyer:
Scalable Semantic Parsing with Partial Ontologies. ACL (1) 2015: 1311-1320 - [c54]Luheng He, Mike Lewis, Luke Zettlemoyer:
Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language. EMNLP 2015: 643-653 - [c53]Chloé Kiddon, Ganesa Thandavam Ponnuraj, Luke Zettlemoyer, Yejin Choi:
Mise en Place: Unsupervised Interpretation of Instructional Recipes. EMNLP 2015: 982-992 - [c52]Mike Lewis, Luheng He, Luke Zettlemoyer:
Joint A* CCG Parsing and Semantic Role Labelling. EMNLP 2015: 1444-1454 - [c51]Kenton Lee, Yoav Artzi, Yejin Choi, Luke Zettlemoyer:
Event Detection and Factuality Assessment with Non-Expert Supervision. EMNLP 2015: 1643-1648 - [c50]Yoav Artzi, Kenton Lee, Luke Zettlemoyer:
Broad-coverage CCG Semantic Parsing with AMR. EMNLP 2015: 1699-1710 - [c49]Maxwell Forbes, Rajesh P. N. Rao, Luke Zettlemoyer, Maya Cakmak:
Robot Programming by Demonstration with situated spatial language understanding. ICRA 2015: 2014-2020 - [c48]Oleksandr Polozov, Eleanor O'Rourke, Adam M. Smith, Luke Zettlemoyer, Sumit Gulwani, Zoran Popovic:
Personalized Mathematical Word Problem Generation. IJCAI 2015: 381-388 - [i8]Raphael Hoffmann, Luke Zettlemoyer, Daniel S. Weld:
Extreme Extraction: Only One Hour per Relation. CoRR abs/1506.06418 (2015) - 2014
- [j6]Antoine Bordes, Léon Bottou, Ronan Collobert, Dan Roth, Jason Weston, Luke Zettlemoyer:
Introduction to the special issue on learning semantics. Mach. Learn. 94(2): 127-131 (2014) - [c47]Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, Dieter Fox:
Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions. AAAI 2014: 2556-2563 - [c46]Nate Kushman, Luke Zettlemoyer, Regina Barzilay, Yoav Artzi:
Learning to Automatically Solve Algebra Word Problems. ACL (1) 2014: 271-281 - [c45]Kenton Lee, Yoav Artzi, Jesse Dodge, Luke Zettlemoyer:
Context-dependent Semantic Parsing for Time Expressions. ACL (1) 2014: 1437-1447 - [c44]Adrienne X. Wang, Tom Kwiatkowski, Luke Zettlemoyer:
Morpho-syntactic Lexical Generalization for CCG Semantic Parsing. EMNLP 2014: 1284-1295 - [c43]Anthony Fader, Luke Zettlemoyer, Oren Etzioni:
Open question answering over curated and extracted knowledge bases. KDD 2014: 1156-1165 - [c42]Mark Yatskar, Michel Galley, Lucy Vanderwende, Luke Zettlemoyer:
See No Evil, Say No Evil: Description Generation from Densely Labeled Images. *SEM@COLING 2014: 110-120 - 2013
- [j5]Yoav Artzi, Luke Zettlemoyer:
Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions. Trans. Assoc. Comput. Linguistics 1: 49-62 (2013) - [j4]Alan Ritter, Luke Zettlemoyer, Mausam, Oren Etzioni:
Modeling Missing Data in Distant Supervision for Information Extraction. Trans. Assoc. Comput. Linguistics 1: 367-378 (2013) - [j3]Bryan C. Russell, Ricardo Martin-Brualla, Daniel J. Butler, Steven M. Seitz, Luke Zettlemoyer:
3D Wikipedia: using online text to automatically label and navigate reconstructed geometry. ACM Trans. Graph. 32(6): 193:1-193:10 (2013) - [c41]Yoav Artzi, Nicholas FitzGerald, Luke Zettlemoyer:
Semantic Parsing with Combinatory Categorial Grammars. ACL (Tutorial Abstracts) 2013: 2 - [c40]Anthony Fader, Luke Zettlemoyer, Oren Etzioni:
Paraphrase-Driven Learning for Open Question Answering. ACL (1) 2013: 1608-1618 - [c39]Svitlana Volkova, Pallavi Choudhury, Chris Quirk, Bill Dolan, Luke Zettlemoyer:
Lightly Supervised Learning of Procedural Dialog Systems. ACL (1) 2013: 1669-1679 - [c38]Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, Luke Zettlemoyer:
Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves. EMNLP 2013: 289-299 - [c37]Grace Muzny, Luke Zettlemoyer:
Automatic Idiom Identification in Wiktionary. EMNLP 2013: 1417-1421 - [c36]Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, Luke Zettlemoyer:
Scaling Semantic Parsers with On-the-Fly Ontology Matching. EMNLP 2013: 1545-1556 - [c35]Nicholas FitzGerald, Yoav Artzi, Luke Zettlemoyer:
Learning Distributions over Logical Forms for Referring Expression Generation. EMNLP 2013: 1914-1925 - [c34]Necip Fazil Ayan, Arindam Mandal, Michael W. Frandsen, Jing Zheng, Peter Blasco, Andreas Kathol, Frédéric Béchet, Benoît Favre, Alex Marin, Tom Kwiatkowski, Mari Ostendorf, Luke Zettlemoyer, Philipp Salletmayr, Julia Hirschberg, Svetlana Stoyanchev:
"Can you give me another word for hyperbaric?": Improving speech translation using targeted clarification questions. ICASSP 2013: 8391-8395 - [c33]Mark Yatskar, Svitlana Volkova, Asli Celikyilmaz, Bill Dolan, Luke Zettlemoyer:
Learning to Relate Literal and Sentimental Descriptions of Visual Properties. HLT-NAACL 2013: 416-425 - [i7]Yoav Artzi, Luke Zettlemoyer:
UW SPF: The University of Washington Semantic Parsing Framework. CoRR abs/1311.3011 (2013) - 2012
- [c32]Einat Minkov, Luke Zettlemoyer:
Discriminative Learning for Joint Template Filling. ACL (1) 2012: 845-853 - [c31]Tom Kwiatkowski, Sharon Goldwater, Luke Zettlemoyer, Mark Steedman:
A Probabilistic Model of Syntactic and Semantic Acquisition from Child-Directed Utterances and their Meanings. EACL 2012: 234-244 - [c30]Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, Dieter Fox:
A Joint Model of Language and Perception for Grounded Attribute Learning. ICML 2012 - [c29]Cynthia Matuszek, Evan Herbst, Luke Zettlemoyer, Dieter Fox:
Learning to Parse Natural Language Commands to a Robot Control System. ISER 2012: 403-415 - [c28]Alex Marin, Tom Kwiatkowski, Mari Ostendorf, Luke Zettlemoyer:
Using syntactic and confusion network structure for out-of-vocabulary word detection. SLT 2012: 159-164 - [c27]Kira Mourão, Luke Zettlemoyer, Ronald P. A. Petrick, Mark Steedman:
Learning STRIPS Operators from Noisy and Incomplete Observations. UAI 2012: 614-623 - [c26]Jeff Huang, Oren Etzioni, Luke Zettlemoyer, Kevin Clark, Christian Lee:
RevMiner: an extractive interface for navigating reviews on a smartphone. UIST 2012: 3-12 - [i6]Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Probabilistic Relational Dynamics for Multiple Tasks. CoRR abs/1206.5249 (2012) - [i5]Luke S. Zettlemoyer, Michael Collins:
Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. CoRR abs/1207.1420 (2012) - [i4]Kira Mourão, Luke Zettlemoyer, Ronald P. A. Petrick, Mark Steedman:
Learning STRIPS Operators from Noisy and Incomplete Observations. CoRR abs/1210.4889 (2012) - 2011
- [c25]Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, Daniel S. Weld:
Knowledge-Based Weak Supervision for Information Extraction of Overlapping Relations. ACL 2011: 541-550 - [c24]Yoav Artzi, Luke Zettlemoyer:
Bootstrapping Semantic Parsers from Conversations. EMNLP 2011: 421-432 - [c23]Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, Mark Steedman:
Lexical Generalization in CCG Grammar Induction for Semantic Parsing. EMNLP 2011: 1512-1523 - [i3]Leslie Pack Kaelbling, Hanna M. Pasula, Luke S. Zettlemoyer:
Learning Symbolic Models of Stochastic Domains. CoRR abs/1110.2211 (2011) - 2010
- [c22]S. R. K. Branavan, Luke Zettlemoyer, Regina Barzilay:
Reading between the Lines: Learning to Map High-Level Instructions to Commands. ACL 2010: 1268-1277 - [c21]Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, Mark Steedman:
Inducing Probabilistic CCG Grammars from Logical Form with Higher-Order Unification. EMNLP 2010: 1223-1233
2000 – 2009
- 2009
- [b1]Luke S. Zettlemoyer:
Learning to map sentences to logical form. Massachusetts Institute of Technology, Cambridge, MA, USA, 2009 - [c20]S. R. K. Branavan, Harr Chen, Luke Zettlemoyer, Regina Barzilay:
Reinforcement Learning for Mapping Instructions to Actions. ACL/IJCNLP 2009: 82-90 - [c19]Luke Zettlemoyer, Michael Collins:
Learning Context-Dependent Mappings from Sentences to Logical Form. ACL/IJCNLP 2009: 976-984 - 2008
- [c18]Brian Milch, Luke S. Zettlemoyer, Kristian Kersting, Michael Haimes, Leslie Pack Kaelbling:
Lifted Probabilistic Inference with Counting Formulas. AAAI 2008: 1062-1068 - [c17]Wei Lu, Hwee Tou Ng, Wee Sun Lee, Luke S. Zettlemoyer:
A Generative Model for Parsing Natural Language to Meaning Representations. EMNLP 2008: 783-792 - [c16]Luke S. Zettlemoyer, Brian Milch, Leslie Pack Kaelbling:
Multi-Agent Filtering with Infinitely Nested Beliefs. NIPS 2008: 1905-1912 - 2007
- [j2]Hanna M. Pasula, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Symbolic Models of Stochastic Domains. J. Artif. Intell. Res. 29: 309-352 (2007) - [c15]Luke Zettlemoyer, Michael Collins:
Online Learning of Relaxed CCG Grammars for Parsing to Logical Form. EMNLP-CoNLL 2007: 678-687 - [c14]Luke Zettlemoyer, Robert Moore:
Selective Phrase Pair Extraction for Improved Statistical Machine Translation. HLT-NAACL (Short Papers) 2007: 209-212 - [c13]Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Probabilistic Relational Dynamics for Multiple Tasks. UAI 2007: 83-92 - [i2]Ashwin Deshpande, Brian Milch, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Probabilistic Relational Dynamics for Multiple Tasks. Probabilistic, Logical and Relational Learning - A Further Synthesis 2007 - [i1]Luke S. Zettlemoyer, Hanna M. Pasula, Leslie Pack Kaelbling:
Logical Particle Filtering. Probabilistic, Logical and Relational Learning - A Further Synthesis 2007 - 2005
- [c12]Luke S. Zettlemoyer, Hanna Pasula, Leslie Pack Kaelbling:
Learning Planning Rules in Noisy Stochastic Worlds. AAAI 2005: 911-918 - [c11]Luke S. Zettlemoyer, Michael Collins:
Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. UAI 2005: 658-666 - 2004
- [c10]Hanna Pasula, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Probabilistic Relational Planning Rules. ICAPS 2004: 73-82 - [c9]Hanna Pasula, Luke S. Zettlemoyer, Leslie Pack Kaelbling:
Learning Probabilistic Relational Planning Rules. KR 2004: 683-691 - 2001
- [p1]Robert St. Amant, Henry Lieberman, Richard Potter, Luke Zettlemoyer:
Visual Generalization in Programming by Example. Your Wish is My Command 2001: 371-385 - 2000
- [j1]Robert St. Amant, Henry Lieberman, Richard Potter, Luke Zettlemoyer:
Visual Generalization in Programming by Example. Commun. ACM 43(3): 107-114 (2000) - [c8]Robert St. Amant, Luke S. Zettlemoyer:
User Interface Softbots. AAAI/IAAI 2000: 1129-1130 - [c7]Robert St. Amant, Luke S. Zettlemoyer:
The user interface as an agent environment. Agents 2000: 483-490
1990 – 1999
- 1999
- [c6]James C. Lester, Luke S. Zettlemoyer, Joël P. Grégoire, William H. Bares:
Explanatory Lifelike Avatars: Performing User-Centered Tasks in 3D Learning Environments. Agents 1999: 24-31 - [c5]Luke S. Zettlemoyer, Robert St. Amant:
A Visual Medium for Programmatic Control of Interactive Applications. CHI 1999: 199-206 - [c4]Martin S. Dulberg, Robert St. Amant, Luke S. Zettlemoyer:
An Imprecise Mouse Gesture for the Fast Activation of Controls. INTERACT 1999: 375-382 - [c3]Luke S. Zettlemoyer, Robert St. Amant, Martin S. Dulberg:
IBOTS: Agent Control Through the User Interface. IUI 1999: 31-37 - 1998
- [c2]William H. Bares, Luke S. Zettlemoyer, James C. Lester:
Habitable 3D Learning Environments for Situated Learning. Intelligent Tutoring Systems 1998: 76-85 - [c1]William H. Bares, Luke S. Zettlemoyer, Dennis W. Rodriguez, James C. Lester:
Task-sensitive Cinematography Interfaces for Interactive 3D Learning Environments. IUI 1998: 81-88
last updated on 2024-12-04 20:15 CET by the dblp team
all metadata released as open data under CC0 1.0 license