Mike Lewis
2020 – today
- 2024
- [c70] Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Omer Levy, Luke Zettlemoyer, Jason Weston, Mike Lewis: Self-Alignment with Instruction Backtranslation. ICLR 2024
- [c69] Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Richard James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: RA-DIT: Retrieval-Augmented Dual Instruction Tuning. ICLR 2024
- [c68] Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Wen-tau Yih, Mike Lewis: In-Context Pretraining: Language Modeling Beyond Document Boundaries. ICLR 2024
- [c67] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. ICLR 2024
- [c66] Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Wen-tau Yih: Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. NAACL (Short Papers) 2024: 783-791
- [c65] Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma: Effective Long-Context Scaling of Foundation Models. NAACL-HLT 2024: 4643-4663
- [c64] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Richard James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. NAACL-HLT 2024: 8371-8384
- [i70] Zexuan Zhong, Mengzhou Xia, Danqi Chen, Mike Lewis: Lory: Fully Differentiable Mixture-of-Experts for Autoregressive Language Model Pre-training. CoRR abs/2405.03133 (2024)
- [i69] Xi Victoria Lin, Akshat Shrivastava, Liang Luo, Srinivasan Iyer, Mike Lewis, Gargi Ghosh, Luke Zettlemoyer, Armen Aghajanyan: MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts. CoRR abs/2407.21770 (2024)
- [i68] Ming Zhong, Aston Zhang, Xuewei Wang, Rui Hou, Wenhan Xiong, Chenguang Zhu, Zhengxing Chen, Liang Tan, Chloe Bi, Mike Lewis, Sravya Popuri, Sharan Narang, Melanie Kambadur, Dhruv Mahajan, Sergey Edunov, Jiawei Han, Laurens van der Maaten: Law of the Weakest Link: Cross Capabilities of Large Language Models. CoRR abs/2409.19951 (2024)
- 2023
- [j7] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. Trans. Assoc. Comput. Linguistics 11: 600-616 (2023)
- [j6] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. IEEE ACM Trans. Audio Speech Lang. Process. 31: 3112-3126 (2023)
- [c63] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. ACL (Findings) 2023: 2097-2118
- [c62] Anastasia Razdaibiedina, Yuning Mao, Madian Khabsa, Mike Lewis, Rui Hou, Jimmy Ba, Amjad Almahairi: Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization. ACL (Findings) 2023: 6740-6757
- [c61] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. ACL (Findings) 2023: 8857-8873
- [c60] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. ACL (1) 2023: 12286-12312
- [c59] Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis: AutoReply: Detecting Nonsense in Dialogue with Discriminative Replies. EMNLP (Findings) 2023: 294-309
- [c58] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. EMNLP (Findings) 2023: 5687-5711
- [c57] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi: FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. EMNLP 2023: 12076-12100
- [c56] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Scott Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. ICLR 2023
- [c55] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi: Progressive Prompts: Continual Learning for Language Models. ICLR 2023
- [c54] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-Tau Yih: Retrieval-Augmented Multimodal Language Modeling. ICML 2023: 39755-39769
- [c53] Tianyi Zhang, Tao Yu, Tatsunori Hashimoto, Mike Lewis, Wen-Tau Yih, Daniel Fried, Sida Wang: Coder Reviewer Reranking for Code Generation. ICML 2023: 41832-41846
- [c52] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. NeurIPS 2023
- [c51] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. NeurIPS 2023
- [i67] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Amjad Almahairi: Progressive Prompts: Continual Learning for Language Models. CoRR abs/2301.12314 (2023)
- [i66] Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: REPLUG: Retrieval-Augmented Black-Box Language Models. CoRR abs/2301.12652 (2023)
- [i65] Suchin Gururangan, Margaret Li, Mike Lewis, Weijia Shi, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Scaling Expert Language Models with Unsupervised Domain Discovery. CoRR abs/2303.14177 (2023)
- [i64] Anastasia Razdaibiedina, Yuning Mao, Rui Hou, Madian Khabsa, Mike Lewis, Jimmy Ba, Amjad Almahairi: Residual Prompt Tuning: Improving Prompt Tuning with Residual Reparameterization. CoRR abs/2305.03937 (2023)
- [i63] Lili Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, Mike Lewis: MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers. CoRR abs/2305.07185 (2023)
- [i62] Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, Omer Levy: LIMA: Less Is More for Alignment. CoRR abs/2305.11206 (2023)
- [i61] Sewon Min, Kalpesh Krishna, Xinxi Lyu, Mike Lewis, Wen-tau Yih, Pang Wei Koh, Mohit Iyyer, Luke Zettlemoyer, Hannaneh Hajishirzi: FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation. CoRR abs/2305.14251 (2023)
- [i60] Weijia Shi, Xiaochuang Han, Mike Lewis, Yulia Tsvetkov, Luke Zettlemoyer, Scott Wen-tau Yih: Trusting Your Evidence: Hallucinate Less with Context-aware Decoding. CoRR abs/2305.14739 (2023)
- [i59] Xian Li, Ping Yu, Chunting Zhou, Timo Schick, Luke Zettlemoyer, Omer Levy, Jason Weston, Mike Lewis: Self-Alignment with Instruction Backtranslation. CoRR abs/2308.06259 (2023)
- [i58] Sean O'Brien, Mike Lewis: Contrastive Decoding Improves Reasoning in Large Language Models. CoRR abs/2309.09117 (2023)
- [i57] Wenhan Xiong, Jingyu Liu, Igor Molybog, Hejia Zhang, Prajjwal Bhargava, Rui Hou, Louis Martin, Rashi Rungta, Karthik Abinav Sankararaman, Barlas Oguz, Madian Khabsa, Han Fang, Yashar Mehdad, Sharan Narang, Kshitiz Malik, Angela Fan, Shruti Bhosale, Sergey Edunov, Mike Lewis, Sinong Wang, Hao Ma: Effective Long-Context Scaling of Foundation Models. CoRR abs/2309.16039 (2023)
- [i56] Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis: Efficient Streaming Language Models with Attention Sinks. CoRR abs/2309.17453 (2023)
- [i55] Xi Victoria Lin, Xilun Chen, Mingda Chen, Weijia Shi, Maria Lomeli, Rich James, Pedro Rodriguez, Jacob Kahn, Gergely Szilvasy, Mike Lewis, Luke Zettlemoyer, Scott Yih: RA-DIT: Retrieval-Augmented Dual Instruction Tuning. CoRR abs/2310.01352 (2023)
- [i54] Weijia Shi, Sewon Min, Maria Lomeli, Chunting Zhou, Margaret Li, Xi Victoria Lin, Noah A. Smith, Luke Zettlemoyer, Scott Yih, Mike Lewis: In-Context Pretraining: Language Modeling Beyond Document Boundaries. CoRR abs/2310.10638 (2023)
- 2022
- [c50] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. ACL (Findings) 2022: 711-728
- [c49] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. ACL (1) 2022: 5316-5330
- [c48] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. EMNLP 2022: 3781-3797
- [c47] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? EMNLP 2022: 11048-11064
- [c46] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. ICLR 2022
- [c45] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. ICLR 2022
- [c44] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. ICLR 2022
- [c43] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Student Models. NAACL-HLT 2022: 2361-2375
- [c42] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. NAACL-HLT 2022: 2791-2809
- [c41] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. NAACL-HLT 2022: 3340-3345
- [c40] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. NAACL-HLT 2022: 5557-5576
- [c39] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022
- [i53] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer: CM3: A Causal Masked Multimodal Model of the Internet. CoRR abs/2201.07520 (2022)
- [i52] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? CoRR abs/2202.12837 (2022)
- [i51] Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer, Mike Lewis: InCoder: A Generative Model for Code Infilling and Synthesis. CoRR abs/2204.05999 (2022)
- [i50] Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, Luke Zettlemoyer: Improving Passage Retrieval with Zero-Shot Question Generation. CoRR abs/2204.07496 (2022)
- [i49] Mandar Joshi, Terra Blevins, Mike Lewis, Daniel S. Weld, Luke Zettlemoyer: Few-shot Mining of Naturally Occurring Inputs and Outputs. CoRR abs/2205.04050 (2022)
- [i48] Siddharth Dalmia, Dmytro Okhonko, Mike Lewis, Sergey Edunov, Shinji Watanabe, Florian Metze, Luke Zettlemoyer, Abdelrahman Mohamed: LegoNN: Building Modular Encoder-Decoder Models. CoRR abs/2206.03318 (2022)
- [i47] Devendra Singh Sachan, Mike Lewis, Dani Yogatama, Luke Zettlemoyer, Joelle Pineau, Manzil Zaheer: Questions Are All You Need to Train a Dense Passage Retriever. CoRR abs/2206.10658 (2022)
- [i46] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
- [i45] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022)
- [i44] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, Mike Lewis: Measuring and Narrowing the Compositionality Gap in Language Models. CoRR abs/2210.03350 (2022)
- [i43] Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis: Contrastive Decoding: Open-ended Text Generation as Optimization. CoRR abs/2210.15097 (2022)
- [i42] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Rich James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, Wen-tau Yih: Retrieval-Augmented Multimodal Language Modeling. CoRR abs/2211.12561 (2022)
- [i41] Weiyan Shi, Emily Dinan, Adi Renduchintala, Daniel Fried, Athul Paul Jacob, Zhou Yu, Mike Lewis: AutoReply: Detecting Nonsense in Dialogue Introspectively with Discriminative Replies. CoRR abs/2211.12615 (2022)
- [i40] Tianyi Zhang, Tao Yu, Tatsunori B. Hashimoto, Mike Lewis, Wen-tau Yih, Daniel Fried, Sida I. Wang: Coder Reviewer Reranking for Code Generation. CoRR abs/2211.16490 (2022)
- [i39] Sewon Min, Weijia Shi, Mike Lewis, Xilun Chen, Wen-tau Yih, Hannaneh Hajishirzi, Luke Zettlemoyer: Nonparametric Masked Language Modeling. CoRR abs/2212.01349 (2022)
- [i38] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad: In-context Examples Selection for Machine Translation. CoRR abs/2212.02437 (2022)
- [i37] Andrew Lee, David Wu, Emily Dinan, Mike Lewis: Improving Chess Commentaries by Combining Language Models with Symbolic Reasoning Engines. CoRR abs/2212.08195 (2022)
- 2021
- [c38] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. ACL/IJCNLP (1) 2021: 5493-5505
- [c37] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. ACL/IJCNLP (1) 2021: 6787-6799
- [c36] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. ICLR 2021
- [c35] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
- [c34] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. NAACL-HLT 2021: 5351-5366
- [i36] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
- [i35] Athul Paul Jacob, Mike Lewis, Jacob Andreas: Multitasking Inhibits Semantic Drift. CoRR abs/2104.07219 (2021)
- [i34] Robin Jia, Mike Lewis, Luke Zettlemoyer: Question Answering Infused Pre-training of General-Purpose Contextualized Representations. CoRR abs/2106.08190 (2021)
- [i33] Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer: HTLM: Hyper-Text Pre-Training and Prompting of Language Models. CoRR abs/2107.06955 (2021)
- [i32] Sewon Min, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer: Noisy Channel Language Model Prompting for Few-Shot Text Classification. CoRR abs/2108.04106 (2021)
- [i31] Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer: DEMix Layers: Disentangling Domains for Modular Language Modeling. CoRR abs/2108.05036 (2021)
- [i30] Ofir Press, Noah A. Smith, Mike Lewis: Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation. CoRR abs/2108.12409 (2021)
- [i29] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021)
- [i28] Dheeru Dua, Shruti Bhosale, Vedanuj Goswami, James Cross, Mike Lewis, Angela Fan: Tricks for Training Sparse Translation Models. CoRR abs/2110.08246 (2021)
- [i27] Qinyuan Ye, Madian Khabsa, Mike Lewis, Sinong Wang, Xiang Ren, Aaron Jaech: Sparse Distillation: Speeding Up Text Classification by Using Bigger Models. CoRR abs/2110.08536 (2021)
- [i26] Sewon Min, Mike Lewis, Luke Zettlemoyer, Hannaneh Hajishirzi: MetaICL: Learning to Learn In Context. CoRR abs/2110.15943 (2021)
- 2020
- [j5] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. Trans. Assoc. Comput. Linguistics 8: 726-742 (2020)
- [c33] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. ACL 2020: 5008-5020
- [c32] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. ACL 2020: 7871-7880
- [c31] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Michael Haeger, Haoran Li, Yashar Mehdad, Veselin Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. EMNLP (1) 2020: 5026-5035
- [c30] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. EMNLP (1) 2020: 6869-6882
- [c29] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. ICLR 2020
- [c28] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. NeurIPS 2020
- [c27] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS 2020
- [i25] Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer: Multilingual Denoising Pre-training for Neural Machine Translation. CoRR abs/2001.08210 (2020)
- [i24] Alex Wang, Kyunghyun Cho, Mike Lewis: Asking and Answering Questions to Evaluate the Factual Consistency of Summaries. CoRR abs/2004.04228 (2020)
- [i23] Patrick S. H. Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel, Douwe Kiela: Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. CoRR abs/2005.11401 (2020)
- [i22] Mike Lewis, Marjan Ghazvininejad, Gargi Ghosh, Armen Aghajanyan, Sida I. Wang, Luke Zettlemoyer: Pre-training via Paraphrasing. CoRR abs/2006.15020 (2020)
- [i21] Victor Zhong, Mike Lewis, Sida I. Wang, Luke Zettlemoyer: Grounded Adaptation for Zero-shot Executable Semantic Parsing. CoRR abs/2009.07396 (2020)
- [i20] Armen Aghajanyan, Jean Maillard, Akshat Shrivastava, Keith Diedrick, Mike Haeger, Haoran Li, Yashar Mehdad, Ves Stoyanov, Anuj Kumar, Mike Lewis, Sonal Gupta: Conversational Semantic Parsing. CoRR abs/2009.13655 (2020)
- [i19] Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Nearest Neighbor Machine Translation. CoRR abs/2010.00710 (2020)
- [i18] Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, Sebastian Riedel: Joint Verification and Reranking for Open Fact Checking Over Tables. CoRR abs/2012.15115 (2020)
- [i17] Ofir Press, Noah A. Smith, Mike Lewis: Shortformer: Better Language Modeling using Shorter Inputs. CoRR abs/2012.15832 (2020)
2010 – 2019
- 2019
- [c26] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. ACL (1) 2019: 2650-2660
- [c25] Akshat Agarwal, Swaminathan Gurumurthy, Vasu Sharma, Mike Lewis, Katia P. Sycara: Community Regularization of Visually-Grounded Dialog. AAMAS 2019: 1042-1050
- [c24] Panupong Pasupat, Sonal Gupta, Karishma Mandyam, Rushin Shah, Mike Lewis, Luke Zettlemoyer: Span-based Hierarchical Semantic Parsing for Task-Oriented Dialog. EMNLP/IJCNLP (1) 2019: 1520-1526
- [c23] Mike Lewis, Angela Fan: Generative Question Answering: Learning to Answer the Whole Question. ICLR (Poster) 2019
- [c22] Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis: Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. NAACL-HLT (1) 2019: 3795-3805
- [c21] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. NeurIPS 2019: 10025-10034
- [i16] Angela Fan, Mike Lewis, Yann N. Dauphin: Strategies for Structuring Story Generation. CoRR abs/1902.01109 (2019)
- [i15] Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, Luke Zettlemoyer: Improving Semantic Parsing for Task Oriented Dialog. CoRR abs/1902.06000 (2019)
- [i14] Hengyuan Hu, Denis Yarats, Qucheng Gong, Yuandong Tian, Mike Lewis: Hierarchical Decision Making by Generating and Following Natural Language Instructions. CoRR abs/1906.00744 (2019)
- [i13] Sean Vasquez, Mike Lewis: MelNet: A Generative Model for Audio in the Frequency Domain. CoRR abs/1906.01083 (2019)
- [i12] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, Veselin Stoyanov: RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019)
- [i11] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, Luke Zettlemoyer: BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. CoRR abs/1910.13461 (2019)
- [i10] Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis: Generalization through Memorization: Nearest Neighbor Language Models. CoRR abs/1911.00172 (2019)
- [i9] Siddharth Dalmia, Abdelrahman Mohamed, Mike Lewis, Florian Metze, Luke Zettlemoyer: Enforcing Encoder-Decoder Modularity in Sequence-to-Sequence Models. CoRR abs/1911.03782 (2019)
- 2018
- [j4] Alane Suhr, Mike Lewis, James Yeh, Yoav Artzi: Evaluating Visual Reasoning through Grounded Language Understanding. AI Mag. 39(2): 45-52 (2018)
- [c20] Angela Fan, Mike Lewis, Yann N. Dauphin: Hierarchical Neural Story Generation. ACL (1) 2018: 889-898
- [c19] Spandana Gella, Mike Lewis, Marcus Rohrbach: A Dataset for Telling the Stories of Social Media Videos. EMNLP 2018: 968-974
- [c18] Nitish Gupta, Mike Lewis: Neural Compositional Denotational Semantics for Question Answering. EMNLP 2018: 2152-2161
- [c17] Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis: Semantic Parsing for Task Oriented Dialog using Hierarchical Representations. EMNLP 2018: 2787-2792
- [c16] Denis Yarats, Mike Lewis: Hierarchical Text Generation and Planning for Strategic Dialogue. ICML 2018: 5587-5595
- [c15] Paul C. Hershey, Mike Sica, Mike Lewis: Common ground control system (CGCS) to support autonomous object observation, collection, and response in multi-domain environments. SysCon 2018: 1-6
- [i8] Angela Fan, Mike Lewis, Yann N. Dauphin: Hierarchical Neural Story Generation. CoRR abs/1805.04833 (2018)
- [i7] Nitish Gupta, Mike Lewis: Neural Compositional Denotational Semantics for Question Answering. CoRR abs/1808.09942 (2018)
- [i6] Sonal Gupta, Rushin Shah, Mrinal Mohit, Anuj Kumar, Mike Lewis: Semantic Parsing for Task Oriented Dialog using Hierarchical Representations. CoRR abs/1810.07942 (2018)
- [i5] Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis: Cross-lingual Transfer Learning for Multilingual Task Oriented Dialog. CoRR abs/1810.13327 (2018)
- 2017
- [c14] Alane Suhr, Mike Lewis, James Yeh, Yoav Artzi: A Corpus of Natural Language for Visual Reasoning. ACL (2) 2017: 217-223
- [c13] Luheng He, Kenton Lee, Mike Lewis, Luke Zettlemoyer: Deep Semantic Role Labeling: What Works and What's Next. ACL (1) 2017: 473-483
- [c12] Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer: End-to-end Neural Coreference Resolution. EMNLP 2017: 188-197
- [c11] Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra: Deal or No Deal? End-to-End Learning of Negotiation Dialogues. EMNLP 2017: 2443-2453
- [i4] Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, Dhruv Batra: Deal or No Deal? End-to-End Learning for Negotiation Dialogues. CoRR abs/1706.05125 (2017)
- [i3] Kenton Lee, Luheng He, Mike Lewis, Luke Zettlemoyer: End-to-end Neural Coreference Resolution. CoRR abs/1707.07045 (2017)
- [i2] Denis Yarats, Mike Lewis: Hierarchical Text Generation and Planning for Strategic Dialogue. CoRR abs/1712.05846 (2017)
- 2016
- [c10] Luheng He, Julian Michael, Mike Lewis, Luke Zettlemoyer: Human-in-the-Loop Parsing. EMNLP 2016: 2337-2342
- [c9] Kenton Lee, Mike Lewis, Luke Zettlemoyer: Global Neural CCG Parsing with Optimality Guarantees. EMNLP 2016: 2366-2376
- [c8] Mike Lewis, Kenton Lee, Luke Zettlemoyer: LSTM CCG Parsing. HLT-NAACL 2016: 221-231
- [i1] Kenton Lee, Mike Lewis, Luke Zettlemoyer: Global Neural CCG Parsing with Optimality Guarantees. CoRR abs/1607.01432 (2016)
- 2015
- [c7] Luheng He, Mike Lewis, Luke Zettlemoyer: Question-Answer Driven Semantic Role Labeling: Using Natural Language to Annotate Natural Language. EMNLP 2015: 643-653
- [c6] Mike Lewis, Luheng He, Luke Zettlemoyer: Joint A* CCG Parsing and Semantic Role Labelling. EMNLP 2015: 1444-1454
- 2014
- [j3] Mike Lewis, Mark Steedman: Improved CCG Parsing with Semi-supervised Supertagging. Trans. Assoc. Comput. Linguistics 2: 327-338 (2014)
- [c5] Mike Lewis, Mark Steedman: A* CCG Parsing with a Supertag-factored Model. EMNLP 2014: 990-1000
- [c4] Peter Kaiser, Mike Lewis, Ronald P. A. Petrick, Tamim Asfour, Mark Steedman: Extracting common sense knowledge from text for robot planning. ICRA 2014: 3749-3756
- 2013
- [j2] Mike Lewis, Mark Steedman: Combined Distributional and Logical Semantics. Trans. Assoc. Comput. Linguistics 1: 179-192 (2013)
- [c3] Mike Lewis, Mark Steedman: Unsupervised Induction of Cross-Lingual Semantic Relations. EMNLP 2013: 681-692
- [c2] Kai Welke, Peter Kaiser, Alexey Kozlov, Nils Adermann, Tamim Asfour, Mike Lewis, Mark Steedman: Grounded spatial symbols for task planning based on experience. Humanoids 2013: 484-491
- [c1] Imran Khan Azeemi, Mike Lewis, Theo Tryfonas: Migrating To The Cloud: Lessons And Limitations Of 'Traditional' IS Success Models. CSER 2013: 737-746
2000 – 2009
- 2007
- [j1] David S. Wishart, Dan Tzur, Craig Knox, Roman Eisner, Anchi Guo, Nelson Young, Dean Cheng, Kevin Jewell, David Arndt, Summit Sawhney, Chris Fung, Lisa Nikolai, Mike Lewis, Marie-Aude Coutouly, Ian J. Forsythe, Peter Tang, Savita Shrivastava, Kevin Jeroncic, Paul Stothard, Godwin Amegbey, David Block, David D. Hau, James Wagner, Jessica Miniaci, Melisa Clements, Mulu Gebremedhin, Natalie Guo, Ying Zhang, Gavin E. Duggan, Glen D. MacInnis, Alim M. Weljie, Reza Dowlatabadi, Fiona Bamforth, Derrick Clive, Russell Greiner, Liang Li, Tom Marrie, Brian D. Sykes, Hans J. Vogel, Lori Querengesser: HMDB: the Human Metabolome Database. Nucleic Acids Res. 35(Database-Issue): 521-526 (2007)
last updated on 2024-10-22 20:12 CEST by the dblp team
all metadata released as open data under CC0 1.0 license