Tim Dettmers
2020 – today
- 2024
  - [c16] Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, Dan Alistarh: SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. ICLR 2024
  - [i19] Rulin Shao, Jacqueline He, Akari Asai, Weijia Shi, Tim Dettmers, Sewon Min, Luke Zettlemoyer, Pang Wei Koh: Scaling Retrieval-Based Language Models with a Trillion-Token Datastore. CoRR abs/2407.12854 (2024)
  - [i18] Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi: OLMoE: Open Mixture-of-Experts Language Models. CoRR abs/2409.02060 (2024)
- 2023
  - [c15] Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Maksim Riabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel: Petals: Collaborative Inference and Fine-tuning of Large Models. ACL (demo) 2023: 558-568
  - [c14] Zeyu Liu, Tim Dettmers, Xi Lin, Veselin Stoyanov, Xian Li: Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model. EMNLP 2023: 15038-15061
  - [c13] Tim Dettmers, Luke Zettlemoyer: The case for 4-bit precision: k-bit Inference Scaling Laws. ICML 2023: 7750-7774
  - [c12] Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov: SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient. ICML 2023: 29416-29440
  - [c11] Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, Colin A. Raffel: Distributed Inference and Fine-tuning of Large Language Models Over The Internet. NeurIPS 2023
  - [c10] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer: QLoRA: Efficient Finetuning of Quantized LLMs. NeurIPS 2023
  - [c9] Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt: Stable and low-precision training for large-scale vision-language models. NeurIPS 2023
  - [i17] Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov: SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient. CoRR abs/2301.11913 (2023)
  - [i16] Mitchell Wortsman, Tim Dettmers, Luke Zettlemoyer, Ari Morcos, Ali Farhadi, Ludwig Schmidt: Stable and low-precision training for large-scale vision-language models. CoRR abs/2304.13013 (2023)
  - [i15] Leo Z. Liu, Tim Dettmers, Xi Victoria Lin, Veselin Stoyanov, Xian Li: Towards A Unified View of Sparse Feed-Forward Network in Pretraining Large Language Model. CoRR abs/2305.13999 (2023)
  - [i14] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer: QLoRA: Efficient Finetuning of Quantized LLMs. CoRR abs/2305.14314 (2023)
  - [i13] Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, Dan Alistarh: SpQR: A Sparse-Quantized Representation for Near-Lossless LLM Weight Compression. CoRR abs/2306.03078 (2023)
  - [i12] Devvrit, Sneha Kudugunta, Aditya Kusupati, Tim Dettmers, Kaifeng Chen, Inderjit S. Dhillon, Yulia Tsvetkov, Hannaneh Hajishirzi, Sham M. Kakade, Ali Farhadi, Prateek Jain: MatFormer: Nested Transformer for Elastic Inference. CoRR abs/2310.07707 (2023)
  - [i11] Alexander Borzunov, Max Ryabinin, Artem Chumachenko, Dmitry Baranchuk, Tim Dettmers, Younes Belkada, Pavel Samygin, Colin Raffel: Distributed Inference and Fine-tuning of Large Language Models Over The Internet. CoRR abs/2312.08361 (2023)
- 2022
  - [c8] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. ICLR 2022
  - [c7] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale. NeurIPS 2022
  - [i10] Alexander Borzunov, Max Ryabinin, Tim Dettmers, Quentin Lhoest, Lucile Saulnier, Michael Diskin, Yacine Jernite, Thomas Wolf: Training Transformers Together. CoRR abs/2207.03481 (2022)
  - [i9] Margaret Li, Suchin Gururangan, Tim Dettmers, Mike Lewis, Tim Althoff, Noah A. Smith, Luke Zettlemoyer: Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models. CoRR abs/2208.03306 (2022)
  - [i8] Tim Dettmers, Mike Lewis, Younes Belkada, Luke Zettlemoyer: LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale. CoRR abs/2208.07339 (2022)
  - [i7] Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, Colin Raffel: Petals: Collaborative Inference and Fine-tuning of Large Models. CoRR abs/2209.01188 (2022)
  - [i6] Tim Dettmers, Luke Zettlemoyer: The case for 4-bit precision: k-bit Inference Scaling Laws. CoRR abs/2212.09720 (2022)
- 2021
  - [c6] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. ICML 2021: 6265-6274
  - [c5] Alexander Borzunov, Max Ryabinin, Tim Dettmers, Quentin Lhoest, Lucile Saulnier, Michael Diskin, Yacine Jernite, Thomas Wolf: Training Transformers Together. NeurIPS (Competition and Demos) 2021: 335-342
  - [i5] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, Luke Zettlemoyer: BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021)
  - [i4] Tim Dettmers, Mike Lewis, Sam Shleifer, Luke Zettlemoyer: 8-bit Optimizers via Block-wise Quantization. CoRR abs/2110.02861 (2021)
- 2020
  - [c4] Gabriel Ilharco, Cesar Ilharco, Iulia Turc, Tim Dettmers, Felipe Ferreira, Kenton Lee: High Performance Natural Language Processing. EMNLP (Tutorial Abstracts) 2020: 24-27
2010 – 2019
- 2019
  - [i3] Tim Dettmers, Luke Zettlemoyer: Sparse Networks from Scratch: Faster Training without Losing Performance. CoRR abs/1907.04840 (2019)
- 2018
  - [c3] Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel: Convolutional 2D Knowledge Graph Embeddings. AAAI 2018: 1811-1818
  - [c2] Dirk Weissenborn, Pasquale Minervini, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bosnjak, Jeff Mitchell, Thomas Demeester, Tim Dettmers, Pontus Stenetorp, Sebastian Riedel: Jack the Reader - A Machine Reading Framework. ACL (4) 2018: 25-30
  - [i2] Dirk Weissenborn, Pasquale Minervini, Tim Dettmers, Isabelle Augenstein, Johannes Welbl, Tim Rocktäschel, Matko Bosnjak, Jeff Mitchell, Thomas Demeester, Pontus Stenetorp, Sebastian Riedel: Jack the Reader - A Machine Reading Framework. CoRR abs/1806.08727 (2018)
- 2017
  - [i1] Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel: Convolutional 2D Knowledge Graph Embeddings. CoRR abs/1707.01476 (2017)
- 2016
  - [c1] Tim Dettmers: 8-Bit Approximations for Parallelism in Deep Learning. ICLR (Poster) 2016