Ari Holtzman

CacheGen: Fast Context Loading for Language Model Applications

Oct 11, 2023
Yuhan Liu, Hanchen Li, Kuntai Du, Jiayi Yao, Yihua Cheng, Yuyang Huang, Shan Lu, Michael Maire, Henry Hoffmann, Ari Holtzman, Ganesh Ananthanarayanan, Junchen Jiang

How FaR Are Large Language Models From Agents with Theory-of-Mind?

Oct 04, 2023
Pei Zhou, Aman Madaan, Srividya Pranavi Potharaju, Aditya Gupta, Kevin R. McKee, Ari Holtzman, Jay Pujara, Xiang Ren, Swaroop Mishra, Aida Nematzadeh, Shyam Upadhyay, Manaal Faruqui

Generative Models as a Complex Systems Science: How can we make sense of large language model behavior?

Jul 31, 2023
Ari Holtzman, Peter West, Luke Zettlemoyer

QLoRA: Efficient Finetuning of Quantized LLMs

May 23, 2023
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, Luke Zettlemoyer

Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?

Dec 20, 2022
Weijia Shi, Xiaochuang Han, Hila Gonen, Ari Holtzman, Yulia Tsvetkov, Luke Zettlemoyer

Contrastive Decoding: Open-ended Text Generation as Optimization

Oct 27, 2022
Xiang Lisa Li, Ari Holtzman, Daniel Fried, Percy Liang, Jason Eisner, Tatsunori Hashimoto, Luke Zettlemoyer, Mike Lewis

What Do NLP Researchers Believe? Results of the NLP Community Metasurvey

Aug 26, 2022
Julian Michael, Ari Holtzman, Alicia Parrish, Aaron Mueller, Alex Wang, Angelica Chen, Divyam Madaan, Nikita Nangia, Richard Yuanzhe Pang, Jason Phang, Samuel R. Bowman

Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?

Feb 25, 2022
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, Luke Zettlemoyer

DEMix Layers: Disentangling Domains for Modular Language Modeling

Aug 20, 2021
Suchin Gururangan, Mike Lewis, Ari Holtzman, Noah A. Smith, Luke Zettlemoyer
