Adam Tauman Kalai

Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding

Jan 23, 2024
Mirac Suzgun, Adam Tauman Kalai

Calibrated Language Models Must Hallucinate

Dec 03, 2023
Adam Tauman Kalai, Santosh S. Vempala

Testing Language Model Agents Safely in the Wild

Dec 03, 2023
Silen Naihin, David Atkinson, Marc Green, Merwane Hamadi, Craig Swift, Douglas Schonholtz, Adam Tauman Kalai, David Bau

Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation

Oct 03, 2023
Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai

Textbooks Are All You Need

Jun 20, 2023
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, Sébastien Bubeck, Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, Yuanzhi Li

Do Language Models Know When They're Hallucinating References?

May 29, 2023
Ayush Agrawal, Lester Mackey, Adam Tauman Kalai

Loss minimization yields multicalibration for large neural networks

Apr 19, 2023
Jarosław Błasiok, Parikshit Gopalan, Lunjia Hu, Adam Tauman Kalai, Preetum Nakkiran