Nathan Lambert

RewardBench: Evaluating Reward Models for Language Modeling

Mar 20, 2024
Nathan Lambert, Valentina Pyatkin, Jacob Morrison, LJ Miranda, Bill Yuchen Lin, Khyathi Chandu, Nouha Dziri, Sachin Kumar, Tom Zick, Yejin Choi, Noah A. Smith, Hannaneh Hajishirzi

A Survey on Data Selection for Language Models

Mar 08, 2024
Alon Albalak, Yanai Elazar, Sang Michael Xie, Shayne Longpre, Nathan Lambert, Xinyi Wang, Niklas Muennighoff, Bairu Hou, Liangming Pan, Haewon Jeong, Colin Raffel, Shiyu Chang, Tatsunori Hashimoto, William Yang Wang

OLMo: Accelerating the Science of Language Models

Feb 07, 2024
Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi

Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research

Jan 31, 2024
Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, Kyle Lo

Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2

Nov 20, 2023
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A. Smith, Iz Beltagy, Hannaneh Hajishirzi

The Alignment Ceiling: Objective Mismatch in Reinforcement Learning from Human Feedback

Oct 31, 2023
Nathan Lambert, Roberto Calandra

Zephyr: Direct Distillation of LM Alignment

Oct 25, 2023
Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, Nathan Sarrazin, Omar Sanseviero, Alexander M. Rush, Thomas Wolf

A Unified View on Solving Objective Mismatch in Model-Based Reinforcement Learning

Oct 10, 2023
Ran Wei, Nathan Lambert, Anthony McDonald, Alfredo Garcia, Roberto Calandra
