Saurabh Shah

OLMo: Accelerating the Science of Language Models

Feb 07, 2024
Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi

Explanation-based Finetuning Makes Models More Robust to Spurious Cues

May 08, 2023
Josh Magnus Ludan, Yixuan Meng, Tai Nguyen, Saurabh Shah, Qing Lyu, Marianna Apidianaki, Chris Callison-Burch
