Nathan Scales

Large Language Models Can Be Easily Distracted by Irrelevant Context

Feb 13, 2023
Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed Chi, Nathanael Schärli, Denny Zhou

Large Language Models Encode Clinical Knowledge

Dec 26, 2022
Karan Singhal, Shekoofeh Azizi, Tao Tu, S. Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, Perry Payne, Martin Seneviratne, Paul Gamble, Chris Kelly, Nathanael Schärli, Aakanksha Chowdhery, Philip Mansfield, Blaise Aguera y Arcas, Dale Webster, Greg S. Corrado, Yossi Matias, Katherine Chou, Juraj Gottweis, Nenad Tomasev, Yun Liu, Alvin Rajkomar, Joelle Barral, Christopher Semturs, Alan Karthikesalingam, Vivek Natarajan

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

Oct 17, 2022
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, Jason Wei

Compositional Semantic Parsing with Large Language Models

Sep 30, 2022
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou

Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

May 21, 2022
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, Ed Chi

*-CFQ: Analyzing the Scalability of Machine Learning on a Compositional Task

Dec 15, 2020
Dmitry Tsarkov, Tibor Tihon, Nathan Scales, Nikola Momchev, Danila Sinopalnikov, Nathanael Schärli

Compositional Generalization in Semantic Parsing: Pre-training vs. Specialized Architectures

Jul 21, 2020
Daniel Furrer, Marc van Zee, Nathan Scales, Nathanael Schärli
