Angelica Chen

AI safety by debate via regret minimization

Dec 08, 2023
Xinyi Chen, Angelica Chen, Dean Foster, Elad Hazan

Sudden Drops in the Loss: Syntax Acquisition, Phase Transitions, and Simplicity Bias in MLMs

Sep 28, 2023
Angelica Chen, Ravid Shwartz-Ziv, Kyunghyun Cho, Matthew L. Leavitt, Naomi Saphra

Latent State Models of Training Dynamics

Aug 18, 2023
Michael Y. Hu, Angelica Chen, Naomi Saphra, Kyunghyun Cho

Two Failures of Self-Consistency in the Multi-Step Reasoning of LLMs

May 23, 2023
Angelica Chen, Jason Phang, Alicia Parrish, Vishakh Padmakumar, Chen Zhao, Samuel R. Bowman, Kyunghyun Cho

Training Language Models with Language Feedback at Scale

Apr 09, 2023
Jérémy Scheurer, Jon Ander Campos, Tomasz Korbak, Jun Shern Chan, Angelica Chen, Kyunghyun Cho, Ethan Perez

Improving Code Generation by Training with Natural Language Feedback

Mar 28, 2023
Angelica Chen, Jérémy Scheurer, Tomasz Korbak, Jon Ander Campos, Jun Shern Chan, Samuel R. Bowman, Kyunghyun Cho, Ethan Perez

EvoPrompting: Language Models for Code-Level Neural Architecture Search

Feb 28, 2023
Angelica Chen, David M. Dohan, David R. So

Pretraining Language Models with Human Preferences

Feb 16, 2023
Tomasz Korbak, Kejian Shi, Angelica Chen, Rasika Bhalerao, Christopher L. Buckley, Jason Phang, Samuel R. Bowman, Ethan Perez
