Tomasz Korbak

Foundational Challenges in Assuring Alignment and Safety of Large Language Models
Apr 15, 2024

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data
Apr 01, 2024

Towards Understanding Sycophancy in Language Models
Oct 27, 2023

Compositional preference models for aligning LMs
Oct 17, 2023

The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A"
Sep 22, 2023

Taken out of context: On measuring situational awareness in LLMs
Sep 01, 2023

Open Problems and Fundamental Limitations of Reinforcement Learning from Human Feedback
Jul 27, 2023

Inverse Scaling: When Bigger Isn't Better
Jun 15, 2023

Training Language Models with Language Feedback at Scale
Apr 09, 2023

Improving Code Generation by Training with Natural Language Feedback
Mar 28, 2023