Diyi Yang

Stanford University

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data

Apr 01, 2024

Mapping the Increasing Use of LLMs in Scientific Papers

Apr 01, 2024

Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors

Mar 21, 2024

A Safe Harbor for AI Evaluation and Red Teaming

Mar 07, 2024

Design2Code: How Far Are We From Automating Front-End Engineering?

Mar 05, 2024

Unintended Impacts of LLM Alignment on Global Representation

Feb 22, 2024

How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

Jan 23, 2024

From Scroll to Misbelief: Modeling the Unobservable Susceptibility to Misinformation on Social Media

Nov 16, 2023

Grounding or Guesswork? Large Language Models are Presumptive Grounders

Nov 15, 2023

A Material Lens on Coloniality in NLP

Nov 14, 2023