Diyi Yang

Best Practices and Lessons Learned on Synthetic Data for Language Models

Apr 11, 2024
Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi Peng, Diyi Yang, Denny Zhou, Andrew M. Dai

Social Skill Training with Large Language Models

Apr 05, 2024
Diyi Yang, Caleb Ziems, William Held, Omar Shaikh, Michael S. Bernstein, John Mitchell

Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data

Apr 01, 2024
Matthias Gerstgrasser, Rylan Schaeffer, Apratim Dey, Rafael Rafailov, Henry Sleight, John Hughes, Tomasz Korbak, Rajashree Agrawal, Dhruv Pai, Andrey Gromov, Daniel A. Roberts, Diyi Yang, David L. Donoho, Sanmi Koyejo

Mapping the Increasing Use of LLMs in Scientific Papers

Apr 01, 2024
Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, Diyi Yang, Christopher Potts, Christopher D. Manning, James Y. Zou

Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors

Mar 21, 2024
Alicja Chaszczewicz, Raj Sanjay Shah, Ryan Louie, Bruce A Arnow, Robert Kraut, Diyi Yang

A Safe Harbor for AI Evaluation and Red Teaming

Mar 07, 2024
Shayne Longpre, Sayash Kapoor, Kevin Klyman, Ashwin Ramaswami, Rishi Bommasani, Borhane Blili-Hamelin, Yangsibo Huang, Aviya Skowron, Zheng-Xin Yong, Suhas Kotha, Yi Zeng, Weiyan Shi, Xianjun Yang, Reid Southen, Alexander Robey, Patrick Chao, Diyi Yang, Ruoxi Jia, Daniel Kang, Sandy Pentland, Arvind Narayanan, Percy Liang, Peter Henderson

Design2Code: How Far Are We From Automating Front-End Engineering?

Mar 05, 2024
Chenglei Si, Yanzhe Zhang, Zhengyuan Yang, Ruibo Liu, Diyi Yang

Unintended Impacts of LLM Alignment on Global Representation

Feb 22, 2024
Michael J. Ryan, William Held, Diyi Yang

How Johnny Can Persuade LLMs to Jailbreak Them: Rethinking Persuasion to Challenge AI Safety by Humanizing LLMs

Jan 23, 2024
Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, Weiyan Shi
