Nanyun Peng

Are Personalized Stochastic Parrots More Dangerous? Evaluating Persona Biases in Dialogue Systems

Oct 23, 2023
Yixin Wan, Jieyu Zhao, Aman Chadha, Nanyun Peng, Kai-Wei Chang

Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge

Oct 23, 2023
Te-Lin Wu, Yu Zhou, Nanyun Peng

Evaluating Large Language Models on Controlled Generation Tasks

Oct 23, 2023
Jiao Sun, Yufei Tian, Wangchunshu Zhou, Nan Xu, Qian Hu, Rahul Gupta, John Frederick Wieting, Nanyun Peng, Xuezhe Ma

"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters

Oct 13, 2023
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng

Mitigating Bias for Question Answering Models by Tracking Bias Influence

Oct 13, 2023
Mingyu Derek Ma, Jiun-Yu Kao, Arpit Gupta, Yu-Hsiang Lin, Wenbo Zhao, Tagyoung Chung, Wei Wang, Kai-Wei Chang, Nanyun Peng

MIDDAG: Where Does Our News Go? Investigating Information Diffusion via Community-Level Information Pathways

Oct 04, 2023
Mingyu Derek Ma, Alexander K. Taylor, Nuan Wen, Yanchen Liu, Po-Nien Kung, Wenna Qin, Shicheng Wen, Azure Zhou, Diyi Yang, Xuezhe Ma, Nanyun Peng, Wei Wang

Contextual Label Projection for Cross-Lingual Structure Extraction

Sep 16, 2023
Tanmay Parekh, I-Hung Hsu, Kuan-Hao Huang, Kai-Wei Chang, Nanyun Peng

RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment

Jul 24, 2023
Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian
