Taiwei Shi

How Susceptible are Large Language Models to Ideological Manipulation?

Feb 22, 2024
Kai Chen, Zihao He, Jun Yan, Taiwei Shi, Kristina Lerman

Can Language Model Moderators Improve the Health of Online Discourse?

Nov 16, 2023
Hyundong Cho, Shuai Liu, Taiwei Shi, Darpan Jain, Basem Rizk, Yuyang Huang, Zixun Lu, Nuan Wen, Jonathan Gratch, Emilio Ferrara, Jonathan May

Safer-Instruct: Aligning Language Models with Automated Preference Data

Nov 15, 2023
Taiwei Shi, Kai Chen, Jieyu Zhao

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

Oct 24, 2023
Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy F. Chen, Zhengyuan Liu, Diyi Yang

Neural Story Planning

Dec 16, 2022
Anbang Ye, Christopher Cui, Taiwei Shi, Mark O. Riedl
