Taiwei Shi

How Susceptible are Large Language Models to Ideological Manipulation?

Feb 22, 2024

Can Language Model Moderators Improve the Health of Online Discourse?

Nov 16, 2023

Safer-Instruct: Aligning Language Models with Automated Preference Data

Nov 15, 2023

CoAnnotating: Uncertainty-Guided Work Allocation between Human and Large Language Models for Data Annotation

Oct 24, 2023

Neural Story Planning

Dec 16, 2022