
Yejin Choi

SafetyAnalyst: Interpretable, transparent, and steerable LLM safety moderation

Oct 22, 2024

Diverging Preferences: When do Annotators Disagree and do Models Know?

Oct 18, 2024

SimpleToM: Exposing the Gap between Explicit ToM Inference and Implicit ToM Application in LLMs

Oct 17, 2024

Model Swarms: Collaborative Search to Adapt LLM Experts via Swarm Intelligence

Oct 15, 2024

Biased AI can Influence Political Decision-Making

Oct 08, 2024

ActionAtlas: A VideoQA Benchmark for Domain-specialized Action Recognition

Oct 08, 2024

Intuitions of Compromise: Utilitarianism vs. Contractualism

Oct 07, 2024

AI as Humanity's Salieri: Quantifying Linguistic Creativity of Language Models via Systematic Attribution of Machine Text against Web Text

Oct 05, 2024

Can Language Models Reason about Individualistic Human Values and Preferences?

Oct 04, 2024

CulturalBench: a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs

Oct 03, 2024