
Yejin Bang

High-Dimension Human Value Representation in Large Language Models

Apr 11, 2024
Samuel Cahyawijaya, Delong Chen, Yejin Bang, Leila Khalatbari, Bryan Wilie, Ziwei Ji, Etsuko Ishii, Pascale Fung

Measuring Political Bias in Large Language Models: What Is Said and How It Is Said

Mar 27, 2024
Yejin Bang, Delong Chen, Nayeon Lee, Pascale Fung

Mitigating Framing Bias with Polarity Minimization Loss

Nov 03, 2023
Yejin Bang, Nayeon Lee, Pascale Fung

Survey of Social Bias in Vision-Language Models

Sep 24, 2023
Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung

Learn What NOT to Learn: Towards Generative Safety in Chatbots

Apr 25, 2023
Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

Feb 28, 2023
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

Nov 10, 2022
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, Vítor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer

Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values

Oct 14, 2022
Yejin Bang, Tiezheng Yu, Andrea Madotto, Zhaojiang Lin, Mona Diab, Pascale Fung
