Yejin Bang

Mitigating Framing Bias with Polarity Minimization Loss

Nov 03, 2023
Yejin Bang, Nayeon Lee, Pascale Fung

Survey of Social Bias in Vision-Language Models

Sep 24, 2023
Nayeon Lee, Yejin Bang, Holy Lovenia, Samuel Cahyawijaya, Wenliang Dai, Pascale Fung

Learn What NOT to Learn: Towards Generative Safety in Chatbots

Apr 25, 2023
Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity

Feb 28, 2023
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia, Ziwei Ji, Tiezheng Yu, Willy Chung, Quyet V. Do, Yan Xu, Pascale Fung

Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness

Nov 10, 2022
Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, Vítor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda Bogen, Pascale Fung, Cristian Canton Ferrer

Enabling Classifiers to Make Judgements Explicitly Aligned with Human Values

Oct 14, 2022
Yejin Bang, Tiezheng Yu, Andrea Madotto, Zhaojiang Lin, Mona Diab, Pascale Fung

AiSocrates: Towards Answering Ethical Quandary Questions

May 24, 2022
Yejin Bang, Nayeon Lee, Tiezheng Yu, Leila Khalatbari, Yan Xu, Dan Su, Elham J. Barezi, Andrea Madotto, Hayden Kee, Pascale Fung
