Zongxia Li

PANDA (Pedantic ANswer-correctness Determination and Adjudication): Improving Automatic Evaluation for Question Answering and Text Generation

Feb 17, 2024
Zongxia Li, Ishani Mondal, Yijun Liang, Huy Nghiem, Jordan Lee Boyd-Graber


Beyond Automated Evaluation Metrics: Evaluating Topic Models On Practical Social Science Content Analysis Tasks

Jan 29, 2024
Zongxia Li, Andrew Mao, Daniel Stephens, Pranav Goel, Emily Walpole, Alden Dima, Juan Fung, Jordan Boyd-Graber


CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering

Jan 24, 2024
Zongxia Li, Ishani Mondal, Yijun Liang, Huy Nghiem, Jordan Boyd-Graber


HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models

Oct 23, 2023
Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, Tianyi Zhou


Towards Understanding In-Context Learning with Contrastive Demonstrations and Saliency Maps

Jul 11, 2023
Zongxia Li, Paiheng Xu, Fuxiao Liu, Hyemi Song


SODAPOP: Open-Ended Discovery of Social Biases in Social Commonsense Reasoning Models

Oct 13, 2022
Haozhe An, Zongxia Li, Jieyu Zhao, Rachel Rudinger
