Haoyi Qiu

VALOR-EVAL: Holistic Coverage and Faithfulness Evaluation of Large Vision-Language Models
Apr 22, 2024

From Pixels to Insights: A Survey on Automatic Chart Understanding in the Era of Large Foundation Models
Mar 25, 2024

New Job, New Gender? Measuring the Social Bias in Image Generation Models
Jan 01, 2024

AMRFact: Enhancing Summarization Factuality Evaluation with AMR-driven Training Data Generation
Nov 16, 2023

Gender Biases in Automatic Evaluation Metrics: A Case Study on Image Captioning
May 24, 2023