
Huaxiu Yao

RULE: Reliable Multimodal RAG for Factuality in Medical Vision Language Models

Jul 06, 2024

MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?

Jul 05, 2024

CAT: Interpretable Concept-based Taylor Additive Models

Jun 27, 2024

It Takes Two: On the Seamlessness between Reward and Policy Model in RLHF

Jun 12, 2024

CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models

Jun 10, 2024

Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement

May 29, 2024

Calibrated Self-Rewarding Vision Language Models

May 23, 2024

LITE: Modeling Environmental Ecosystems with Multimodal Large Language Models

Apr 01, 2024

Electrocardiogram Instruction Tuning for Report Generation

Mar 13, 2024

HALC: Object Hallucination Reduction via Adaptive Focal-Contrast Decoding

Mar 01, 2024