
Zefan Cai

LLM Critics Help Catch Bugs in Mathematics: Towards a Better Mathematical Verifier with Natural Language Feedback

Jun 30, 2024

The Reason behind Good or Bad: Towards a Better Mathematical Verifier with Natural Language Feedback

Jun 20, 2024

Mitigating Language-Level Performance Disparity in mPLMs via Teacher Language Selection and Cross-lingual Self-Distillation

Apr 12, 2024

Improving Event Definition Following For Zero-Shot Event Detection

Mar 05, 2024

PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

Feb 21, 2024

VeCAF: VLM-empowered Collaborative Active Finetuning with Training Objective Awareness

Jan 15, 2024

ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks

Nov 16, 2023

Distantly-Supervised Named Entity Recognition with Uncertainty-aware Teacher Learning and Student-student Collaborative Learning

Nov 14, 2023

Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond

Oct 16, 2023

MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning

Oct 02, 2023