Zhongyu Wei

Fudan University

MC-CoT: A Modular Collaborative CoT Framework for Zero-shot Medical-VQA with LLM and MLLM Integration

Oct 06, 2024

Symbolic Working Memory Enhances Language Models for Complex Rule Application

Aug 24, 2024

Identity-Driven Hierarchical Role-Playing Agents

Jul 28, 2024

Overview of AI-Debater 2023: The Challenges of Argument Generation Tasks

Jul 24, 2024

Synergistic Multi-Agent Framework with Trajectory Learning for Knowledge-Intensive Tasks

Jul 13, 2024

HAF-RM: A Hybrid Alignment Framework for Reward Model Training

Jul 04, 2024

From LLMs to MLLMs: Exploring the Landscape of Multimodal Jailbreaking

Jun 21, 2024

Overview of the CAIL 2023 Argument Mining Track

Jun 20, 2024

EmbSpatial-Bench: Benchmarking Spatial Understanding for Embodied Tasks with Large Vision-Language Models

Jun 09, 2024

VoCoT: Unleashing Visually Grounded Multi-Step Reasoning in Large Multi-Modal Models

May 28, 2024