
Shanghang Zhang

Towards Unifying Understanding and Generation in the Era of Vision Foundation Models: A Survey from the Autoregression Perspective

Oct 29, 2024

EVA: An Embodied World Model for Future Video Anticipation

Oct 20, 2024

SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference

Oct 06, 2024

Expert-level vision-language foundation model for real-world radiology and comprehensive evaluation

Sep 24, 2024

FactorLLM: Factorizing Knowledge via Mixture of Experts for Large Language Models

Aug 15, 2024

MMTrail: A Multimodal Trailer Video Dataset with Language and Music Descriptions

Jul 30, 2024

Multimodal Large Language Models for Bioimage Analysis

Jul 29, 2024

MAVIS: Mathematical Visual Instruction Tuning

Jul 11, 2024

Fisher-aware Quantization for DETR Detectors with Critical-category Objectives

Jul 03, 2024

MR-MLLM: Mutual Reinforcement of Multimodal Comprehension and Vision Perception

Jun 22, 2024