Yuqi Lin

MMT-Bench: A Comprehensive Multimodal Benchmark for Evaluating Large Vision-Language Models Towards Multitask AGI

Apr 24, 2024

UniHDA: Towards Universal Hybrid Domain Adaptation of Image Generators

Jan 23, 2024

TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training

Dec 20, 2023

Few-shot Hybrid Domain Adaptation of Image Generators

Oct 30, 2023

Self-supervised and Weakly Supervised Contrastive Learning for Frame-wise Action Representations

Dec 23, 2022

CLIP is Also an Efficient Segmenter: A Text-Driven Approach for Weakly Supervised Semantic Segmentation

Dec 20, 2022