Shuyang Sun

kNN-CLIP: Retrieval Enables Training-Free Segmentation on Continually Expanding Large Vocabularies
Apr 15, 2024

SynArtifact: Classifying and Alleviating Artifacts in Synthetic Images via Vision-Language Model
Mar 05, 2024

RAG-Driver: Generalisable Driving Explanations with Retrieval-Augmented In-Context Learning in Multi-Modal Large Language Model
Feb 16, 2024

CLIP as RNN: Segment Countless Visual Concepts without Training Endeavor
Dec 21, 2023

Real-Fake: Effective Training Data Synthesis Through Distribution Matching
Oct 16, 2023

OxfordTVG-HIC: Can Machine Make Humorous Captions from Images?
Jul 21, 2023

ReMaX: Relaxing for Better Training on Efficient Panoptic Segmentation
Jun 29, 2023

LUMix: Improving Mixup by Better Modelling Label Uncertainty
Nov 29, 2022

Is synthetic data from generative models ready for image recognition?
Oct 14, 2022

Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-efficiency, and Better Transferability
Mar 26, 2022