Youcai Zhang

Inject Semantic Concepts into Image Tagging for Open-Set Recognition
Oct 23, 2023

Recognize Anything: A Strong Image Tagging Model
Jun 09, 2023

Knowledge Distillation from Single to Multi Labels: an Empirical Study
Mar 15, 2023

Tag2Text: Guiding Vision-Language Model via Image Tagging
Mar 10, 2023

IDEA: Increasing Text Diversity via Online Multi-Label Recognition for Vision-Language Pre-training
Jul 12, 2022

Simple and Robust Loss Design for Multi-Label Learning with Missing Labels
Dec 27, 2021

Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation
Sep 29, 2021

On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals
Jul 30, 2021

Prime-Aware Adaptive Distillation
Aug 04, 2020