Xiyang Dai

Should All Proposals be Treated Equally in Object Detection?

Jul 07, 2022

GLIPv2: Unifying Localization and Vision-Language Understanding

Jun 12, 2022

Detection Hub: Unifying Object Detection Datasets via Query Adaptation on Language Embedding

Jun 07, 2022

Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

Jun 03, 2022

Reduce Information Loss in Transformers for Pluralistic Image Inpainting

May 15, 2022

Multimodal Adaptive Distillation for Leveraging Unimodal Encoders for Vision-Language Tasks

Apr 28, 2022

Residual Mixture of Experts

Apr 20, 2022

CLIP-TD: CLIP Targeted Distillation for Vision-Language Tasks

Jan 15, 2022

RegionCLIP: Region-based Language-Image Pretraining

Dec 16, 2021

BEVT: BERT Pretraining of Video Transformers

Dec 02, 2021