Yichang Zhang

additional authors not shown

Qwen Technical Report

Sep 28, 2023

Transferring General Multimodal Pretrained Models to Text Recognition

Dec 19, 2022

OFASys: A Multi-Modal Multi-Task Learning System for Building Generalist Models

Dec 08, 2022

Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese

Nov 03, 2022

Sketch and Refine: Towards Faithful and Informative Table-to-Text Generation

May 31, 2021

M6: A Chinese Multimodal Pretrainer

Mar 02, 2021

Meta-KD: A Meta Knowledge Distillation Framework for Language Model Compression across Domains

Dec 02, 2020

Graph-based Multi-hop Reasoning for Long Text Generation

Sep 28, 2020

InterBERT: Vision-and-Language Interaction for Multi-modal Pretraining

Mar 30, 2020

Towards Knowledge-Based Recommender Dialog System

Sep 03, 2019