Xin Jiang

Harbin Institute of Technology, Shenzhen

Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding

May 21, 2022

Exploring Extreme Parameter Compression for Pre-trained Language Models

May 20, 2022

UTC: A Unified Transformer with Inter-Task Contrastive Learning for Visual Dialog

May 03, 2022

Hyperlink-induced Pre-training for Passage Retrieval in Open-domain Question Answering

Apr 12, 2022

CorrectSpeech: A Fully Automated System for Speech Correction and Accent Reduction

Apr 12, 2022

PanGu-Bot: Efficient Generative Dialogue Pre-training from Pre-trained Language Model

Apr 07, 2022

How Pre-trained Language Models Capture Factual Knowledge? A Causal-Inspired Analysis

Mar 31, 2022

Enabling Multimodal Generation on CLIP via Vision-Language Knowledge Distillation

Mar 30, 2022

Compression of Generative Pre-trained Language Models via Quantization

Mar 21, 2022

Wukong: 100 Million Large-scale Chinese Cross-modal Pre-training Dataset and A Foundation Framework

Mar 10, 2022