
Ming Zhou

Department of Pathology, UT Southwestern Medical Center, Dallas, TX, USA

STEP: Sequence-to-Sequence Transformer Pre-training for Document Summarization

Apr 04, 2020

XGPT: Cross-modal Generative Pre-Training for Image Captioning

Mar 04, 2020

UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training

Feb 28, 2020

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers

Feb 25, 2020

ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training

Feb 22, 2020

CodeBERT: A Pre-Trained Model for Programming and Natural Languages

Feb 19, 2020

LayoutLM: Pre-training of Text and Layout for Document Image Understanding

Feb 19, 2020

UniViLM: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation

Feb 15, 2020

Self-Adversarial Learning with Comparative Discrimination for Text Generation

Feb 12, 2020

BERT-of-Theseus: Compressing BERT by Progressive Module Replacing

Feb 10, 2020