
Ming Zhou

Department of Pathology, UT Southwestern Medical Center, Dallas, TX, USA

Tell Me How to Ask Again: Question Data Augmentation with Controllable Rewriting in Continuous Space (Oct 04, 2020)

GraphCodeBERT: Pre-training Code Representations with Data Flow (Sep 29, 2020)

CodeBLEU: A Method for Automatic Evaluation of Code Synthesis (Sep 27, 2020)

Continuous Speech Separation with Conformer (Aug 13, 2020)

InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training (Jul 15, 2020)

Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder (Jun 15, 2020)

M3P: Learning Universal Representations via Multitask Multilingual Multimodal Pre-training (Jun 04, 2020)

DocBank: A Benchmark Dataset for Document Layout Analysis (Jun 01, 2020)

Document Modeling with Graph Attention Networks for Multi-grained Machine Reading Comprehension (May 13, 2020)

Leveraging Declarative Knowledge in Text and First-Order Logic for Fine-Grained Propaganda Detection (Apr 29, 2020)