
Mingxuan Wang


CoBERT: Self-Supervised Speech Representation Learning Through Code Representation Learning

Oct 08, 2022

PARAGEN: A Parallel Generation Toolkit

Oct 07, 2022

Zero-shot Domain Adaptation for Neural Machine Translation with Retrieved Phrase-level Prompts

Sep 23, 2022

Leveraging Pseudo-labeled Data to Improve Direct Speech-to-Speech Translation

May 18, 2022

Cross-modal Contrastive Learning for Speech Translation

May 05, 2022

GigaST: A 10,000-hour Pseudo Speech Translation Corpus

Apr 08, 2022

A Roadmap for Big Model

Apr 02, 2022

STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation

Mar 20, 2022

Unified Multimodal Punctuation Restoration Framework for Mixed-Modality Corpus

Jan 24, 2022

LightSeq2: Accelerated Training for Transformer-based Models on GPUs

Oct 27, 2021