
Tie-Yan Liu

Convergence of Distributed Stochastic Variance Reduced Methods without Sampling Extra Data

May 29, 2019

FastSpeech: Fast, Robust and Controllable Text to Speech

May 29, 2019

Beyond Exponentially Discounted Sum: Automatic Learning of Return Function

May 28, 2019

Learning Efficient and Effective Exploration Policies with Counterfactual Meta Policy

May 28, 2019

Soft Contextual Data Augmentation for Neural Machine Translation

May 25, 2019

Almost Unsupervised Text to Speech and Automatic Speech Recognition

May 22, 2019

MASS: Masked Sequence to Sequence Pre-training for Language Generation

May 13, 2019

Adaptive Regret of Convex and Smooth Functions

May 09, 2019

Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion

Apr 06, 2019

Training Over-parameterized Deep ResNet Is almost as Easy as Training a Two-layer Network

Mar 17, 2019