
Tie-Yan Liu

Reinforcement Learning with Dynamic Boltzmann Softmax Updates

Mar 15, 2019
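The title above builds on the standard Boltzmann softmax operator, which replaces the hard max over action values with a temperature-weighted average. As a point of reference (this is the generic operator, not the paper's dynamic update schedule), it can be sketched as:

```python
import numpy as np

def boltzmann_softmax(q, beta):
    """Boltzmann softmax operator: a softmax-weighted average of action
    values q at inverse temperature beta."""
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(beta * (q - np.max(q)))
    w /= w.sum()
    return float(np.dot(w, q))
```

As beta grows the operator approaches the hard max; at beta = 0 it reduces to the plain mean of the action values.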

Positively Scale-Invariant Flatness of ReLU Neural Networks

Mar 06, 2019

A Cooperative Multi-Agent Reinforcement Learning Framework for Resource Balancing in Complex Logistics Network

Mar 02, 2019

Multilingual Neural Machine Translation with Knowledge Distillation

Feb 28, 2019
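For context on the title above: knowledge distillation in its standard form (Hinton et al.'s soft-label objective) trains a student to match a teacher's temperature-softened output distribution. A minimal numpy sketch of that generic loss (not this paper's multilingual setup) looks like:

```python
import numpy as np

def softmax(z):
    # Stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the temperature-softened teacher and student
    distributions, averaged over the batch."""
    p_t = softmax(teacher_logits / temperature)
    p_s = softmax(student_logits / temperature)
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return float(np.mean(kl)) * temperature ** 2
```

When the student exactly matches the teacher the loss is zero; higher temperatures spread probability mass over more classes, exposing the teacher's relative preferences among wrong answers.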

Non-Autoregressive Machine Translation with Auxiliary Regularization

Feb 22, 2019

Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input

Dec 23, 2018

Modeling Local Dependence in Natural Language with Multi-channel Recurrent Neural Networks

Nov 13, 2018

Neural Architecture Optimization

Oct 31, 2018

Learning to Teach with Dynamic Loss Functions

Oct 29, 2018

$\mathcal{G}$-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space

Oct 09, 2018