
Lili Mou

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks

Mar 28, 2019

CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling

Nov 14, 2018

A Grammar-Based Structural CNN Decoder for Code Generation

Nov 14, 2018

Progressive Memory Banks for Incremental Domain Adaptation

Nov 01, 2018

Hierarchical RNN with Static Sentence-Level Attention for Text-Based Speaker Change Detection

Sep 28, 2018

Towards Neural Speaker Modeling in Multi-Party Conversation: The Task, Dataset, and Models

Sep 28, 2018

Disentangled Representation Learning for Non-Parallel Text Style Transfer

Sep 11, 2018

JUMPER: Learning When to Make Classification Decisions in Reading

Jul 06, 2018

Probabilistic Natural Language Generation with Wasserstein Autoencoders

Jun 22, 2018

Variational Attention for Sequence-to-Sequence Models

Jun 21, 2018