Lili Mou

CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling

Nov 14, 2018
Ning Miao, Hao Zhou, Lili Mou, Rui Yan, Lei Li

A Grammar-Based Structural CNN Decoder for Code Generation

Nov 14, 2018
Zeyu Sun, Qihao Zhu, Lili Mou, Yingfei Xiong, Ge Li, Lu Zhang

Progressive Memory Banks for Incremental Domain Adaptation

Nov 01, 2018
Nabiha Asghar, Lili Mou, Kira A. Selby, Kevin D. Pantasdo, Pascal Poupart, Xin Jiang

Hierarchical RNN with Static Sentence-Level Attention for Text-Based Speaker Change Detection

Sep 28, 2018
Zhao Meng, Lili Mou, Zhi Jin

Towards Neural Speaker Modeling in Multi-Party Conversation: The Task, Dataset, and Models

Sep 28, 2018
Zhao Meng, Lili Mou, Zhi Jin

Disentangled Representation Learning for Non-Parallel Text Style Transfer

Sep 11, 2018
Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova

JUMPER: Learning When to Make Classification Decisions in Reading

Jul 06, 2018
Xianggen Liu, Lili Mou, Haotian Cui, Zhengdong Lu, Sen Song

Probabilistic Natural Language Generation with Wasserstein Autoencoders

Jun 22, 2018
Hareesh Bahuleyan, Lili Mou, Kartik Vamaraju, Hao Zhou, Olga Vechtomova

Variational Attention for Sequence-to-Sequence Models

Jun 21, 2018
Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart

Modeling Past and Future for Neural Machine Translation

Dec 26, 2017
Zaixiang Zheng, Hao Zhou, Shujian Huang, Lili Mou, Xinyu Dai, Jiajun Chen, Zhaopeng Tu
