
Lili Mou

Discrete Optimization for Unsupervised Sentence Summarization with Word-Level Extraction

May 04, 2020
Raphael Schumann, Lili Mou, Yao Lu, Olga Vechtomova, Katja Markert


How Chaotic Are Recurrent Neural Networks?

Apr 28, 2020
Pourya Vakilipourtakalou, Lili Mou


TreeGen: A Tree-Based Transformer Architecture for Code Generation

Nov 28, 2019
Zeyu Sun, Qihao Zhu, Yingfei Xiong, Yican Sun, Lili Mou, Lu Zhang


Stylized Text Generation Using Wasserstein Autoencoders with a Mixture of Gaussian Prior

Nov 10, 2019
Amirpasha Ghabussi, Lili Mou, Olga Vechtomova


Conditional Response Generation Using Variational Alignment

Nov 10, 2019
Kashif Khan, Gaurav Sahu, Vikash Balasubramanian, Lili Mou, Olga Vechtomova


Unsupervised Paraphrasing by Simulated Annealing

Sep 10, 2019
Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, Sen Song


Generating Sentences from Disentangled Syntactic and Semantic Spaces

Jul 06, 2019
Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xinyu Dai, Jiajun Chen


An Imitation Learning Approach to Unsupervised Parsing

Jun 05, 2019
Bowen Li, Lili Mou, Frank Keller


Distilling Task-Specific Knowledge from BERT into Simple Neural Networks

Mar 28, 2019
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin
