Olga Vechtomova

Stylized Text Generation Using Wasserstein Autoencoders with a Mixture of Gaussian Prior
Nov 10, 2019
Amirpasha Ghabussi, Lili Mou, Olga Vechtomova

Dynamic Fusion for Multimodal Data
Nov 10, 2019
Gaurav Sahu, Olga Vechtomova

Conditional Response Generation Using Variational Alignment
Nov 10, 2019
Kashif Khan, Gaurav Sahu, Vikash Balasubramanian, Lili Mou, Olga Vechtomova

Generating Sentences from Disentangled Syntactic and Semantic Spaces
Jul 06, 2019
Yu Bao, Hao Zhou, Shujian Huang, Lei Li, Lili Mou, Olga Vechtomova, Xinyu Dai, Jiajun Chen

Distilling Task-Specific Knowledge from BERT into Simple Neural Networks
Mar 28, 2019
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, Jimmy Lin

Generating lyrics with variational autoencoder and multi-modal artist embeddings
Dec 20, 2018
Olga Vechtomova, Hareesh Bahuleyan, Amirpasha Ghabussi, Vineet John

Disentangled Representation Learning for Non-Parallel Text Style Transfer
Sep 11, 2018
Vineet John, Lili Mou, Hareesh Bahuleyan, Olga Vechtomova

Probabilistic Natural Language Generation with Wasserstein Autoencoders
Jun 22, 2018
Hareesh Bahuleyan, Lili Mou, Kartik Vamaraju, Hao Zhou, Olga Vechtomova

Variational Attention for Sequence-to-Sequence Models
Jun 21, 2018
Hareesh Bahuleyan, Lili Mou, Olga Vechtomova, Pascal Poupart

Sentiment Analysis on Financial News Headlines using Training Dataset Augmentation
Jul 29, 2017
Vineet John, Olga Vechtomova
