Nevin L. Zhang

Enhancing Content Preservation in Text Style Transfer Using Reverse Attention and Conditional Layer Normalization

Aug 01, 2021

DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling

Jul 05, 2021

Learning from My Friends: Few-Shot Personalized Conversation Systems via Social Networks

May 21, 2021

Handling Collocations in Hierarchical Latent Tree Analysis for Topic Modeling

Jul 10, 2020

Response-Anticipated Memory for On-Demand Knowledge Integration in Response Generation

May 13, 2020

Not All Attention Is Needed: Gated Attention Network for Sequence Data

Dec 01, 2019

Cleaned Similarity for Better Memory-Based Recommenders

May 17, 2019

Using Taste Groups for Collaborative Filtering

Aug 28, 2018

Matrix Factorization Equals Efficient Co-occurrence Representation

Aug 28, 2018

Sparse Boltzmann Machines with Structure Learning as Applied to Text Analysis

Aug 05, 2018