Lizhen Qu

Document Flattening: Beyond Concatenating Context for Document-Level Neural Machine Translation

Feb 16, 2023

When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

Dec 20, 2022

Let's Negotiate! A Survey of Negotiation Dialogue Systems

Dec 18, 2022

Learning Object-Language Alignments for Open-Vocabulary Object Detection

Nov 27, 2022

ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities

Oct 11, 2022

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

Mar 15, 2022

Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation

Nov 10, 2021

Simple or Complex? Complexity-Controllable Question Generation with Soft Templates and Deep Mixture of Experts Model

Oct 13, 2021

Total Recall: a Customized Continual Learning Method for Neural Semantic Parsers

Sep 15, 2021

Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs

Aug 29, 2021