Lizhen Qu

Learning Object-Language Alignments for Open-Vocabulary Object Detection

Nov 27, 2022
Chuang Lin, Peize Sun, Yi Jiang, Ping Luo, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan, Jianfei Cai

Existing object detection methods are constrained to a fixed vocabulary by costly labeled data. When dealing with novel categories, the model has to be retrained with additional bounding-box annotations. Natural language supervision is an attractive alternative because it is annotation-free and covers a broader range of object concepts. However, learning open-vocabulary object detection from language is challenging since image-text pairs do not contain fine-grained object-language alignments. Previous solutions rely on either expensive grounding annotations or distilling classification-oriented vision models. In this paper, we propose a novel open-vocabulary object detection framework that learns directly from image-text pair data. We formulate object-language alignment as a set matching problem between a set of image region features and a set of word embeddings, which enables us to train an open-vocabulary object detector on image-text pairs in a much simpler and more effective way. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance over competing approaches on novel categories, e.g., achieving 32.0% mAP on COCO and 21.7% mask mAP on LVIS. Code is available at: https://github.com/clin1223/VLDet.
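
As a rough illustration of the set-matching idea, the sketch below pairs a handful of region features with word embeddings via the Hungarian algorithm; the feature dimensions, cosine-similarity cost, and random inputs are illustrative assumptions, not VLDet's actual alignment objective.

```python
# Hedged sketch: treat region-word alignment as bipartite set matching
# (hypothetical shapes and cost; not the VLDet implementation).
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
regions = rng.normal(size=(5, 512))   # stand-in region features from a detector
words = rng.normal(size=(3, 512))     # stand-in word embeddings from a caption

# Cosine similarity as the alignment score; higher means a better region-word match.
regions_n = regions / np.linalg.norm(regions, axis=1, keepdims=True)
words_n = words / np.linalg.norm(words, axis=1, keepdims=True)
scores = regions_n @ words_n.T        # (num_regions, num_words)

# The Hungarian solver minimizes cost, so negate the similarity matrix.
row_idx, col_idx = linear_sum_assignment(-scores)
for r, w in zip(row_idx, col_idx):
    print(f"region {r} <-> word {w}, score {scores[r, w]:.3f}")
```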

* Technical Report 

ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities

Oct 11, 2022
Terry Yue Zhuo, Yaqing Liao, Yuecheng Lei, Lizhen Qu, Gerard de Melo, Xiaojun Chang, Yazhou Ren, Zenglin Xu

We introduce ViLPAct, a novel vision-language benchmark for human activity planning. It is designed for a task in which embodied AI agents must reason about and forecast the future actions of humans based on video clips of their initial activities and their intents expressed in text. The dataset consists of 2.9k videos from Charades extended with intents via crowdsourcing, a multiple-choice question test set, and four strong baselines. One of the baselines implements a neurosymbolic approach based on a multimodal knowledge base (MKB), while the others are deep generative models adapted from recent state-of-the-art (SOTA) methods. According to our extensive experiments, the key challenges are compositional generalization and the effective use of information from both modalities.

Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation

Mar 15, 2022
Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari

In this paper, we propose a variational autoencoder with disentanglement priors, VAE-DPRIOR, for conditional natural language generation with no or only a handful of task-specific labeled examples. To improve compositional generalization, our model performs disentangled representation learning by introducing one prior for the latent content space and another prior for the latent label space. We show both empirically and theoretically that the conditional priors can already disentangle representations even without the specific regularizations used in prior work. We can also sample diverse content representations from the content space without accessing data of the seen tasks, and fuse them with the representations of novel tasks to generate diverse texts in low-resource settings. Our extensive experiments demonstrate the superior performance of our model over competitive baselines in terms of i) data augmentation in continuous zero/few-shot learning, and ii) text style transfer in both zero-shot and few-shot settings.
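
To make the two-prior construction concrete, here is a minimal sketch of a VAE with separate content and label latent spaces, each regularized by its own KL term; the layer sizes, standard-normal priors, and reconstruction loss are simplifying assumptions rather than the VAE-DPRIOR architecture.

```python
# Hedged sketch of a conditional VAE with two latent spaces (content and label),
# each pulled toward its own prior; sizes and priors are illustrative, not VAE-DPRIOR.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoPriorVAE(nn.Module):
    def __init__(self, d_in=256, d_z=32):
        super().__init__()
        self.enc_content = nn.Linear(d_in, 2 * d_z)   # mu and logvar for the content latent
        self.enc_label = nn.Linear(d_in, 2 * d_z)     # mu and logvar for the label latent
        self.dec = nn.Linear(2 * d_z, d_in)           # decode from the fused latents

    @staticmethod
    def reparam(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        mu_c, lv_c = self.enc_content(x).chunk(2, dim=-1)
        mu_l, lv_l = self.enc_label(x).chunk(2, dim=-1)
        z_c, z_l = self.reparam(mu_c, lv_c), self.reparam(mu_l, lv_l)
        recon = self.dec(torch.cat([z_c, z_l], dim=-1))

        def kl(mu, lv):  # KL(q || N(0, I)); the paper instead learns conditional priors
            return -0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp(), dim=-1)

        return F.mse_loss(recon, x) + (kl(mu_c, lv_c) + kl(mu_l, lv_l)).mean()

x = torch.randn(4, 256)   # a batch of hypothetical sentence encodings
print(TwoPriorVAE()(x).item())
```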

* 11 pages 

Multimodal Transformer with Variable-length Memory for Vision-and-Language Navigation

Nov 10, 2021
Chuang Lin, Yi Jiang, Jianfei Cai, Lizhen Qu, Gholamreza Haffari, Zehuan Yuan

Vision-and-Language Navigation (VLN) is a task in which an agent is required to follow a language instruction to navigate to a goal position, relying on ongoing interactions with the environment while moving. Recent Transformer-based VLN methods have made great progress by directly connecting visual observations and the language instruction via the multimodal cross-attention mechanism. However, these methods usually represent temporal context as a fixed-length vector by using an LSTM decoder or manually designed hidden states to build a recurrent Transformer. Since a single fixed-length vector is often insufficient to capture long-term temporal context, in this paper we introduce the Multimodal Transformer with Variable-length Memory (MTVM) for visually grounded natural language navigation, which models temporal context explicitly. Specifically, MTVM enables the agent to keep track of the navigation trajectory by directly storing previous activations in a memory bank. To further boost performance, we propose a memory-aware consistency loss that helps learn a better joint representation of temporal context with randomly masked instructions. We evaluate MTVM on the popular R2R and CVDN datasets: our model improves Success Rate on the R2R unseen validation and test sets by 2% each, and improves Goal Progress by 1.6m on the CVDN test set.
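
A minimal sketch of the variable-length memory idea is given below: each step's fused observation is appended to a memory bank, and the current observation cross-attends over all stored activations. The dimensions, single attention layer, and random inputs are assumptions for illustration, not the MTVM model.

```python
# Hedged sketch: a memory bank of previous activations that grows with the trajectory
# and is read via cross-attention at every navigation step (not the MTVM code).
import torch
import torch.nn as nn

d = 64
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
memory = []                                    # grows by one entry per step

for step in range(5):
    obs = torch.randn(1, 1, d)                 # current multimodal observation token
    if memory:
        bank = torch.cat(memory, dim=1)        # (1, step, d) variable-length context
        ctx, _ = attn(query=obs, key=bank, value=bank)
        obs = obs + ctx                        # fuse the current step with stored history
    memory.append(obs.detach())                # store this step's activation for later
    print(f"step {step}: memory length = {len(memory)}")
```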

Simple or Complex? Complexity-Controllable Question Generation with Soft Templates and Deep Mixture of Experts Model

Oct 13, 2021
Sheng Bi, Xiya Cheng, Yuan-Fang Li, Lizhen Qu, Shirong Shen, Guilin Qi, Lu Pan, Yinlin Jiang

The ability to generate natural-language questions with controlled complexity levels is highly desirable, as it further expands the applicability of question generation. In this paper, we propose an end-to-end neural complexity-controllable question generation model, which incorporates a mixture of experts (MoE) as the selector of soft templates to improve the accuracy of complexity control and the quality of generated questions. The soft templates capture question similarity while avoiding the expensive construction of actual templates. Our method introduces a novel cross-domain complexity estimator to assess the complexity of a question, taking into account the passage, the question, the answer, and their interactions. Experimental results on two benchmark QA datasets demonstrate that our QG model is superior to state-of-the-art methods in both automatic and manual evaluation. Moreover, our complexity estimator is significantly more accurate than the baselines in both in-domain and out-of-domain settings.
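
The sketch below shows the gist of a mixture-of-experts selector over learned soft templates: a gating network conditioned on the encoded input weights a small bank of template vectors. The number of templates, the dimensions, and the inputs are hypothetical, not the paper's configuration.

```python
# Hedged sketch: soft templates as experts, with a gating network acting as selector.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_templates, d = 8, 128
templates = nn.Parameter(torch.randn(n_templates, d))   # learned soft templates (experts)
gate = nn.Linear(d, n_templates)                         # selector conditioned on the input

enc = torch.randn(2, d)                                  # assumed encoding of passage + answer
weights = F.softmax(gate(enc), dim=-1)                   # (batch, n_templates) mixture weights
selected = weights @ templates                           # weighted mix of soft templates
print(selected.shape)                                    # would feed the question decoder
```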

* Accepted to Findings of EMNLP 2021 

Total Recall: a Customized Continual Learning Method for Neural Semantic Parsers

Sep 15, 2021
Zhuang Li, Lizhen Qu, Gholamreza Haffari

This paper investigates continual learning for semantic parsing. In this setting, a neural semantic parser learns tasks sequentially without accessing the full training data of previous tasks. Directly applying SOTA continual learning algorithms to this problem fails to achieve performance comparable to re-training a model on all seen tasks, because these algorithms do not consider the special properties of the structured outputs produced by semantic parsers. Therefore, we propose TotalRecall, a continual learning method designed for neural semantic parsers from two aspects: i) a sampling method for memory replay that diversifies logical form templates and balances the distribution of parse actions in the memory; ii) a two-stage training method that significantly improves the parsers' generalization capability across tasks. We conduct extensive experiments to study the research problems involved in continual semantic parsing and demonstrate that a neural semantic parser trained with TotalRecall achieves superior performance to one trained directly with SOTA continual learning algorithms, with a 3-6 times speedup compared to re-training from scratch. Code and datasets are available at: https://github.com/zhuang-li/cl_nsp.
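
As a rough illustration of template-diverse memory replay, the sketch below buckets training examples by a crude logical-form template (digits stripped) and fills the memory round-robin so that rare templates are represented; the template key and the greedy strategy are simplified stand-ins for TotalRecall's actual sampling and action-balancing method.

```python
# Hedged sketch: pick replay examples from as many distinct LF templates as possible.
import random
from collections import defaultdict

def sample_replay_memory(examples, memory_size, seed=0):
    """examples: list of (utterance, logical_form) pairs."""
    rng = random.Random(seed)
    by_template = defaultdict(list)
    for utt, lf in examples:
        # Crude template: the logical form with literal numbers removed.
        template = " ".join(tok for tok in lf.split() if not tok.isdigit())
        by_template[template].append((utt, lf))
    memory = []
    # Round-robin over templates so rare templates still make it into the memory.
    buckets = [rng.sample(v, len(v)) for v in by_template.values()]
    while len(memory) < memory_size and any(buckets):
        for bucket in buckets:
            if bucket and len(memory) < memory_size:
                memory.append(bucket.pop())
    return memory

data = [("book a flight", "( book flight 1 )"), ("list rivers", "( list river )"),
        ("book two flights", "( book flight 2 )")]
print(sample_replay_memory(data, memory_size=2))
```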

* 9 pages, accepted to EMNLP2021 

Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs

Aug 29, 2021
Qiongkai Xu, Xuanli He, Lingjuan Lyu, Lizhen Qu, Gholamreza Haffari

Machine-learning-as-a-service (MLaaS) platforms have attracted millions of users with their sophisticated, high-performing models. Although published as black-box APIs, the valuable models behind these services are still vulnerable to imitation attacks. Recently, a series of works have demonstrated that attackers can steal or extract the victim models. Nonetheless, none of the previously stolen models outperform the original black-box APIs. In this work, we take the first step toward showing that attackers could potentially surpass victims via unsupervised domain adaptation and multi-victim ensemble. Extensive experiments on benchmark datasets and real-world APIs validate that the imitators can succeed in outperforming the original black-box models. We consider this a milestone in the research on imitation attacks, especially against NLP APIs, as the superior performance could influence the defense or even the publishing strategy of API providers.
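
A toy sketch of the multi-victim ensemble idea follows: query several black-box predictors on unlabeled text, take a majority vote as the pseudo-label, and use the result to train the imitation model. The victim functions here are made-up stand-ins rather than real APIs, and the unsupervised domain adaptation step is omitted.

```python
# Hedged sketch: pseudo-label unlabeled text by majority vote over several victim APIs.
from collections import Counter

def victim_a(text):          # hypothetical black-box sentiment API
    return "positive" if "good" in text else "negative"

def victim_b(text):          # a second hypothetical victim with different behaviour
    return "negative" if "bad" in text else "positive"

def ensemble_pseudo_label(text, victims):
    votes = Counter(v(text) for v in victims)
    return votes.most_common(1)[0][0]          # majority vote across victims

unlabeled = ["a good movie", "a bad plot", "nothing special"]
pseudo_labeled = [(t, ensemble_pseudo_label(t, [victim_a, victim_b])) for t in unlabeled]
print(pseudo_labeled)                          # training data for the imitation model
```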

Neural-Symbolic Commonsense Reasoner with Relation Predictors

May 14, 2021
Farhad Moghimifar, Lizhen Qu, Yue Zhuo, Gholamreza Haffari, Mahsa Baktashmotlagh

Commonsense reasoning aims to incorporate sets of commonsense facts, retrieved from Commonsense Knowledge Graphs (CKGs), to draw conclusions about ordinary situations. The dynamic nature of commonsense knowledge calls for models capable of performing multi-hop reasoning over new situations. It also results in large-scale, sparse knowledge graphs, where such a reasoning process is needed to predict relations between new events. However, existing approaches in this area are limited by treating CKGs as a fixed set of facts, rendering them unfit for reasoning over new, unseen situations and events. In this paper, we present a neural-symbolic reasoner capable of reasoning over large-scale dynamic CKGs. The logic rules for reasoning over CKGs are learned by our model during training. In addition to providing interpretable explanations, the learned logic rules help generalise predictions to newly introduced events. Experimental results on the task of link prediction over CKGs demonstrate the effectiveness of our model, which outperforms state-of-the-art models.
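
To give a flavour of rule-based relation prediction, the toy sketch below scores a candidate triple with a weighted two-hop rule over a tiny hand-written graph; the graph, the rule, and its weight are illustrative assumptions, not the relations or rules learned by the model.

```python
# Hedged sketch: score a candidate (head, relation, tail) link with a weighted
# two-hop rule over a toy commonsense graph (illustrative rules and weights only).
graph = {("buy food", "xIntent"): ["to eat"],
         ("to eat", "causes"): ["feel full"]}

# A rule says: relation r holds if its body relations form a path from head to tail.
rules = {"xEffect": [(("xIntent", "causes"), 0.8)]}

def rule_score(head, relation, tail):
    score = 0.0
    for (r1, r2), weight in rules.get(relation, []):
        mids = graph.get((head, r1), [])
        if any(tail in graph.get((mid, r2), []) for mid in mids):
            score += weight                      # rule body is satisfied on the graph
    return score

print(rule_score("buy food", "xEffect", "feel full"))   # 0.8
```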

* ACL2021 

On Robustness of Neural Semantic Parsers

Feb 03, 2021
Shuo Huang, Zhuang Li, Lizhen Qu, Lei Pan

Semantic parsing maps natural language (NL) utterances into logical forms (LFs), which underpins many advanced NLP problems. Semantic parsers gain performance boosts from deep neural networks, but also inherit their vulnerability to adversarial examples. In this paper, we provide an empirical study of the robustness of semantic parsers in the presence of adversarial attacks. Formally, adversarial examples for semantic parsing are considered to be perturbed utterance-LF pairs whose utterances have exactly the same meaning as the original ones. We propose a scalable methodology to construct robustness test sets based on existing benchmark corpora. Our results answer five research questions, measuring state-of-the-art parsers' performance on the robustness test sets and evaluating the effect of data augmentation.
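
As a minimal illustration of building a robustness test set, the sketch below applies meaning-preserving word substitutions to benchmark utterances while keeping their logical forms unchanged; the synonym table and the single perturbation type are toy assumptions, not the paper's full methodology.

```python
# Hedged sketch: perturb utterances while keeping the paired logical form fixed.
synonyms = {"show": "display", "largest": "biggest", "state": "province"}

def perturb(utterance):
    # Swap words for hand-picked synonyms; a stand-in for meaning-preserving perturbations.
    return " ".join(synonyms.get(tok, tok) for tok in utterance.split())

benchmark = [("show the largest state", "( argmax state size )")]
robustness_set = [(perturb(utt), lf) for utt, lf in benchmark]
print(robustness_set)    # perturbed utterance paired with the original logical form
```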

* Long Paper, Accepted to EACL2021 