Haichao Zhu

Learning to Tokenize for Generative Retrieval

Apr 09, 2023
Weiwei Sun, Lingyong Yan, Zheng Chen, Shuaiqiang Wang, Haichao Zhu, Pengjie Ren, Zhumin Chen, Dawei Yin, Maarten de Rijke, Zhaochun Ren

Conventional document retrieval techniques are mainly based on the index-retrieve paradigm. It is challenging to optimize pipelines based on this paradigm in an end-to-end manner. As an alternative, generative retrieval represents documents as identifiers (docids) and retrieves documents by generating docids, enabling end-to-end modeling of document retrieval tasks. However, it remains an open question how document identifiers should be defined. Current approaches rely on fixed, rule-based docids, such as the title of a document or the result of clustering BERT embeddings, which often fail to capture the complete semantic information of a document. We propose GenRet, a document tokenization learning method that addresses the challenge of defining document identifiers for generative retrieval. GenRet learns to tokenize documents into short discrete representations (i.e., docids) via a discrete auto-encoding approach. GenRet has three components: (i) a tokenization model that produces docids for documents; (ii) a reconstruction model that learns to reconstruct a document from its docid; and (iii) a sequence-to-sequence retrieval model that generates relevant document identifiers directly for a given query. By using an auto-encoding framework, GenRet learns semantic docids in a fully end-to-end manner. We also develop a progressive training scheme to capture the autoregressive nature of docids and to stabilize training. We conduct experiments on the NQ320K, MS MARCO, and BEIR datasets to assess the effectiveness of GenRet. GenRet establishes a new state of the art on the NQ320K dataset. In particular, compared to generative retrieval baselines, GenRet achieves significant improvements on unseen documents. GenRet also outperforms comparable baselines on MS MARCO and BEIR, demonstrating the method's generalizability.
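
The abstract above describes GenRet's three components at a high level. Below is a minimal sketch, in plain PyTorch, of the discrete auto-encoding idea behind document tokenization; the class names, dimensions, per-position codebooks, and scoring head are illustrative assumptions rather than the authors' implementation, and the sequence-to-sequence retrieval model is omitted.

```python
import torch
import torch.nn as nn

class DocTokenizer(nn.Module):
    """Tokenization model: maps a document embedding to a short discrete docid."""
    def __init__(self, hidden=768, codebook_size=512, docid_len=4):
        super().__init__()
        self.docid_len = docid_len
        # One codebook per docid position (an assumption; codebooks could be shared).
        self.codebooks = nn.Parameter(torch.randn(docid_len, codebook_size, hidden))
        self.proj = nn.Linear(hidden, hidden)

    def forward(self, doc_emb):                         # doc_emb: (batch, hidden)
        h = self.proj(doc_emb)
        docid, quantized = [], []
        for t in range(self.docid_len):
            scores = h @ self.codebooks[t].T            # (batch, codebook_size)
            code = scores.argmax(dim=-1)                # discrete token at position t
            docid.append(code)
            quantized.append(self.codebooks[t][code])   # (batch, hidden)
        # Training through argmax needs a straight-through or Gumbel-softmax trick,
        # omitted here for brevity.
        return torch.stack(docid, dim=1), torch.stack(quantized, dim=1)

class Reconstructor(nn.Module):
    """Reconstruction model: scores how well a quantized docid explains the document."""
    def __init__(self, hidden=768):
        super().__init__()
        self.score = nn.Bilinear(hidden, hidden, 1)

    def forward(self, quantized, doc_emb):              # quantized: (batch, docid_len, hidden)
        return self.score(quantized.mean(dim=1), doc_emb).squeeze(-1)

# A seq2seq retrieval model (e.g., T5) would then be trained to generate these
# docid tokens from a query; that component is omitted here.
```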

VEM$^2$L: A Plug-and-play Framework for Fusing Text and Structure Knowledge on Sparse Knowledge Graph Completion

Jul 12, 2022
Tao He, Tianwen Jiang, Zihao Zheng, Haichao Zhu, Jingrun Zhang, Ming Liu, Sendong Zhao, Bing Qin

Knowledge graph completion has been widely studied to fill in missing elements of triples, mainly by modeling graph structural features, but its performance is sensitive to the sparsity of the graph structure. Related texts, such as entity names and descriptions, act as another expression form of knowledge graphs (KGs) and are expected to alleviate this challenge. Several methods have been proposed that exploit both structure and text with two encoders, but they achieve only limited improvements because they fail to balance the weights between the two, and retaining both the structural and the textual encoder during inference incurs a heavy parameter overhead. Motivated by knowledge distillation, we view knowledge as mappings from inputs to output probabilities and propose VEM2L, a plug-and-play framework over sparse KGs that fuses the knowledge extracted from text and structure into a unified model. Specifically, we partition the knowledge acquired by a model into two non-overlapping parts: one part concerns the fitting capacity on the training triples, which can be fused by encouraging the two encoders to learn from each other on the training set; the other reflects the generalization ability on unobserved queries. Accordingly, we propose a fusion strategy, justified by the variational EM algorithm, to fuse the generalization ability of the models, during which we also apply graph densification to further alleviate the sparsity problem. Combining these two fusion methods yields the final VEM2L framework. Detailed theoretical analysis, together with quantitative and qualitative experiments, demonstrates the effectiveness and efficiency of the proposed framework.
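
A rough sketch of the first fusion component described above, in which the structure-based and text-based encoders learn from each other on the training triples via mutual distillation. The loss shape, temperature, and weighting are assumptions; the variational-EM-based fusion of generalization ability and the graph densification step are not shown.

```python
import torch
import torch.nn.functional as F

def mutual_fusion_loss(struct_logits, text_logits, labels, alpha=0.5, tau=2.0):
    """Cross-entropy on the training triples plus a symmetric KL term through which
    the structure-based and text-based encoders exchange fitted knowledge."""
    ce = F.cross_entropy(struct_logits, labels) + F.cross_entropy(text_logits, labels)
    log_p_struct = F.log_softmax(struct_logits / tau, dim=-1)
    log_p_text = F.log_softmax(text_logits / tau, dim=-1)
    kl = F.kl_div(log_p_struct, log_p_text.exp(), reduction="batchmean") \
       + F.kl_div(log_p_text, log_p_struct.exp(), reduction="batchmean")
    return ce + alpha * (tau ** 2) * kl

# Example: 8 queries scored against 1000 candidate entities by each encoder.
struct_logits = torch.randn(8, 1000)
text_logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
loss = mutual_fusion_loss(struct_logits, text_logits, labels)
```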

* 13 pages, 5 figures 

Distilled Dual-Encoder Model for Vision-Language Understanding

Dec 16, 2021
Zekun Wang, Wenhui Wang, Haichao Zhu, Ming Liu, Bing Qin, Furu Wei

We propose a cross-modal attention distillation framework to train a dual-encoder model for vision-language understanding tasks, such as visual reasoning and visual question answering. Dual-encoder models have a faster inference speed than fusion-encoder models and allow image and text representations to be pre-computed. However, the shallow interaction module used in dual-encoder models is insufficient for complex vision-language understanding tasks. To learn deep interactions between images and text, we introduce cross-modal attention distillation, which uses the image-to-text and text-to-image attention distributions of a fusion-encoder model to guide the training of our dual-encoder model. In addition, we show that applying cross-modal attention distillation in both the pre-training and fine-tuning stages achieves further improvements. Experimental results demonstrate that the distilled dual-encoder model achieves competitive performance on visual reasoning, visual entailment, and visual question answering tasks while enjoying a much faster inference speed than fusion-encoder models. Our code and models will be publicly available at https://github.com/kugwzk/Distilled-DualEncoder.
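
A hedged sketch of the cross-modal attention distillation objective described above, assuming the student's image-to-text and text-to-image attentions are matched to the teacher's with a KL term; the exact layers, heads, and weighting used in the paper are not reproduced here and the function names are placeholders.

```python
import torch
import torch.nn.functional as F

def cross_attention(q, k):
    """Scaled dot-product attention distribution from q over k."""
    return F.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)

def attention_distill_loss(img_s, txt_s, img_t, txt_t):
    """img_*/txt_*: (batch, seq_len, dim) hidden states of the student (s) dual encoder
    and the teacher (t) fusion encoder, computed on the same image-text inputs."""
    i2t_s, t2i_s = cross_attention(img_s, txt_s), cross_attention(txt_s, img_s)
    with torch.no_grad():                          # the teacher provides fixed targets
        i2t_t, t2i_t = cross_attention(img_t, txt_t), cross_attention(txt_t, img_t)
    return F.kl_div(i2t_s.log(), i2t_t, reduction="batchmean") \
         + F.kl_div(t2i_s.log(), t2i_t, reduction="batchmean")
```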

* Work in progress 

HandAugment: A Simple Data Augmentation Method for Depth-Based 3D Hand Pose Estimation

Jan 23, 2020
Zhaohui Zhang, Shipeng Xie, Mingxiu Chen, Haichao Zhu

Hand pose estimation from 3D depth images has been explored widely using various techniques in computer vision. Although deep learning based methods have greatly improved performance recently, the problem remains unsolved, largely due to the lack of large datasets such as ImageNet and of effective data synthesis methods. In this paper, we propose HandAugment, a method that synthesizes image data to augment the training of neural networks. Our method has two main parts. First, we propose a two-stage neural network scheme that makes the networks focus on the hand regions and thus improves performance. Second, we introduce a simple and effective method to synthesize data by combining real and synthetic images in the image space. Finally, we show that our method achieves first place in the depth-based 3D hand pose estimation task of the HANDS 2019 challenge.
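
The abstract states that real and synthetic images are combined in image space but does not spell out the mixing rule; the snippet below is one plausible reading for depth maps, keeping the nearer non-zero depth at each pixel so the synthetic hand occludes the real background. Function and argument names are hypothetical.

```python
import numpy as np

def combine_depth_patches(real: np.ndarray, synthetic: np.ndarray) -> np.ndarray:
    """real, synthetic: (H, W) depth maps in millimetres; 0 marks missing measurements."""
    real_d = np.where(real == 0, np.inf, real.astype(np.float64))
    synth_d = np.where(synthetic == 0, np.inf, synthetic.astype(np.float64))
    merged = np.minimum(real_d, synth_d)        # the nearer surface wins at each pixel
    merged[np.isinf(merged)] = 0                # restore the "missing" marker
    return merged.astype(real.dtype)
```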

* 7 pages, 6 figures 

HandAugment: A Simple Data Augmentation for HANDS19 Challenge Task 1 -- Depth-Based 3D Hand Pose Estimation

Jan 03, 2020
Zhaohui Zhang, Shipeng Xie, Mingxiu Chen, Haichao Zhu

Hand pose estimation from 3D depth images has been explored widely using various techniques in computer vision, yet the problem remains unsolved. In this paper we present HandAugment, a simple data augmentation method for depth-based 3D hand pose estimation. HandAugment consists of two stages of neural networks. The first stage extracts hand patches and estimates initial hand poses from the depth images in an iterative fashion, which helps filter out outlier patches (e.g., arms and backgrounds). The extracted patches and initial hand poses are then fed into the second-stage network to obtain the final hand poses. This two-stage strategy greatly improves the accuracy of hand pose estimation. Finally, our method achieves first place in the depth-based 3D hand pose estimation task of the HANDS19 challenge.
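
A minimal sketch of the two-stage inference flow described above; the stage networks and the cropping routine are placeholders passed in as callables, and the number of refinement iterations is an assumption.

```python
def two_stage_estimate(depth_image, stage1, stage2, crop_around, n_iters=2):
    """stage1/stage2: callables mapping a depth patch to a (J, 3) joint array;
    crop_around: callable extracting a hand patch around a 3D center (None = coarse crop)."""
    patch = crop_around(depth_image, center=None)          # initial coarse hand patch
    for _ in range(n_iters):                               # iterative patch refinement
        pose = stage1(patch)
        patch = crop_around(depth_image, center=pose.mean(axis=0))
    return stage2(patch)                                   # final hand pose
```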

* 5 pages, 4 figures 

Transforming Wikipedia into Augmented Data for Query-Focused Summarization

Nov 08, 2019
Haichao Zhu, Li Dong, Furu Wei, Bing Qin, Ting Liu

The manual construction of a query-focused summarization corpus is costly and time-consuming. The limited size of existing datasets makes training data-driven summarization models challenging. In this paper, we use Wikipedia to automatically collect a large query-focused summarization dataset (named WIKIREF) of more than 280,000 examples, which can serve as a means of data augmentation. Moreover, we develop a BERT-based query-focused summarization model to extract summaries from the documents. Experimental results on three DUC benchmarks show that the model pre-trained on WIKIREF already achieves reasonable performance. After fine-tuning on the specific datasets, the model with data augmentation outperforms the state of the art on the benchmarks.
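
A rough sketch of BERT-based extractive, query-focused summarization in the spirit of the model described above: each query-sentence pair is encoded and the top-scoring sentences are kept. The scoring head, checkpoint, and selection rule are assumptions; the paper's actual architecture and training objective are not detailed in the abstract.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
scorer = torch.nn.Linear(encoder.config.hidden_size, 1)    # untrained, illustrative head

def summarize(query: str, sentences: list, k: int = 3) -> list:
    batch = tokenizer([query] * len(sentences), sentences,
                      padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        cls = encoder(**batch).last_hidden_state[:, 0]     # [CLS] per query-sentence pair
        scores = scorer(cls).squeeze(-1)
    top = scores.topk(min(k, len(sentences))).indices.tolist()
    return [sentences[i] for i in sorted(top)]             # keep document order
```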

Learning to Ask Unanswerable Questions for Machine Reading Comprehension

Jun 14, 2019
Haichao Zhu, Li Dong, Furu Wei, Wenhui Wang, Bing Qin, Ting Liu

Machine reading comprehension with unanswerable questions is a challenging task. In this work, we propose a data augmentation technique that automatically generates relevant unanswerable questions from an answerable question paired with the paragraph containing its answer. We introduce a pair-to-sequence model for unanswerable question generation, which effectively captures the interactions between the question and the paragraph. We also present a way to construct training data for our question generation models by leveraging existing reading comprehension datasets. Experimental results show that the pair-to-sequence model performs consistently better than the sequence-to-sequence baseline. We further use the automatically generated unanswerable questions as a means of data augmentation on the SQuAD 2.0 dataset, yielding an absolute F1 improvement of 1.9 with the BERT-base model and 1.7 with the BERT-large model.
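
A sketch of the pair-to-sequence encoder shape suggested by the abstract: the answerable question and its paragraph are encoded separately and then interact through cross-attention before decoding. The recurrent encoders, attention configuration, and dimensions are assumptions, not the paper's architecture; the decoder is omitted.

```python
import torch
import torch.nn as nn

class PairEncoder(nn.Module):
    def __init__(self, vocab_size=30000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.q_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.p_enc = nn.GRU(hidden, hidden, batch_first=True)
        self.cross = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)

    def forward(self, question_ids, paragraph_ids):
        q, _ = self.q_enc(self.embed(question_ids))
        p, _ = self.p_enc(self.embed(paragraph_ids))
        # Question states attend over paragraph states (the "interaction" step).
        fused, _ = self.cross(query=q, key=p, value=p)
        return torch.cat([q, fused], dim=-1)   # fed to a seq2seq decoder (omitted)
```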

* ACL 2019 (long paper) 

Real-time Deep Video Deinterlacing

Aug 01, 2017
Haichao Zhu, Xueting Liu, Xiangyu Mao, Tien-Tsin Wong

Interlacing is a technique widely used in television broadcasting and video recording to double the perceived frame rate without increasing the bandwidth, but it introduces annoying visual artifacts, such as flickering and silhouette "serration," during playback. Existing state-of-the-art deinterlacing methods either ignore temporal information to achieve real-time performance at the cost of lower visual quality, or estimate motion for better deinterlacing at a higher computational cost. In this paper, we present the first deep convolutional neural network (DCNN) based method that deinterlaces with both high visual quality and real-time performance. Unlike existing models for super-resolution, which rely on a translation-invariant assumption, our DCNN model uses the temporal information from both the odd and even half frames to reconstruct only the missing scanlines, and retains the given odd and even scanlines when producing the full deinterlaced frames. By further introducing a layer-sharable architecture, our system achieves real-time performance on a single GPU. Experiments show that our method outperforms all existing methods in terms of reconstruction accuracy and computational performance.
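
A minimal sketch of the reconstruction strategy described above, in which only the missing scanlines are predicted and the transmitted scanlines are copied through unchanged; the prediction network itself is a placeholder callable, and the field-order convention is an assumption.

```python
import torch

def deinterlace_field(frame, predict_missing, keep_top_field=True):
    """frame: (C, H, W) interlaced frame; predict_missing: network mapping the retained
    (C, H // 2, W) scanlines to the (C, H // 2, W) missing ones."""
    out = frame.clone()
    retained = frame[:, 0::2] if keep_top_field else frame[:, 1::2]
    missing = predict_missing(retained)        # reconstruct only the absent scanlines
    if keep_top_field:
        out[:, 1::2] = missing                 # fill the gaps; retained lines pass through
    else:
        out[:, 0::2] = missing
    return out
```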

* 9 pages, 11 figures 