Qinliang Su

Unsupervised Hashing with Contrastive Information Bottleneck

May 19, 2021
Zexuan Qiu, Qinliang Su, Zijing Ou, Jianxing Yu, Changyou Chen

Many unsupervised hashing methods are implicitly built on the idea of reconstructing the input data, which essentially encourages the hash codes to retain as much information of the original data as possible. However, this requirement may force the models to spend much of their capacity on reconstructing unimportant background information while failing to preserve the discriminative semantic information that matters most for the hashing task. To tackle this problem, inspired by the recent success of contrastive learning in learning continuous representations, we propose to adapt this framework to learn binary hash codes. Specifically, we first modify the objective function to meet the specific requirements of hashing, and then introduce a probabilistic binary representation layer into the model to facilitate end-to-end training. We further prove a strong connection between the proposed contrastive-learning-based hashing method and mutual information, and show that the proposed model can be viewed within the broader framework of the information bottleneck (IB). From this perspective, a more general hashing model is naturally obtained. Extensive experimental results on three benchmark image datasets demonstrate that the proposed hashing method significantly outperforms existing baselines.
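
A rough PyTorch sketch of the two ingredients mentioned above, not the paper's exact formulation: a probabilistic binary layer that samples hash bits with straight-through gradients, and a SimCLR-style contrastive loss applied to the codes of two augmented views. The class and function names, tensor shapes, and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProbBinaryLayer(nn.Module):
    """Map continuous features to {0,1} codes with straight-through gradients."""
    def forward(self, logits):
        probs = torch.sigmoid(logits)
        codes = torch.bernoulli(probs)            # hard 0/1 samples in the forward pass
        return codes + probs - probs.detach()     # gradients flow through the probabilities

def nt_xent(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between the codes of two augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, d)
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool, device=z.device), float('-inf'))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)                   # positive pair sits at the paired view
```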

* IJCAI 2021 

Syntax-Enhanced Pre-trained Model

Dec 28, 2020
Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan, Daxin Jiang

We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, so they suffer from a discrepancy between the two stages. Such a setup also requires human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address these issues, we present a model that utilizes the syntax of text in both the pre-training and fine-tuning stages. Our model is based on a Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance between tokens in the dependency tree. We evaluate the model on three downstream tasks: relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains than local head relations between contiguous tokens.
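
The pre-training task of predicting syntactic distances presupposes pairwise path lengths between tokens in the dependency tree. Below is a minimal sketch of computing such distances from head indices with a breadth-first search; the function name and the head-index convention are assumptions for illustration, not the paper's code.

```python
from collections import deque

def tree_distances(heads):
    """Pairwise path lengths between tokens in a dependency tree.

    heads[i] is the index of token i's head, or -1 for the root.
    Returns dist[i][j] = number of dependency edges on the path from i to j.
    """
    n = len(heads)
    # Build an undirected adjacency list from the head pointers.
    adj = [[] for _ in range(n)]
    for child, head in enumerate(heads):
        if head >= 0:
            adj[child].append(head)
            adj[head].append(child)
    dist = [[0] * n for _ in range(n)]
    for src in range(n):
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, d = queue.popleft()
            dist[src][node] = d
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return dist

# Example: "She reads books" with "reads" as the root.
print(tree_distances([1, -1, 1]))  # [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
```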

Generative Semantic Hashing Enhanced via Boltzmann Machines

Jun 16, 2020
Lin Zheng, Qinliang Su, Dinghan Shen, Changyou Chen

Generative semantic hashing is a promising technique for large-scale information retrieval thanks to its fast retrieval speed and small memory footprint. For tractability of training, existing generative hashing methods mostly assume a factorized form for the posterior distribution, enforcing independence among the bits of a hash code. From the perspectives of both model representation and code space size, however, independence is not always the best assumption. In this paper, to introduce correlations among the bits of hash codes, we propose to employ the distribution of a Boltzmann machine as the variational posterior. To address the intractability of training, we first develop an approximate method to reparameterize the distribution of a Boltzmann machine by augmenting it into a hierarchical concatenation of a Gaussian-like distribution and a Bernoulli distribution. Based on that, an asymptotically exact lower bound on the evidence lower bound (ELBO) is further derived. With these novel techniques, the entire model can be optimized efficiently. Extensive experimental results demonstrate that by effectively modeling correlations among different bits within a hash code, our model achieves significant performance gains.
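
The paper's reparameterization is tied to the Boltzmann machine's coupling matrix, so the snippet below only illustrates the general hierarchical idea, assuming a correlated Gaussian-like draw followed by conditionally independent Bernoulli bits; the function name, shapes, and the example covariance factor are illustrative.

```python
import torch

def sample_correlated_bits(mu, L, n_samples=1):
    """Illustrative hierarchical sampler: correlated Gaussian -> Bernoulli bits.

    mu: (d,) mean of the Gaussian auxiliary variable.
    L:  (d, d) lower-triangular factor, so the covariance is L @ L.T.
    The shared Gaussian draw induces correlations among the binary bits,
    unlike a fully factorized Bernoulli posterior.
    """
    d = mu.size(0)
    eps = torch.randn(n_samples, d)
    g = mu + eps @ L.t()              # g ~ N(mu, L L^T)
    probs = torch.sigmoid(g)          # per-bit activation probabilities
    return torch.bernoulli(probs)     # (n_samples, d) binary codes

# Strongly coupled 2-bit example: bits tend to switch on and off together.
mu = torch.zeros(2)
L = torch.tensor([[1.0, 0.0], [0.9, 0.1]])   # nearly rank-1 covariance
print(sample_correlated_bits(mu, L, n_samples=5))
```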

Discretized Bottleneck in VAE: Posterior-Collapse-Free Sequence-to-Sequence Learning

Apr 22, 2020
Yang Zhao, Ping Yu, Suchismit Mahapatra, Qinliang Su, Changyou Chen

Variational autoencoders (VAEs) are important tools in end-to-end representation learning. They can capture complex data distributions and have been applied extensively to many natural language processing (NLP) tasks. However, a common pitfall in sequence-to-sequence learning with VAEs is posterior collapse in the latent space, wherein the model tends to ignore the latent variables when a strong autoregressive decoder is used. In this paper, we propose a principled approach to eliminate this issue by applying a discretized bottleneck in the latent space. Specifically, we impose a shared discrete latent space in which each input learns to choose a combination of shared latent atoms as its latent representation. Compared with VAEs that employ continuous latent variables, our model better captures the underlying semantics of discrete sequences and can thus provide more interpretable latent structures. Empirically, we demonstrate the efficiency and effectiveness of our model on a broad range of tasks, including language modeling, unaligned text style transfer, dialog response generation, and neural machine translation.
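
Below is a minimal sketch of a shared discrete latent space in the spirit described above, assuming a vector-quantization-style codebook with straight-through gradients; the module name, atom count, and commitment term are illustrative rather than the paper's exact bottleneck.

```python
import torch
import torch.nn as nn

class DiscreteBottleneck(nn.Module):
    """Map encoder outputs to the nearest atom in a shared codebook."""
    def __init__(self, num_atoms=512, dim=256):
        super().__init__()
        self.codebook = nn.Embedding(num_atoms, dim)

    def forward(self, z_e):                       # z_e: (batch, dim)
        # Squared distances from each input to every codebook atom.
        d = (z_e.pow(2).sum(1, keepdim=True)
             - 2 * z_e @ self.codebook.weight.t()
             + self.codebook.weight.pow(2).sum(1))
        idx = d.argmin(dim=1)                     # chosen atom per input
        z_q = self.codebook(idx)
        # Straight-through: the decoder sees z_q, gradients flow back to z_e.
        z_q_st = z_e + (z_q - z_e).detach()
        commit = (z_e - z_q.detach()).pow(2).mean()   # commitment term
        return z_q_st, idx, commit
```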

Document Hashing with Mixture-Prior Generative Models

Aug 29, 2019
Wei Dong, Qinliang Su, Dinghan Shen, Changyou Chen

Hashing is promising for large-scale information retrieval tasks thanks to the efficiency of distance evaluation between binary codes. Generative hashing is often used to generate hash codes in an unsupervised way. However, existing generative hashing methods have only considered simple priors, such as Gaussian and Bernoulli priors, which limits their ability to further improve performance. In this paper, two mixture-prior generative models are proposed with the objective of producing high-quality hash codes for documents. Specifically, a Gaussian-mixture prior is first imposed on the variational autoencoder (VAE), followed by a separate step that casts the continuous latent representation of the VAE into binary code. To avoid the performance loss caused by this separate casting, a model using a Bernoulli-mixture prior is further developed, in which end-to-end training is enabled by resorting to the straight-through (ST) discrete gradient estimator. Experimental results on several benchmark datasets demonstrate that the proposed methods, especially the one using Bernoulli-mixture priors, consistently outperform existing ones by a substantial margin.
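
The Bernoulli-mixture variant combines discrete codes sampled with the ST estimator and a KL term against the mixture prior. Below is a hedged sketch of a single-sample Monte Carlo estimate of that KL, assuming a factorized Bernoulli posterior; the function name and tensor shapes are assumptions, not the paper's implementation.

```python
import torch

def kl_q_to_bernoulli_mixture(q_probs, codes, mix_weights, mix_probs, eps=1e-8):
    """Monte Carlo estimate of KL(q(b|x) || p(b)) for a Bernoulli-mixture prior.

    q_probs:     (batch, d)  factorized posterior probabilities
    codes:       (batch, d)  binary samples drawn from q (e.g. via ST sampling)
    mix_weights: (K,)        mixture weights, summing to 1
    mix_probs:   (K, d)      Bernoulli parameters of each mixture component
    """
    log_q = (codes * torch.log(q_probs + eps)
             + (1 - codes) * torch.log(1 - q_probs + eps)).sum(dim=1)            # (batch,)
    # log p(b) under each component, then log-sum-exp over components.
    log_p_k = (codes.unsqueeze(1) * torch.log(mix_probs + eps)
               + (1 - codes.unsqueeze(1)) * torch.log(1 - mix_probs + eps)).sum(dim=2)  # (batch, K)
    log_p = torch.logsumexp(torch.log(mix_weights + eps) + log_p_k, dim=1)       # (batch,)
    return (log_q - log_p).mean()

# 4 documents, 16-bit codes, 10 mixture components.
q = torch.rand(4, 16); b = torch.bernoulli(q)
w = torch.full((10,), 0.1); rho = torch.rand(10, 16)
print(kl_q_to_bernoulli_mixture(q, b, w, rho))
```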

* 10 pages, 8 figures, to appear at EMNLP-IJCNLP 2019 

A Deep Neural Information Fusion Architecture for Textual Network Embeddings

Aug 29, 2019
Zenan Xu, Qinliang Su, Xiaojun Quan, Weijia Zhang

Textual network embeddings aim to learn a low-dimensional representation for every node in a network so that both the structural and the textual information of the network are well preserved in the representations. Traditionally, structural and textual embeddings were learned by models that rarely take the mutual influence between them into account. In this paper, a deep neural architecture is proposed to effectively fuse the two kinds of information into one representation. The novelties of the proposed architecture lie in a newly defined objective function, a complementary information-fusion method for structural and textual features, and a mutual gate mechanism for textual feature extraction. Experimental results show that the proposed model outperforms the competing methods on all three datasets.
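
As a generic illustration of fusing structural and textual node embeddings with a learned gate, loosely in the spirit of the fusion described above; the module name and dimensions are illustrative, and the paper's fusion and mutual gate mechanisms are more elaborate.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a structural embedding s and a textual embedding t with a learned
    gate, so each dimension mixes the two sources adaptively."""
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, s, t):                          # both (batch, dim)
        g = torch.sigmoid(self.gate(torch.cat([s, t], dim=1)))
        return g * s + (1 - g) * t                    # fused node embedding

fusion = GatedFusion(dim=128)
fused = fusion(torch.randn(32, 128), torch.randn(32, 128))
print(fused.shape)                                    # torch.Size([32, 128])
```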

* To appear at EMNLP-IJCNLP 2019 (Conference on Empirical Methods in Natural Language Processing & International Joint Conference on Natural Language Processing 2019) 

Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms

May 24, 2018
Dinghan Shen, Guoyin Wang, Wenlin Wang, Martin Renqiang Min, Qinliang Su, Yizhe Zhang, Chunyuan Li, Ricardo Henao, Lawrence Carin

Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations. However, there has not been a rigorous evaluation of the added value of sophisticated compositional functions. In this paper, we conduct a point-by-point comparative study of Simple Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling operations, relative to word-embedding-based RNN/CNN models. Surprisingly, SWEMs exhibit comparable or even superior performance in the majority of cases considered. Based on this understanding, we propose two additional pooling strategies over learned word embeddings: (i) a max-pooling operation for improved interpretability; and (ii) a hierarchical pooling operation, which preserves spatial (n-gram) information within text sequences. We present experiments on 17 datasets encompassing three tasks: (i) (long) document classification; (ii) text sequence matching; and (iii) short text tasks, including classification and tagging. The source code and datasets can be obtained from https://github.com/dinghanshen/SWEM.
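
A minimal sketch of the two pooling strategies described above, operating directly on a batch of word embeddings (the authors' released code is at the repository linked above); the window size and tensor shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

def swem_max(emb):
    """Max-pooling over word embeddings. emb: (batch, seq_len, dim)."""
    return emb.max(dim=1).values

def swem_hier(emb, window=5):
    """Hierarchical pooling: average over local n-gram windows, then
    max-pool over the window averages to keep spatial information."""
    # avg_pool1d expects (batch, dim, seq_len).
    local_avg = F.avg_pool1d(emb.transpose(1, 2), kernel_size=window, stride=1)
    return local_avg.max(dim=2).values               # (batch, dim)

emb = torch.randn(8, 40, 300)                        # e.g. 40 word vectors per text
print(swem_max(emb).shape, swem_hier(emb).shape)     # both torch.Size([8, 300])
```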

* To appear at ACL 2018 (code: https://github.com/dinghanshen/SWEM) 

NASH: Toward End-to-End Neural Architecture for Generative Semantic Hashing

May 14, 2018
Dinghan Shen, Qinliang Su, Paidamoyo Chapfuwa, Wenlin Wang, Guoyin Wang, Lawrence Carin, Ricardo Henao

Semantic hashing has become a powerful paradigm for fast similarity search in many information retrieval systems. While fairly successful, previous techniques generally require two-stage training, and the binary constraints are handled ad hoc. In this paper, we present an end-to-end Neural Architecture for Semantic Hashing (NASH), where the binary hash codes are treated as Bernoulli latent variables. A neural variational inference framework is proposed for training, where gradients are directly back-propagated through the discrete latent variables to optimize the hash function. We also draw connections between the proposed method and rate-distortion theory, which provides a theoretical foundation for the effectiveness of the proposed framework. Experimental results on three public datasets demonstrate that our method significantly outperforms several state-of-the-art models in both unsupervised and supervised scenarios.
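
A compact sketch of the general setup, assuming a bag-of-words encoder, Bernoulli hash bits sampled with straight-through gradients, and a closed-form KL to a uniform Bernoulli(0.5) prior; the layer sizes and loss details are illustrative, not NASH's published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BernoulliHasher(nn.Module):
    """Encoder -> Bernoulli hash bits (straight-through) -> bag-of-words decoder."""
    def __init__(self, vocab_size=10000, code_bits=32):
        super().__init__()
        self.enc = nn.Linear(vocab_size, code_bits)
        self.dec = nn.Linear(code_bits, vocab_size)

    def forward(self, bow):                        # bow: (batch, vocab_size) word counts
        probs = torch.sigmoid(self.enc(bow))
        bits = torch.bernoulli(probs)
        bits = bits + probs - probs.detach()       # straight-through gradients
        logits = self.dec(bits)
        recon = -(bow * F.log_softmax(logits, dim=1)).sum(1).mean()
        # KL(q(b|x) || Bernoulli(0.5)) in closed form per bit.
        kl = (probs * torch.log(probs * 2 + 1e-8)
              + (1 - probs) * torch.log((1 - probs) * 2 + 1e-8)).sum(1).mean()
        return recon + kl
```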

* To appear at ACL 2018 

Deconvolutional Latent-Variable Model for Text Sequence Matching

Nov 22, 2017
Dinghan Shen, Yizhe Zhang, Ricardo Henao, Qinliang Su, Lawrence Carin

A latent-variable model is introduced for text matching, inferring sentence representations by jointly optimizing generative and discriminative objectives. To alleviate typical optimization challenges in latent-variable models for text, we employ deconvolutional networks as the sequence decoder (generator), providing the learned latent codes with more semantic information and better generalization. Our model, trained in an unsupervised manner, yields stronger empirical predictive performance than a decoder based on Long Short-Term Memory (LSTM), with fewer parameters and considerably faster training. Further, we apply it to text sequence-matching problems. The proposed model significantly outperforms several strong sentence-encoding baselines, especially in the semi-supervised setting.
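
A rough sketch of a deconvolutional (transposed-convolution) sequence decoder that expands a latent code into word logits without recurrence; the layer sizes, kernel widths, and fixed output length are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DeconvDecoder(nn.Module):
    """Expand a latent code into a sequence of word logits with stacked
    transposed convolutions instead of an autoregressive LSTM."""
    def __init__(self, latent_dim=128, vocab_size=20000, channels=300):
        super().__init__()
        self.fc = nn.Linear(latent_dim, channels * 4)      # seed sequence of length 4
        self.deconv = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
        )                                                   # length 4 -> 8 -> 16
        self.out = nn.Linear(channels, vocab_size)

    def forward(self, z):                                   # z: (batch, latent_dim)
        h = self.fc(z).view(z.size(0), -1, 4)               # (batch, channels, 4)
        h = self.deconv(h)                                  # (batch, channels, 16)
        return self.out(h.transpose(1, 2))                  # (batch, 16, vocab_size)

logits = DeconvDecoder()(torch.randn(2, 128))
print(logits.shape)                                         # torch.Size([2, 16, 20000])
```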

* Accepted by AAAI-2018 