Ruohong Zhang

PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification

May 24, 2023
Yau-Shian Wang, Ta-Chung Chi, Ruohong Zhang, Yiming Yang

We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text-matching problem in which each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label matching, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5% accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show that all components of PESCO are necessary for improving the performance of zero-shot text classification.

* Accepted by ACL 2023
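Below is a minimal, self-contained sketch of the label-matching and self-training loss described in the abstract: each document is encoded as a query and scored against prompt-enhanced label descriptions, and the retrieved labels serve as pseudo-targets in a contrastive loss. The toy hashing encoder, the prompt template, and the example text are illustrative assumptions; PESCO itself uses a pretrained sentence encoder, which is not shown here.

```python
import torch
import torch.nn.functional as F

LABELS = ["sports", "business", "technology"]
PROMPT = "This article is about {}"   # hypothetical prompt template

def encode(texts, dim=64):
    """Toy hashing bag-of-words encoder (stand-in for a pretrained sentence encoder)."""
    vecs = torch.zeros(len(texts), dim)
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return F.normalize(vecs, dim=-1)

def zero_shot_predict(docs):
    doc_emb = encode(docs)                                    # documents as queries
    label_emb = encode([PROMPT.format(l) for l in LABELS])    # prompt-enhanced labels
    return (doc_emb @ label_emb.T).argmax(dim=-1)             # most similar label wins

def self_training_loss(doc_emb, label_emb, temperature=0.05):
    """Contrastive step: the retrieved (top-matching) label acts as the positive,
    the remaining labels as negatives, via cross-entropy over scaled similarities."""
    logits = doc_emb @ label_emb.T / temperature
    pseudo_labels = logits.argmax(dim=-1).detach()
    return F.cross_entropy(logits, pseudo_labels)

preds = zero_shot_predict(["This story is about business and the stock markets"])
print(LABELS[preds[0].item()])   # -> business (with the toy encoder)
```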

Generation-driven Contrastive Self-training for Zero-shot Text Classification with Instruction-tuned GPT

Apr 24, 2023
Ruohong Zhang, Yau-Shian Wang, Yiming Yang

GPT-based zero-shot classification models tend to make independent predictions over test instances, which can be sub-optimal because instance correlations and the decision boundaries in the target space are ignored. To address these limitations, we propose a new approach to zero-shot text classification, GenCo, which leverages the strong generative power of GPT to assist in training a smaller, more adaptable, and efficient sentence-encoder classifier with contrastive self-training. Specifically, GenCo applies GPT in two ways: first, it generates multiple augmented texts for each input instance to enhance its semantic embedding and improve the mapping to relevant labels; second, it generates augmented texts conditioned on the predicted label during self-training, which tailors the generative process to the decision boundaries in the target space. In our experiments, GenCo outperforms previous state-of-the-art methods on multiple benchmark datasets, even when only limited in-domain text data is available.
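A hedged sketch of the two roles of the generator described above. The functions generate_paraphrases and generate_conditioned are hypothetical placeholders for calls to an instruction-tuned GPT, and the hashing encoder stands in for the smaller sentence encoder being trained; none of this is the paper's released code.

```python
import torch
import torch.nn.functional as F

def encode(texts, dim=64):
    """Toy hashing encoder standing in for a trainable sentence encoder."""
    vecs = torch.zeros(len(texts), dim)
    for i, text in enumerate(texts):
        for tok in text.lower().split():
            vecs[i, hash(tok) % dim] += 1.0
    return F.normalize(vecs, dim=-1)

def generate_paraphrases(text, k=2):
    # Placeholder: a real system would prompt an instruction-tuned GPT here.
    return [f"{text} (rewritten variant {i})" for i in range(k)]

def generate_conditioned(label_desc):
    # Placeholder: generate a pseudo-document conditioned on a predicted label.
    return f"A short article about {label_desc}."

def enriched_embedding(text):
    """Use 1: average the instance embedding with embeddings of its generated augmentations."""
    views = [text] + generate_paraphrases(text)
    return F.normalize(encode(views).mean(dim=0, keepdim=True), dim=-1)

def conditioned_training_pair(label_descs, predicted_label):
    """Use 2: a (generated document, pseudo-label) pair for contrastive self-training."""
    return encode([generate_conditioned(label_descs[predicted_label])]), predicted_label

emb = enriched_embedding("The match ended with a dramatic penalty shootout.")
doc_emb, y = conditioned_training_pair(["sports", "finance"], predicted_label=0)
print(emb.shape, doc_emb.shape, y)
```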


Long-tailed Extreme Multi-label Text Classification with Generated Pseudo Label Descriptions

Apr 02, 2022
Ruohong Zhang, Yau-Shian Wang, Yiming Yang, Donghan Yu, Tom Vu, Likun Lei

Extreme Multi-label Text Classification (XMTC) has been a tough challenge in machine learning research and applications due to the sheer size of the label spaces and the severe data scarcity associated with the long tail of rare labels in highly skewed distributions. This paper addresses the challenge of tail-label prediction by proposing a novel approach that combines the effectiveness of a trained bag-of-words (BoW) classifier in generating informative label descriptions under severe data scarcity with the power of neural embedding-based retrieval models in mapping input documents (as queries) to relevant label descriptions. The proposed approach achieves state-of-the-art performance on XMTC benchmark datasets and significantly outperforms the best prior methods in tail-label prediction. We also provide a theoretical analysis relating the BoW and neural models with respect to a performance lower bound.
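A small illustrative sketch of the recipe above: the top-weighted terms of a trained BoW (TF-IDF plus logistic regression) classifier form a generated pseudo label description for each label, and new documents are then matched to those descriptions as a retrieval problem. The TF-IDF retrieval step and the tiny dataset are stand-ins for the paper's neural retrieval model and the XMTC benchmarks (assumptions for illustration only).

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

docs = ["the team won the football match", "stocks fell as markets reacted",
        "the striker scored two goals", "the central bank raised interest rates"]
labels = np.array([0, 1, 0, 1])                     # 0 = sports, 1 = finance

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

# Generated pseudo label descriptions: the top-weighted BoW terms for each label.
terms = np.array(vec.get_feature_names_out())
weights = np.vstack([-clf.coef_[0], clf.coef_[0]])  # per-class weights in the binary case
descriptions = [" ".join(terms[np.argsort(-weights[c])[:6]]) for c in range(2)]

# Retrieval step: match new documents (queries) against the generated descriptions.
queries = vec.transform(["striker scored two goals in stoppage time",
                         "bank raised interest rates again"])
print(cosine_similarity(queries, vec.transform(descriptions)).argmax(axis=1))  # expect [0 1]
```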


Exploiting Local and Global Features in Transformer-based Extreme Multi-label Text Classification

Apr 02, 2022
Ruohong Zhang, Yau-Shian Wang, Yiming Yang, Tom Vu, Likun Lei

Extreme multi-label text classification (XMTC) is the task of tagging each document with the relevant labels from a very large space of predefined categories. Recently, large pre-trained Transformer models have brought significant performance improvements to XMTC; these models typically use the embedding of the special [CLS] token to represent the entire document semantics as a global feature vector and match it against candidate labels. However, we argue that such a global feature vector may not be sufficient to represent the different granularity levels of semantics in the document, and that complementing it with local word-level features could bring additional gains. Based on this insight, we propose an approach that combines both the local and global features produced by Transformer models to improve the predictive power of the classifier. Our experiments show that the proposed model either outperforms or is comparable to the state-of-the-art methods on benchmark datasets.
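A minimal sketch of fusing a global [CLS] vector with local token-level features, as described above. The specific pooling and fusion choices here (max-pooling plus concatenation into a linear label scorer) are assumptions for illustration, not necessarily the paper's exact design.

```python
import torch
import torch.nn as nn

class LocalGlobalScorer(nn.Module):
    def __init__(self, hidden=768, num_labels=1000):
        super().__init__()
        self.label_scorer = nn.Linear(2 * hidden, num_labels)

    def forward(self, token_states):          # (batch, seq_len, hidden) from a Transformer
        global_feat = token_states[:, 0]      # [CLS] embedding as the global document vector
        local_feat, _ = token_states[:, 1:].max(dim=1)   # word-level features, max-pooled
        fused = torch.cat([global_feat, local_feat], dim=-1)
        return self.label_scorer(fused)       # one score per candidate label

scores = LocalGlobalScorer()(torch.randn(2, 16, 768))
print(scores.shape)   # torch.Size([2, 1000])
```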


Generalized Multi-Relational Graph Convolution Network

Jun 12, 2020
Donghan Yu, Yiming Yang, Ruohong Zhang, Yuexin Wu

Graph Convolutional Networks (GCNs) have received increasing attention in machine learning in recent years. How to effectively leverage the rich structural information in complex graphs, such as knowledge graphs with heterogeneous types of entities and relations, is a primary open challenge in the field. Most GCN methods are either restricted to graphs with a homogeneous type of edges (e.g., citation links only), or focus on representation learning for nodes only rather than jointly optimizing the embeddings of both nodes and edges for target-driven objectives. This paper addresses these limitations by proposing a novel framework, the GEneralized Multi-relational Graph Convolutional Networks (GEM-GCN), which combines the power of GCNs in graph-based belief propagation with the strengths of advanced knowledge-base embedding methods, and goes beyond both. Our theoretical analysis shows that GEM-GCN offers an elegant unification of several well-known GCN methods as special cases, providing a new perspective on graph convolution. Experimental results on benchmark datasets show the advantageous performance of GEM-GCN over strong baseline methods in the tasks of knowledge graph alignment and entity classification.
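A hedged sketch of one message-passing layer in the spirit of the framework above: node embeddings are updated by neighbor messages modulated by jointly learned relation (edge) embeddings. The element-wise composition and the residual update are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class RelationalLayer(nn.Module):
    def __init__(self, num_relations, dim):
        super().__init__()
        self.rel_emb = nn.Embedding(num_relations, dim)   # edge (relation) embeddings
        self.linear = nn.Linear(dim, dim)

    def forward(self, node_emb, edges, edge_types):
        # edges: (num_edges, 2) with (src, dst) indices; edge_types: (num_edges,)
        src, dst = edges[:, 0], edges[:, 1]
        messages = node_emb[src] * self.rel_emb(edge_types)   # relation-modulated messages
        out = torch.zeros_like(node_emb)
        out.index_add_(0, dst, messages)                      # aggregate by destination node
        return torch.relu(self.linear(out) + node_emb)        # residual node update

nodes = torch.randn(5, 16)
edges = torch.tensor([[0, 1], [2, 1], [3, 4]])
etypes = torch.tensor([0, 1, 0])
print(RelationalLayer(num_relations=2, dim=16)(nodes, edges, etypes).shape)
```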


Explainable Unsupervised Change-point Detection via Graph Neural Networks

Apr 24, 2020
Ruohong Zhang, Yu Hao, Donghan Yu, Wei-Cheng Chang, Guokun Lai, Yiming Yang

Change-point detection (CPD) aims to detect the abrupt property changes underlying time series data. The property changes in a multivariate time series often result from highly entangled causes, ranging from independent changes of individual variables to changes in the correlations between variables. Learning to uncover the reasons behind the changes in an unsupervised setting is a new and challenging task. Previous CPD methods usually detect change-points via divergence estimation of statistical features, without delving into the reasons behind the detected changes. In this paper, we propose a correlation-aware dynamics model that separately predicts the correlation change and the independent change by incorporating graph neural networks into an encoder-decoder framework. Through experiments on synthetic and real-world datasets, we demonstrate the enhanced performance of our model on CPD tasks as well as its ability to interpret the nature and degree of the predicted changes.

* 15 pages, 8 figures 
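A much-simplified, hedged sketch of the idea in the abstract above: fit a correlation-aware (cross-variable) one-step predictor and an independent (per-variable) predictor on a reference window, then inspect where their prediction errors grow. The linear least-squares predictors and the synthetic two-variable series are stand-ins for the paper's graph-neural encoder-decoder and real datasets; comparing which error grows gives only a rough attribution of the change.

```python
import numpy as np

def one_step_errors(x, window=50):
    """x: (T, d) multivariate series; returns per-step errors of two predictors
    fitted on the first `window` observations."""
    ref_in, ref_out = x[:window - 1], x[1:window]
    # Correlation-aware predictor: each variable predicted from ALL variables.
    W_full, *_ = np.linalg.lstsq(ref_in, ref_out, rcond=None)
    # Independent predictor: each variable predicted from itself only (diagonal AR(1)).
    w_diag = (ref_in * ref_out).sum(0) / (ref_in ** 2).sum(0)
    err_full = np.linalg.norm(x[1:] - x[:-1] @ W_full, axis=1)
    err_indep = np.linalg.norm(x[1:] - x[:-1] * w_diag, axis=1)
    return err_full, err_indep

rng = np.random.default_rng(0)
A1 = np.array([[0.9, 0.0], [0.5, 0.4]])   # variables coupled before the change
A2 = np.array([[0.9, 0.0], [0.0, 0.4]])   # coupling removed at t = 120
x = np.zeros((200, 2))
for t in range(199):
    x[t + 1] = x[t] @ (A1 if t < 120 else A2).T + 0.1 * rng.normal(size=2)

err_full, err_indep = one_step_errors(x)
print(err_full[:120].mean(), err_full[120:].mean())  # coupled model's error rises after t = 120
```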

Graph-Revised Convolutional Network

Nov 17, 2019
Donghan Yu, Ruohong Zhang, Zhengbao Jiang, Yuexin Wu, Yiming Yang

Graph Convolutional Networks (GCNs) have received increasing attention in the machine learning community for effectively leveraging both the content features of nodes and the linkage patterns across graphs in various applications. Because real-world graphs are often incomplete and noisy, treating them as ground-truth information, as most GCNs do, unavoidably leads to sub-optimal solutions. Existing efforts to address this problem either involve over-parameterized models that are difficult to scale or simply re-weight observed edges without dealing with the missing-edge issue. This paper proposes a novel framework called the Graph-Revised Convolutional Network (GRCN), which avoids both extremes. Specifically, a GCN-based graph revision module is introduced to predict missing edges and revise edge weights with respect to downstream tasks via joint optimization. A theoretical analysis reveals the connection between GRCN and previous work on multigraph belief propagation. Experiments on six benchmark datasets show that GRCN consistently outperforms strong baseline methods by a large margin, especially when the original graphs are severely incomplete or the labeled instances for model training are highly sparse.
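A hedged sketch of the graph-revision idea described above: a small GCN-style layer produces node embeddings, pairwise similarities of those embeddings propose candidate edge weights, and the revised adjacency (observed plus predicted) feeds a downstream classifier. The dense matrices, the single propagation step, and the additive revision are simplifying assumptions; the paper's sparsification and architecture details are not shown.

```python
import torch
import torch.nn as nn

class GRCNSketch(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.revise = nn.Linear(in_dim, hid_dim)     # embeddings used to predict edges
        self.classify = nn.Linear(in_dim, num_classes)

    def forward(self, adj, feats):
        # Graph revision: propose edge weights from embedding similarity.
        z = torch.relu(self.revise(adj @ feats))
        predicted = torch.sigmoid(z @ z.T)           # candidate (missing) edges
        revised_adj = adj + predicted                # revise/augment the observed graph
        # Downstream task: one propagation step over the revised graph.
        return self.classify(revised_adj @ feats)

adj = torch.eye(6) + torch.diag(torch.ones(5), 1)    # toy, possibly incomplete graph
logits = GRCNSketch(in_dim=8, hid_dim=16, num_classes=3)(adj, torch.randn(6, 8))
print(logits.shape)   # torch.Size([6, 3])
```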
