Ta-Chung Chi

Advancing Regular Language Reasoning in Linear Recurrent Neural Networks

Sep 14, 2023
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky

In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language modeling and long-range modeling while offering rapid parallel training and constant inference cost. With this resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training sequences, such as the grammatical structures of regular language. We theoretically analyze some existing LRNNs and discover their limitations on regular language. Motivated by this analysis, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Experiments suggest that the proposed model is the only LRNN capable of length extrapolation on regular language tasks such as Sum, Even Pair, and Modular Arithmetic.
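
To make the proposed architecture concrete, here is a minimal sketch of a linear recurrence whose transition matrix is block-diagonal and input-dependent. The 2x2 block size, the tanh parameterization, and the additive input term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def lrnn_step(h, x, W_blocks, block_size=2):
    """One linear-recurrence step h_t = A(x_t) h_{t-1} + x_t, where A(x_t) is
    block-diagonal with small blocks whose entries are computed from the
    current input. Illustrative sketch, not the paper's exact parameterization."""
    new_h = np.zeros_like(h)
    for b in range(h.shape[0] // block_size):
        sl = slice(b * block_size, (b + 1) * block_size)
        A_b = np.tanh(W_blocks[b] @ x).reshape(block_size, block_size)
        new_h[sl] = A_b @ h[sl] + x[sl]
    return new_h

rng = np.random.default_rng(0)
d = 8
W_blocks = rng.normal(size=(d // 2, 4, d)) * 0.1   # maps input -> entries of each 2x2 block
h = np.zeros(d)
for x in rng.normal(size=(5, d)):                  # toy length-5 input sequence
    h = lrnn_step(h, x, W_blocks)
print(h)
```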

* The first two authors contributed equally to this work 

Structured Dialogue Discourse Parsing

Jun 26, 2023
Ta-Chung Chi, Alexander I. Rudnicky


Dialogue discourse parsing aims to uncover the internal structure of a multi-participant conversation by finding all the discourse links and their corresponding relations. Previous work either treats this task as a series of independent multiple-choice problems, in which link existence and relations are decoded separately, or restricts the encoding to local interactions only, ignoring holistic structural information. In contrast, we propose a principled method that improves upon previous work from two perspectives: encoding and decoding. On the encoding side, we perform structured encoding on the adjacency matrix followed by the matrix-tree learning algorithm, in which all discourse links and relations in the dialogue are jointly optimized based on a latent tree-level distribution. On the decoding side, we perform structured inference using the modified Chu-Liu-Edmonds algorithm, which explicitly generates the labeled multi-root non-projective spanning tree that best captures the discourse structure. In addition, unlike previous work, we do not rely on hand-crafted features, which improves the model's robustness. Experiments show that our method achieves new state-of-the-art results, surpassing the previous model by 2.3 F1 on STAC and 1.5 F1 on Molweni. Code is released at https://github.com/chijames/structured_dialogue_discourse_parsing.
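
The matrix-tree step referenced above can be illustrated with a short sketch: given head-to-modifier edge scores and root scores, the log-partition over non-projective dependency trees is the log-determinant of a modified Laplacian (in the style of Koo et al., 2007). The paper's labeled, multi-root variant may differ in details.

```python
import numpy as np

def tree_log_partition(edge_scores, root_scores):
    """Log-partition over non-projective dependency trees via the Matrix-Tree
    theorem: exponentiate head->modifier scores, build the Laplacian, replace
    its first row with root weights, and take the log-determinant. Sketch only."""
    W = np.exp(edge_scores)          # W[h, m]: weight of head h -> modifier m
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=0)) - W   # graph Laplacian
    L_hat = L.copy()
    L_hat[0, :] = np.exp(root_scores)
    sign, logdet = np.linalg.slogdet(L_hat)
    return logdet

rng = np.random.default_rng(0)
n = 5                                # five utterances
print(tree_log_partition(rng.normal(size=(n, n)), rng.normal(size=n)))
```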

* 9 pages, accepted at SIGDIAL 2022 

PESCO: Prompt-enhanced Self Contrastive Learning for Zero-shot Text Classification

May 24, 2023
Yau-Shian Wang, Ta-Chung Chi, Ruohong Zhang, Yiming Yang


We present PESCO, a novel contrastive learning framework that substantially improves the performance of zero-shot text classification. We formulate text classification as a neural text matching problem where each document is treated as a query, and the system learns the mapping from each query to the relevant class labels by (1) adding prompts to enhance label matching, and (2) using retrieved labels to enrich the training set in a self-training loop of contrastive learning. PESCO achieves state-of-the-art performance on four benchmark text classification datasets. On DBpedia, we achieve 98.5% accuracy without any labeled data, which is close to the fully-supervised result. Extensive experiments and analyses show all the components of PESCO are necessary for improving the performance of zero-shot text classification.
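
As a rough illustration of the label-matching objective, the sketch below scores each document embedding against prompted label embeddings with cosine similarity and trains with cross-entropy against pseudo-labels from the retrieval step. The prompt wording, temperature, and encoder are assumptions, not PESCO's exact configuration.

```python
import torch
import torch.nn.functional as F

def label_matching_loss(doc_emb, label_emb, pseudo_targets, temperature=0.05):
    """Contrastive label matching: documents are queries, prompted class labels
    (e.g. "This article is about sports.") are keys, and pseudo_targets come
    from the retrieval/self-training step. Temperature and prompts are assumed."""
    q = F.normalize(doc_emb, dim=-1)           # (batch, dim)
    k = F.normalize(label_emb, dim=-1)         # (num_classes, dim)
    logits = q @ k.T / temperature             # cosine-similarity logits
    return F.cross_entropy(logits, pseudo_targets)

doc_emb = torch.randn(4, 128)                  # encoder outputs (placeholder)
label_emb = torch.randn(14, 128)               # e.g. 14 DBpedia classes
pseudo_targets = torch.tensor([0, 3, 7, 13])   # labels retrieved during self-training
print(label_matching_loss(doc_emb, label_emb, pseudo_targets))
```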

* Accepted by ACL 2023 

Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings

May 23, 2023
Ta-Chung Chi, Ting-Han Fan, Li-Wei Chen, Alexander I. Rudnicky, Peter J. Ramadge


The use of positional embeddings in transformer language models is widely accepted. However, recent research has called into question the necessity of such embeddings. We further extend this inquiry by demonstrating that a randomly initialized and frozen transformer language model, devoid of positional embeddings, inherently encodes strong positional information through the shrinkage of self-attention variance. To quantify this variance, we derive the underlying distribution of each step within a transformer layer. Through empirical validation using a fully pretrained model, we show that the variance shrinkage effect still persists after extensive gradient updates. Our findings serve to justify the decision to discard positional embeddings and thus facilitate more efficient pretraining of transformer language models.
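
The variance-shrinkage effect is easy to reproduce empirically: in a randomly initialized causal attention head with no positional embeddings, later positions average over more value vectors, so the output variance decreases with position. The single-head, single-layer setup below is an illustrative simplification of the paper's analysis.

```python
import torch

torch.manual_seed(0)
d, seq_len, n_samples = 64, 128, 200

# A single randomly initialized causal attention head, no positional embeddings.
Wq, Wk, Wv = (torch.randn(d, d) / d ** 0.5 for _ in range(3))
mask = torch.tril(torch.ones(seq_len, seq_len)).bool()

x = torch.randn(n_samples, seq_len, d)                 # i.i.d. token embeddings
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = (q @ k.transpose(-1, -2)) / d ** 0.5
scores = scores.masked_fill(~mask, float("-inf"))
out = torch.softmax(scores, dim=-1) @ v

# Variance of the attention output per position, averaged over samples:
# later positions average more value vectors, so their variance is smaller.
var_per_pos = out.var(dim=-1).mean(dim=0)
print(var_per_pos[:3], var_per_pos[-3:])
```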

* Accepted by ACL 2023 

Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation

May 05, 2023
Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky, Peter J. Ramadge


Conventional wisdom has it that, unlike recurrent models, Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on natural language length extrapolation and surprisingly find that it rediscovers the local windowed-attention effect deemed necessary for length extrapolation in prior work.
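
The Sliding-Dilated-Attention component can be pictured as a causal mask in which a layer attends with a fixed stride inside a small window, so depth builds memory over exponentially longer spans. The mask below is a sketch of that pattern; RegularGPT's exact mask, weight sharing, and adaptive-depth mechanism may differ.

```python
import numpy as np

def sliding_dilated_mask(seq_len, window, dilation):
    """Causal mask where query position i attends to i, i-dilation,
    i-2*dilation, ..., for at most `window` keys. A sketch of the
    sliding-dilated pattern; RegularGPT's exact mask may differ."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        for w in range(window):
            j = i - w * dilation
            if j < 0:
                break
            mask[i, j] = True
    return mask

# Layer l uses dilation 2**l, so the receptive field doubles with depth,
# which is how depth can act as working memory for tasks like PARITY.
for layer, dilation in enumerate([1, 2, 4]):
    print(f"layer {layer} (dilation {dilation}):")
    print(sliding_dilated_mask(8, window=2, dilation=dilation).astype(int))
```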


Receptive Field Alignment Enables Transformer Length Extrapolation

Dec 20, 2022
Ta-Chung Chi, Ting-Han Fan, Alexander I. Rudnicky


Length extrapolation is a desirable property that permits training a transformer language model on short sequences while retaining similar perplexities when the model is tested on substantially longer sequences. ALiBi, a relative positional embedding mechanism applied to the transformer self-attention matrix, demonstrates the length extrapolation property with the widest usage to date. In this paper, we show that ALiBi surprisingly does not utilize tokens beyond the training sequence length, which can be explained by its implicit windowed-attention effect that aligns the receptive field between the training and testing stages. Inspired by ALiBi and the receptive field alignment hypothesis, we propose another transformer positional embedding design named Sandwich that uses information beyond the training sequence length; it is a greatly simplified formulation of the earliest proposed Sinusoidal positional embedding. Finally, we show that both ALiBi and Sandwich enable efficient inference thanks to their implicit windowed-attention effect.
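
For reference, the sketch below constructs the ALiBi bias (a linear penalty on distance) and a Sandwich-style bias obtained as the dot product of Sinusoidal position embeddings, which depends only on the distance between positions. The scaling and dimension split used for Sandwich in the paper are assumptions here.

```python
import numpy as np

def alibi_bias(seq_len, slope):
    """ALiBi: add -slope * (i - j) to logit (i, j), masking future positions."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return np.where(j <= i, -slope * (i - j), -np.inf)

def sinusoidal_dot_bias(seq_len, d=64):
    """Sandwich-style bias: the dot product of Sinusoidal position embeddings,
    which reduces to sum_k cos((i - j) * freq_k) and depends only on distance.
    The paper's exact scaling/dimension split is not reproduced here."""
    pos = np.arange(seq_len)[:, None]
    freqs = 1.0 / 10000 ** (np.arange(0, d, 2) / d)
    pe = np.concatenate([np.sin(pos * freqs), np.cos(pos * freqs)], axis=-1)
    return pe @ pe.T

print(alibi_bias(4, slope=0.5))
print(np.round(sinusoidal_dot_bias(4), 2))
```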

* Work in progress 

On Task-Adaptive Pretraining for Dialogue Response Selection

Oct 08, 2022
Tzu-Hsiang Lin, Ta-Chung Chi, Anna Rumshisky


Recent advances in dialogue response selection (DRS) are based on the task-adaptive pre-training (TAP) approach: a model is first initialized with BERT (Devlin et al., 2019) and then adapted to dialogue data with dialogue-specific or fine-grained pre-training tasks. However, it is uncertain whether BERT is the best initialization choice, or whether the proposed dialogue-specific fine-grained learning tasks are actually better than MLM+NSP. This paper aims to verify the assumptions made in previous work and to understand the source of improvements for DRS. We show that initializing with RoBERTa achieves similar performance to BERT, and that MLM+NSP can outperform all previously proposed TAP tasks; in the process, we also contribute a new state of the art on the Ubuntu corpus. Additional analyses show that the main source of improvement is the TAP step, and that the NSP task is crucial to DRS, in contrast to common NLU tasks.
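
A TAP step with MLM+NSP on dialogue data can be sketched as follows: each dialogue context paired with its true next utterance is a positive NSP example, and a randomly sampled utterance gives a negative one. The separator convention and sampling scheme below are illustrative assumptions rather than the exact recipe evaluated in the paper.

```python
import random

def make_nsp_examples(dialogues, neg_ratio=1):
    """Build MLM+NSP-style pairs from raw dialogues: a context and its true next
    utterance form a positive pair; a response sampled from the whole corpus
    forms a negative pair (it may occasionally coincide with the true one in
    this toy sketch). Separators and sampling are illustrative assumptions."""
    all_utts = [u for d in dialogues for u in d]
    examples = []
    for d in dialogues:
        for t in range(1, len(d)):
            context = " [SEP] ".join(d[:t])
            examples.append((context, d[t], 1))                        # is-next
            for _ in range(neg_ratio):
                examples.append((context, random.choice(all_utts), 0)) # random
    return examples

dialogues = [["hi", "hello, how can I help?", "my vpn is down"],
             ["is the build green?", "yes, it was merged an hour ago"]]
for ctx, resp, label in make_nsp_examples(dialogues)[:4]:
    print(label, "|", ctx, "->", resp)
```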

* 6 pages, 4 figures 

Training Discrete Deep Generative Models via Gapped Straight-Through Estimator

Jun 15, 2022
Ting-Han Fan, Ta-Chung Chi, Alexander I. Rudnicky, Peter J. Ramadge


While deep generative models have succeeded in image processing, natural language processing, and reinforcement learning, training that involves discrete random variables remains challenging due to the high variance of its gradient estimation process. Monte Carlo is a common solution used in most variance reduction approaches. However, this involves time-consuming resampling and multiple function evaluations. We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead. This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax. We determine these properties and show via an ablation study that they are essential. Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks, MNIST-VAE and ListOps.
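
For context, the sketch below implements the Straight-Through Gumbel-Softmax baseline that inspires GST: the forward pass emits a hard one-hot sample while gradients flow through the soft relaxation. It is not the GST estimator itself, whose gap-based construction is described in the paper.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, tau=1.0):
    """Straight-Through Gumbel-Softmax: the forward pass emits a hard one-hot
    sample; gradients flow through the soft relaxation. This is the baseline
    estimator that GST builds on, not the GST estimator itself."""
    gumbels = -torch.log(-torch.log(torch.rand_like(logits)))
    soft = F.softmax((logits + gumbels) / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), logits.shape[-1]).to(soft.dtype)
    return hard + soft - soft.detach()         # hard forward, soft backward

logits = torch.randn(4, 10, requires_grad=True)
sample = st_gumbel_softmax(logits)             # one-hot in the forward pass
loss = (sample * torch.arange(10.0)).sum()     # toy downstream objective
loss.backward()                                # gradients still reach `logits`
print(sample.argmax(dim=-1), logits.grad.abs().sum())
```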

* Accepted at the International Conference on Machine Learning (ICML) 2022. The first two authors contributed equally 

KERPLE: Kernelized Relative Positional Embedding for Length Extrapolation

May 20, 2022
Ta-Chung Chi, Ting-Han Fan, Peter J. Ramadge, Alexander I. Rudnicky


Relative positional embeddings (RPE) have received considerable attention since they effectively model the relative distance among tokens and enable length extrapolation. We propose KERPLE, a framework that generalizes relative position embedding for extrapolation by kernelizing positional differences. We achieve this goal using conditionally positive definite (CPD) kernels, a class of functions known for generalizing distance metrics. To maintain the inner-product interpretation of self-attention, we show that a CPD kernel can be transformed into a positive definite (PD) kernel by adding a constant offset. This offset is implicitly absorbed in the Softmax normalization during self-attention. The diversity of CPD kernels allows us to derive various RPEs that enable length extrapolation in a principled way. Experiments demonstrate that the logarithmic variant achieves excellent extrapolation performance on three large language modeling datasets.
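
The logarithmic variant mentioned above corresponds to a distance-based bias of the form -r1 * log(1 + r2 * |i - j|) added to the attention logits. The sketch treats r1 and r2 as fixed scalars, whereas the paper learns them per attention head.

```python
import torch

def kerple_log_bias(seq_len, r1, r2):
    """Logarithmic KERPLE-style bias: -r1 * log(1 + r2 * |i - j|), r1, r2 > 0.
    Fixed scalars are used here for illustration; the CPD->PD constant offset
    is absorbed by the softmax and therefore omitted."""
    pos = torch.arange(seq_len)
    dist = (pos[:, None] - pos[None, :]).abs().float()
    return -r1 * torch.log1p(r2 * dist)

bias = kerple_log_bias(6, r1=1.0, r2=0.5)   # added to attention logits before softmax
print(bias)
```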

* The first two authors contributed equally to this work 

Zero-Shot Dialogue Disentanglement by Self-Supervised Entangled Response Selection

Oct 25, 2021
Ta-Chung Chi, Alexander I. Rudnicky


Dialogue disentanglement aims to group the utterances of a long, multi-participant dialogue into threads. This is useful for discourse analysis and downstream applications such as dialogue response selection, where it can be the first step in constructing a clean context/response set. Unfortunately, labeling all reply-to links takes quadratic effort with respect to the number of utterances: an annotator must check all preceding utterances to identify the one to which the current utterance is a reply. In this paper, we are the first to propose a zero-shot dialogue disentanglement solution. First, we train a model on an unannotated multi-participant response selection dataset harvested from the web; we then apply the trained model to perform zero-shot dialogue disentanglement. Without any labeled data, our model achieves a cluster F1 score of 25. We also fine-tune the model using various amounts of labeled data. Experiments show that with only 10% of the data, we achieve nearly the same performance as using the full dataset. Code is released at https://github.com/chijames/zero_shot_dialogue_disentanglement.
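
A zero-shot disentanglement pipeline in the spirit of the paper can be sketched as follows: score each utterance against its predecessors with the pretrained entangled response-selection model (mocked here by a toy scorer), link it to the best-scoring predecessor above a threshold, and read threads off the resulting forest. The greedy linking rule, threshold, and scorer are illustrative assumptions.

```python
def disentangle(utterances, link_score, threshold=0):
    """Greedy zero-shot disentanglement sketch: link each utterance to its
    best-scoring predecessor (scores would come from the entangled
    response-selection model), or start a new thread if no score exceeds
    the threshold, then read off the clusters via union-find."""
    parent = list(range(len(utterances)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for t in range(1, len(utterances)):
        best_score, best_j = max((link_score(utterances[j], utterances[t]), j)
                                 for j in range(t))
        if best_score > threshold:
            parent[find(t)] = find(best_j)

    threads = {}
    for t in range(len(utterances)):
        threads.setdefault(find(t), []).append(utterances[t])
    return list(threads.values())

# Toy scorer standing in for the trained model: count shared words.
score = lambda a, b: len(set(a.split()) & set(b.split()))
print(disentangle(["vpn is down", "which build failed?",
                   "restart your vpn client", "the nightly build failed"], score))
```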

* 6 pages, accepted by EMNLP 2021 