Hongqiu Wu

Chinese Spelling Correction as Rephrasing Language Model

Aug 17, 2023
Linfeng Liu, Hongqiu Wu, Hai Zhao

This paper studies Chinese Spelling Correction (CSC), which aims to detect and correct potential spelling errors in a given sentence. Current state-of-the-art methods regard CSC as a sequence tagging task and fine-tune BERT-based models on sentence pairs. However, we note a critical flaw in this character-to-character tagging process: the correction is excessively conditioned on the error. This runs counter to how humans correct text, rephrasing the complete sentence based on its semantics rather than relying solely on memorized error patterns. Such a counter-intuitive learning process bottlenecks the generalizability and transferability of machine spelling correction. To address this, we propose \textit{Rephrasing Language Modeling} (ReLM), where the model is trained to rephrase the entire sentence by infilling additional slots, instead of character-to-character tagging. This novel training paradigm achieves new state-of-the-art results across fine-tuned and zero-shot CSC benchmarks, outperforming previous counterparts by a large margin. Our method also learns transferable language representations when CSC is jointly trained with other tasks.
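
As a minimal sketch of the rephrasing idea (the exact slot layout, separators, and label handling in ReLM are not spelled out in the abstract, so the details below are assumptions), a training example might be built by appending one masked slot per target character after the erroneous sentence and supervising only the slots:

```python
# Sketch: build a rephrasing-style training pair for CSC.
# Rather than tagging source character i into target character i,
# the model regenerates the whole corrected sentence in masked slots.

MASK, SEP, IGNORE = "[MASK]", "[SEP]", "<ignore>"

def build_rephrasing_example(src, tgt):
    """src: erroneous sentence, tgt: corrected sentence (same length here)."""
    input_tokens = list(src) + [SEP] + [MASK] * len(tgt)
    # Loss is computed only on the slot positions.
    labels = [IGNORE] * (len(src) + 1) + list(tgt)
    return input_tokens, labels

tokens, labels = build_rephrasing_example("我今天很高心", "我今天很高兴")  # 心 -> 兴
print(tokens)
print(labels)
```

Because every slot is predicted from the sentence semantics rather than copied from the aligned source character, the correction is no longer tied one-to-one to the error position.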

Rethinking Masked Language Modeling for Chinese Spelling Correction

May 28, 2023
Hongqiu Wu, Shaohua Zhang, Yuchen Zhang, Hai Zhao

In this paper, we study Chinese Spelling Correction (CSC) as a joint decision made by two separate models: a language model and an error model. Through empirical analysis, we find that fine-tuning BERT tends to over-fit the error model while under-fitting the language model, resulting in poor generalization to out-of-distribution error patterns. Given that BERT is the backbone of most CSC models, this phenomenon has a significant negative impact. To address this issue, we release LEMON, a multi-domain benchmark with higher quality and diversity than existing benchmarks, to allow a comprehensive assessment of the open-domain generalization of CSC models. We then demonstrate that a very simple strategy, randomly masking 20% of non-error tokens from the input sequence during fine-tuning, is sufficient for learning a much better language model without sacrificing the error model. This technique can be applied to any model architecture and achieves new state-of-the-art results on SIGHAN, ECSpell, and LEMON.
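
The masking strategy is concrete enough to sketch; a hedged toy implementation (token-level, assuming aligned source/target sequences) might look like:

```python
import random

MASK = "[MASK]"

def mask_non_error_tokens(src_tokens, tgt_tokens, mask_rate=0.2, seed=None):
    """Randomly replace ~20% of the *non-error* input tokens with [MASK],
    forcing the model to keep exercising its language model instead of
    only memorizing error patterns. Error positions (src != tgt) stay intact."""
    rng = random.Random(seed)
    return [MASK if s == t and rng.random() < mask_rate else s
            for s, t in zip(src_tokens, tgt_tokens)]

src, tgt = list("我今天很高心"), list("我今天很高兴")
print(mask_non_error_tokens(src, tgt, seed=0))
```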

* Accepted by ACL'2023 

Attack Named Entity Recognition by Entity Boundary Interference

May 09, 2023
Yifei Yang, Hongqiu Wu, Hai Zhao

Named Entity Recognition (NER) is a cornerstone NLP task, yet its robustness has received little attention. This paper rethinks the principles of NER attacks derived from sentence classification, as they can easily violate the label consistency between the original and adversarial NER examples. This is due to the fine-grained nature of NER: even minor word changes in the sentence can cause entities to emerge or mutate, resulting in invalid adversarial examples. To this end, we propose a novel one-word-modification NER attack based on a key insight: NER models rely heavily on entity boundary positions to make their decisions. We thus strategically insert a new boundary into the sentence to trigger Entity Boundary Interference, whereby the victim model makes a wrong prediction either on this boundary word or on other words in the sentence. We call this attack Virtual Boundary Attack (ViBA). It is remarkably effective when attacking both English and Chinese models, with a 70%-90% attack success rate on state-of-the-art language models (e.g., RoBERTa, DeBERTa), and is also significantly faster than previous methods.
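
A hedged sketch of the attack loop (the candidate boundary words, insertion positions, and victim-model interface below are illustrative assumptions, not the paper's exact procedure):

```python
def viba_attack(tokens, predict_tags, candidates):
    """Try every single-word insertion; succeed if the victim's predictions
    change on the other words or the inserted word itself is tagged as an entity."""
    orig_pred = predict_tags(tokens)
    for pos in range(len(tokens) + 1):
        for cand in candidates:
            adv = tokens[:pos] + [cand] + tokens[pos:]
            pred = predict_tags(adv)
            rest = pred[:pos] + pred[pos + 1:]          # predictions on the original words
            if rest != orig_pred or pred[pos] != "O":
                return adv
    return None

# Toy victim model: tags any token following "Mr." as a person entity.
def toy_tagger(tokens):
    return ["B-PER" if i and tokens[i - 1] == "Mr." else "O"
            for i in range(len(tokens))]

print(viba_attack(["Mr.", "Smith", "arrived"], toy_tagger, ["Mr.", "Inc."]))
```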

Toward Adversarial Training on Contextualized Language Representation

May 08, 2023
Hongqiu Wu, Yongxiang Liu, Hanwen Shi, Hai Zhao, Min Zhang

Beyond the success story of adversarial training (AT) on top of pre-trained language models (PLMs) in the text domain, our empirical study shows inconsistent gains from AT on some tasks, e.g., commonsense reasoning and named entity recognition. This paper investigates AT from the perspective of the contextualized language representation output by PLM encoders. We find that current AT attacks tend to generate sub-optimal adversarial examples that can fool the decoder part but have little effect on the encoder, whereas it is the encoder that must be effectively deviated for AT to yield gains. Based on this observation, we propose the simple yet effective \textit{Contextualized representation-Adversarial Training} (CreAT), in which the attack is explicitly optimized to deviate the contextualized representation of the encoder. This allows a global optimization of adversarial examples that can fool the entire model. We also find that CreAT provides a better direction for optimizing adversarial examples, making them less sensitive to hyperparameters. Compared to AT, CreAT produces consistent performance gains on a wider range of tasks and proves more effective for language pre-training, where only the encoder is kept for downstream tasks. We achieve new state-of-the-art performance on a series of challenging benchmarks, e.g., AdvGLUE (59.1 $\rightarrow$ 61.1), HellaSWAG (93.0 $\rightarrow$ 94.9), and ANLI (68.1 $\rightarrow$ 69.3).
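
A minimal PyTorch-style sketch of the idea, assuming `encoder` returns a pooled contextual representation and `head` maps it to logits (the norm constraint, single ascent step, and loss weighting below are our assumptions):

```python
import torch
import torch.nn.functional as F

def creat_loss(encoder, head, embeds, labels, eps=1e-3, alpha=1.0):
    """Attack: perturb the input embeddings so as to deviate the encoder's
    contextualized representation from the clean one; defence: train on both
    clean and adversarial inputs."""
    clean_states = encoder(embeds).detach()

    delta = torch.zeros_like(embeds, requires_grad=True)
    deviation = 1.0 - F.cosine_similarity(encoder(embeds + delta), clean_states, dim=-1).mean()
    grad, = torch.autograd.grad(deviation, delta)
    delta = (eps * grad / (grad.norm() + 1e-12)).detach()   # one ascent step on the deviation

    clean_loss = F.cross_entropy(head(encoder(embeds)), labels)
    adv_loss = F.cross_entropy(head(encoder(embeds + delta)), labels)
    return clean_loss + alpha * adv_loss

# Toy usage: mean-pooling encoder and a linear classification head.
torch.manual_seed(0)
hidden, n_labels = 16, 3
proj = torch.nn.Linear(hidden, hidden)
encoder = lambda e: torch.tanh(proj(e)).mean(dim=1)
head = torch.nn.Linear(hidden, n_labels)
print(creat_loss(encoder, head, torch.randn(4, 10, hidden), torch.randint(0, n_labels, (4,))))
```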

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

Oct 19, 2022
Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang

Multiple pre-training objectives compensate for the limited understanding capability of single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios. However, learning multiple training objectives in a single model is challenging due to their unknown relative importance and the potential conflicts between them. Empirical studies have shown that current objective sampling, done in an ad-hoc manual manner, leaves the learned language representation far from the desired optimum. We therefore propose \textit{MOMETAS}, a novel adaptive sampler based on meta-learning, which learns the latent sampling pattern over arbitrary pre-training objectives. The design is lightweight, with negligible additional training overhead. To validate our approach, we adopt five objectives and conduct continual pre-training with BERT-base and BERT-large models, where MOMETAS demonstrates universal performance gains over other rule-based sampling strategies on 14 natural language processing tasks.
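
The abstract does not spell out the meta-learning update, so the sketch below only illustrates the sampler interface with a softmax-over-rewards bandit as a stand-in; the reward signal, update rule, and objective names are assumptions:

```python
import math
import random

class AdaptiveObjectiveSampler:
    """Keeps a score per pre-training objective and samples objectives
    proportionally to exp(score); scores are nudged by a meta reward
    (e.g., improvement on a held-out meta batch)."""

    def __init__(self, objectives, lr=0.1):
        self.scores = {o: 0.0 for o in objectives}
        self.lr = lr

    def sample(self):
        names = list(self.scores)
        weights = [math.exp(self.scores[o]) for o in names]
        return random.choices(names, weights=weights, k=1)[0]

    def update(self, objective, reward):
        self.scores[objective] += self.lr * reward

sampler = AdaptiveObjectiveSampler(["mlm", "sop", "span", "denoise", "contrastive"])
for step in range(5):
    obj = sampler.sample()
    reward = random.random()              # placeholder for the meta signal
    sampler.update(obj, reward)
    print(step, obj, round(reward, 3))
```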

* EMNLP 2022 (findings) 

Semantic-Preserving Adversarial Code Comprehension

Sep 12, 2022
Yiyang Li, Hongqiu Wu, Hai Zhao

Building on the tremendous success of pre-trained language models (PrLMs) for source code comprehension tasks, the current literature studies either ways to further improve the performance (generalization) of PrLMs or their robustness against adversarial attacks. However, these approaches compromise on the trade-off between the two aspects, and none of them consider improving both sides in an effective and practical way. To fill this gap, we propose Semantic-Preserving Adversarial Code Embeddings (SPACE), which finds worst-case semantic-preserving attacks while forcing the model to predict the correct labels under these worst cases. Experiments and analysis demonstrate that SPACE stays robust against state-of-the-art attacks while boosting the performance of PrLMs for code.
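
As a hedged illustration of the min-max objective (restricting the perturbation to identifier-token embeddings is our stand-in for "semantic-preserving"; the paper's actual attack space and step rule are not reproduced):

```python
import torch
import torch.nn.functional as F

def space_loss(model, embeds, identifier_mask, labels, eps=1e-2):
    """model: maps embeddings [B, T, H] to logits [B, C].
    identifier_mask: [B, T, 1], 1 at identifier tokens, 0 elsewhere.
    Inner max: worst-case perturbation at identifier positions only;
    outer min: predict the correct labels under that perturbation."""
    delta = torch.zeros_like(embeds, requires_grad=True)
    loss = F.cross_entropy(model(embeds + delta * identifier_mask), labels)
    grad, = torch.autograd.grad(loss, delta)
    adv_delta = (eps * grad.sign() * identifier_mask).detach()
    return (F.cross_entropy(model(embeds), labels)
            + F.cross_entropy(model(embeds + adv_delta), labels))

# Toy usage: a mean-pooling classifier over 12 tokens, the first 3 being identifiers.
torch.manual_seed(0)
clf = torch.nn.Linear(16, 2)
model = lambda e: clf(e.mean(dim=1))
embeds = torch.randn(2, 12, 16)
id_mask = torch.zeros(2, 12, 1)
id_mask[:, :3] = 1.0
print(space_loss(model, embeds, id_mask, torch.tensor([0, 1])))
```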

* Accepted by COLING 2022 

Adversarial Self-Attention for Language Understanding

Jun 25, 2022
Hongqiu Wu, Hai Zhao

An ideal language system aims at high generalization and robustness when adapting to diverse scenarios. Unfortunately, recent pre-trained language models (PrLMs) rarely achieve higher performance except by stacking excessive parameters onto the already over-parameterized Transformer architecture. This paper therefore proposes the \textit{Adversarial Self-Attention} mechanism (ASA), which adversarially reconstructs the Transformer attention and facilitates model training under contaminated model structures, coupled with a fast and simple implementation for better PrLM building. We conduct comprehensive evaluations across a wide range of tasks at both the pre-training and fine-tuning stages. For pre-training, ASA yields remarkable performance gains over regular training for longer periods. For fine-tuning, ASA-empowered models consistently outperform naive models by a large margin in terms of both generalization and robustness.
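
A hedged sketch of the mechanism: an adversarial bias is added to the attention logits so as to maximize the task loss (a "contaminated" structure), and the model is then trained under it. The interface `model(inputs, attn_bias)` and the single-step sign attack are assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def asa_loss(model, inputs, labels, attn_shape, eps=0.1):
    """model(inputs, attn_bias): adds attn_bias to the self-attention logits
    and returns classification logits. attn_shape: e.g. (batch, T, T)."""
    bias = torch.zeros(attn_shape, requires_grad=True)
    loss = F.cross_entropy(model(inputs, bias), labels)
    grad, = torch.autograd.grad(loss, bias)
    adv_bias = (eps * grad.sign()).detach()       # push attention toward harmful patterns
    clean_loss = F.cross_entropy(model(inputs, torch.zeros(attn_shape)), labels)
    adv_loss = F.cross_entropy(model(inputs, adv_bias), labels)
    return clean_loss + adv_loss

# Toy usage: the "model" pools tokens with softmax(bias)-weighted attention,
# so the adversarial bias genuinely changes its prediction.
torch.manual_seed(0)
clf = torch.nn.Linear(16, 2)
def toy_model(inputs, attn_bias):
    attn = torch.softmax(attn_bias, dim=-1)       # (B, T, T)
    return clf((attn @ inputs).mean(dim=1))
x = torch.randn(2, 6, 16)
print(asa_loss(toy_model, x, torch.tensor([0, 1]), attn_shape=(2, 6, 6)))
```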

Adversarial Counterfactual Environment Model Learning

Jun 10, 2022
Xiong-Hui Chen, Yang Yu, Zheng-Mao Zhu, Zhihua Yu, Zhenjun Chen, Chenghe Wang, Yinan Wu, Hongqiu Wu, Rong-Jun Qin, Ruijin Ding, Fangsheng Huang

A good model for action-effect prediction, called an environment model, is important for sample-efficient decision-making policy learning in many domains, such as robot control, recommender systems, and patient treatment selection. With such a model we can take unlimited trials to identify appropriate actions, saving the cost of queries in the real world. This requires the model to handle unseen, so-called counterfactual, data correctly. However, standard data-fitting techniques do not automatically provide such generalization ability and commonly result in unreliable models. In this work, we introduce counterfactual-query risk minimization (CQRM) in model learning for generalizing to a counterfactual dataset queried by a specific target policy. Since target policies can be various and unknown during policy learning, we propose an adversarial CQRM objective in which the model learns on counterfactual data queried by adversarial policies, and we derive a tractable solution, GALILEO. We also discover that adversarial CQRM is closely related to adversarial model learning, explaining the effectiveness of the latter. We apply GALILEO to synthetic tasks and a real-world application. The results show that GALILEO makes accurate predictions on counterfactual data and thus significantly improves policies in real-world testing.

Not All Attention Is All You Need

Apr 10, 2021
Hongqiu Wu, Hai Zhao, Min Zhang

Self-attention-based models have achieved remarkable success in natural language processing. However, recent studies question the self-attention network design as suboptimal, owing to its unclear validity and high redundancy. In this paper, we focus on pre-trained language models with a self-pruning training design applied during task-specific tuning. We demonstrate that lighter state-of-the-art models, with nearly 80% of their self-attention layers pruned, can achieve even better results on multiple tasks, including natural language understanding, document classification, named entity recognition, and POS tagging, with nearly twice as fast inference.
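
A hedged sketch of what dropping attention sublayers could look like (the paper learns which layers to prune during tuning; the toy block and the "keep the last 20%" heuristic below are only illustrative):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """Toy Transformer block: attention sublayer + feed-forward sublayer."""
    def __init__(self, d):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))
        self.use_attn = True                      # False once the sublayer is pruned

    def forward(self, x):
        if self.use_attn:
            x = x + self.attn(x, x, x, need_weights=False)[0]
        return x + self.ffn(x)

def prune_attention(blocks, keep_ratio=0.2):
    """Disable the attention sublayer in all but ~20% of the blocks."""
    keep_from = int(len(blocks) * (1 - keep_ratio))
    for i, blk in enumerate(blocks):
        blk.use_attn = i >= keep_from
    return blocks

blocks = nn.ModuleList(prune_attention([Block(32) for _ in range(12)]))
x = torch.randn(2, 7, 32)
for blk in blocks:
    x = blk(x)
print(x.shape, sum(b.use_attn for b in blocks), "attention sublayers kept")
```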

SIT3: Code Summarization with Structure-Induced Transformer

Dec 29, 2020
Hongqiu Wu, Hai Zhao, Min Zhang

Code summarization (CS) is becoming a promising area of natural language understanding; it aims to automatically generate sensible annotations for source code and is oriented toward programmers. Previous works attempt to apply structure-based traversal (SBT) or non-sequential models such as Tree-LSTM and GNNs to learn structural program semantics. These approaches suffer from the following drawbacks: 1) incorporating SBT into the Transformer has been shown to be ineffective; 2) GNNs are limited in capturing global information; 3) the Transformer alone is underestimated in its ability to capture structural semantics. In this paper, we propose a novel model based on structure-induced self-attention, which encodes sequential inputs with highly effective structure modeling. Extensive experiments show that our newly proposed model achieves new state-of-the-art results on popular benchmarks. To the best of our knowledge, this is the first work on code summarization to use the Transformer to model structural information with high efficiency and no extra parameters. We also provide a tutorial on our pre-processing pipeline.
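
A hedged sketch of structure-induced self-attention: ordinary scaled dot-product attention whose scores are masked by a structural adjacency matrix (e.g., derived from the AST or data-flow graph), which adds no parameters. How the paper actually constructs the adjacency is not reproduced here:

```python
import torch
import torch.nn.functional as F

def structure_induced_attention(q, k, v, adj):
    """Scaled dot-product attention restricted to structurally adjacent tokens.
    q, k, v: (B, T, d); adj: (B, T, T) with 1 where attention is allowed."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    scores = scores.masked_fill(adj == 0, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Toy usage: 5 tokens with a chain-structured adjacency (self + neighbours).
n, d = 5, 8
q = k = v = torch.randn(1, n, d)
adj = torch.eye(n) + torch.diag(torch.ones(n - 1), 1) + torch.diag(torch.ones(n - 1), -1)
out = structure_induced_attention(q, k, v, adj.unsqueeze(0))
print(out.shape)
```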
