Wenxuan Zhou

UniversalNER: Targeted Distillation from Large Language Models for Open Named Entity Recognition

Aug 07, 2023
Wenxuan Zhou, Sheng Zhang, Yu Gu, Muhao Chen, Hoifung Poon

Large language models (LLMs) have demonstrated remarkable generalizability, such as understanding arbitrary entities and relations. Instruction tuning has proven effective for distilling LLMs into more cost-efficient models such as Alpaca and Vicuna. Yet such student models still trail the original LLMs by large margins in downstream applications. In this paper, we explore targeted distillation with mission-focused instruction tuning to train student models that can excel in a broad application class such as open information extraction. Using named entity recognition (NER) as a case study, we show how ChatGPT can be distilled into much smaller UniversalNER models for open NER. For evaluation, we assemble the largest NER benchmark to date, comprising 43 datasets across 9 diverse domains such as biomedicine, programming, social media, law, and finance. Without using any direct supervision, UniversalNER attains remarkable NER accuracy across tens of thousands of entity types, outperforming general instruction-tuned models such as Alpaca and Vicuna by over 30 absolute F1 points on average. With a tiny fraction of the parameters, UniversalNER not only acquires ChatGPT's ability to recognize arbitrary entity types, but also outperforms its NER accuracy by 7-9 absolute F1 points on average. Remarkably, UniversalNER even outperforms state-of-the-art multi-task instruction-tuned systems such as InstructUIE, which uses supervised NER examples, by a large margin. We also conduct thorough ablation studies to assess the impact of various components in our distillation approach. We will release the distillation recipe, data, and UniversalNER models to facilitate future research on targeted distillation.

* Project page: https://universal-ner.github.io/ 
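
A minimal sketch of how conversation-style instruction-tuning data for open NER might be assembled from a passage and its entity annotations, in the spirit of the templates the paper describes. The data schema and exact prompt wording below are illustrative assumptions, not the released recipe.

```python
import json

def build_ner_conversation(passage, annotations):
    """Build one multi-turn conversation from a passage and its
    {entity_type: [mentions]} annotations (hypothetical schema)."""
    turns = [
        {"role": "user", "content": f"Text: {passage}"},
        {"role": "assistant", "content": "I've read this text."},
    ]
    for entity_type, mentions in annotations.items():
        # One query per entity type keeps the task open-ended: any type
        # name can be slotted into the same question at test time.
        turns.append({"role": "user",
                      "content": f"What describes {entity_type} in the text?"})
        turns.append({"role": "assistant", "content": json.dumps(mentions)})
    return turns

example = build_ner_conversation(
    "Aspirin reduces fever and was first synthesized at Bayer.",
    {"drug": ["Aspirin"], "organization": ["Bayer"]},
)
for turn in example:
    print(f'{turn["role"]}: {turn["content"]}')
```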

Robust Natural Language Understanding with Residual Attention Debiasing

May 28, 2023
Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen

Natural language understanding (NLU) models often suffer from unintended dataset biases. Among bias mitigation methods, ensemble-based debiasing methods, especially product-of-experts (PoE), have stood out for their impressive empirical success. However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns. Attention serves as the main medium of feature interaction and aggregation in pretrained language models (PLMs) and plays a crucial role in providing robust predictions. In this paper, we propose REsidual Attention Debiasing (READ), an end-to-end debiasing method that mitigates unintended biases from attention. Experiments on three NLU tasks show that READ significantly improves the performance of BERT-based models on out-of-distribution (OOD) data with shortcuts removed, including +12.9% accuracy on HANS, +11.0% accuracy on FEVER-Symmetric, and +2.7% F1 on PAWS. Detailed analyses demonstrate the crucial role of unbiased attention in robust NLU models and show that READ effectively mitigates biases in attention. Code is available at https://github.com/luka-group/READ.

* ACL 2023 Findings 
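
As a rough illustration of debiasing below the logit level, here is a hedged PyTorch sketch: the unbiased attention is modeled as the residual left after removing a bias-only branch's attention, alongside the familiar product-of-experts combination on logits. Tensor shapes, the clamping, and the renormalization are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def residual_attention(full_attn, biased_attn, eps=1e-9):
    """Treat unbiased attention as what remains of the main model's
    attention after subtracting the bias-only branch's attention,
    renormalized to a valid distribution (illustrative only)."""
    residual = torch.clamp(full_attn - biased_attn, min=0.0)
    return residual / residual.sum(dim=-1, keepdim=True).clamp(min=eps)

def poe_loss(main_logits, bias_logits, labels):
    """Product-of-experts on logits: the main model is trained to
    explain what the biased branch cannot."""
    combined = F.log_softmax(main_logits, -1) + F.log_softmax(bias_logits, -1)
    return F.cross_entropy(combined, labels)
```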

Getting Sick After Seeing a Doctor? Diagnosing and Mitigating Knowledge Conflicts in Event Temporal Reasoning

May 24, 2023
Tianqing Fang, Zhaowei Wang, Wenxuan Zhou, Hongming Zhang, Yangqiu Song, Muhao Chen

Event temporal reasoning aims at identifying the temporal relations between two or more events. However, knowledge conflicts arise when there is a mismatch between the actual temporal relations of events in the context and the prior knowledge or biases learned by the model. We first systematically define distinct kinds of bias in event temporal reasoning, including event relation prior bias, tense bias, narrative bias, and dependency bias, as indicators to study knowledge conflicts. To mitigate such event-related knowledge conflicts, we introduce a counterfactual data augmentation method that can be applied to both pretrained language models (PLMs) and large language models (LLMs), either as additional training data or as demonstrations for in-context learning. Experiments suggest that mitigating knowledge conflicts in event temporal reasoning is important for reducing hallucination, and highlight the potential of counterfactual data augmentation for improving model performance.

* 13 pages, 1 figure 
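
A minimal sketch of counterfactual augmentation in this spirit: flip the temporal cue in the context together with the gold label, so the answer can only be recovered from the context rather than from event-pair priors (e.g., "seeing a doctor" usually precedes "getting sick"). The two-label inventory and example schema are illustrative assumptions.

```python
# Hypothetical inverse map over a two-label temporal relation inventory.
INVERSE = {"before": "after", "after": "before"}

def counterfactual_augment(example):
    """Swap the temporal cue word in the context and flip the gold label
    accordingly, producing a context-only-answerable instance."""
    new_rel = INVERSE[example["relation"]]
    return {
        "context": example["context"].replace(example["relation"], new_rel, 1),
        "events": example["events"],
        "relation": new_rel,
    }

seed = {"context": "He got sick after he saw a doctor.",
        "events": ("got sick", "saw a doctor"),
        "relation": "after"}
print(counterfactual_augment(seed))
# -> context "He got sick before he saw a doctor.", relation "before"
```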

A Causal View of Entity Bias in (Large) Language Models

May 24, 2023
Fei Wang, Wenjie Mo, Yiwei Wang, Wenxuan Zhou, Muhao Chen

Entity bias widely affects pretrained (large) language models, causing them to rely excessively on (biased) parametric knowledge and make unfaithful predictions. Although causality-inspired methods have shown great potential to mitigate entity bias, it is hard to precisely estimate the parameters of the underlying causal models in practice. The rise of black-box LLMs makes the situation worse, given their inaccessible parameters and uncalibrated logits. To address these problems, we propose a specific structural causal model (SCM) whose parameters are comparatively easier to estimate. Building upon this SCM, we propose causal intervention techniques to mitigate entity bias in both white-box and black-box settings. The proposed causal intervention perturbs the original entity with neighboring entities. This intervention reduces specific biasing information pertaining to the original entity while preserving sufficient common predictive information from similar entities. When evaluated on the relation extraction task, our training-time intervention significantly improves the F1 score of RoBERTa by 5.7 points on EntRED, in which spurious shortcuts between entities and labels are removed. Meanwhile, our in-context intervention effectively reduces knowledge conflicts between parametric knowledge and contextual knowledge in GPT-3.5 and improves the F1 score by 9.14 points on a challenging test set derived from Re-TACRED.

* Work in progress 
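
A hedged sketch of one way the neighboring-entity intervention could be realized: find the nearest neighbors of the original entity in an entity embedding table (a hypothetical resource here), substitute them into the input, and average the model's predictions over the perturbed copies.

```python
import numpy as np

def nearest_neighbor_entities(entity, vocab, vecs, k=3):
    """Return the k entities most similar to `entity` by cosine
    similarity in a (hypothetical) entity embedding table `vecs`."""
    v = vecs[vocab.index(entity)]
    sims = vecs @ v / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(v) + 1e-9)
    ranked = [vocab[i] for i in np.argsort(-sims)]
    return [e for e in ranked if e != entity][:k]

def intervened_inputs(text, entity, vocab, vecs, k=3):
    """Build perturbed inputs; a downstream model's predictions would
    then be averaged over these copies, diluting entity-specific
    shortcuts while keeping type-level information."""
    return [text.replace(entity, nb)
            for nb in nearest_neighbor_entities(entity, vocab, vecs, k)]
```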

EntRED: Benchmarking Relation Extraction with Fewer Shortcuts

May 22, 2023
Yiwei Wang, Bryan Hooi, Fei Wang, Yujun Cai, Yuxuan Liang, Wenxuan Zhou, Jing Tang, Manjuan Duan, Muhao Chen

Entity names play an important role in relation extraction (RE) and often influence model performance. As a result, the entity names in a benchmark's test set significantly influence the evaluation of RE models. In this work, we find that standard RE benchmarks contain a large portion of incorrect entity annotations, exhibit low entity name diversity, and are prone to shortcuts from entity names to ground-truth relations. These issues leave the standard benchmarks far from reflective of real-world scenarios. Hence, we present EntRED, a challenging RE benchmark with reduced shortcuts and higher entity diversity. To build EntRED, we propose ERIC, an end-to-end entity replacement pipeline based on causal inference (CI). ERIC performs type-constrained replacements on entities to reduce the shortcuts from entity bias to ground-truth relations. ERIC applies CI in two aspects: 1) targeting the instances that need entity replacements, and 2) determining the candidate entities for replacement. We apply ERIC to TACRED to produce EntRED. EntRED evaluates whether an RE model can correctly extract relations from the text rather than relying on entity bias. Empirical results reveal that even strong RE models suffer a significant performance drop on EntRED, as they memorize entity name patterns instead of reasoning from the textual context. We release ERIC's source code and the EntRED benchmark at https://github.com/wangywUST/ENTRED.

* arXiv admin note: text overlap with arXiv:2109.05620 by other authors 
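
A stripped-down sketch of the type-constrained replacement at the heart of ERIC. It deliberately omits the causal-inference scoring the paper uses to decide which instances to edit and which candidate entities to prefer, and the instance schema is an assumption.

```python
import random

def type_constrained_replace(instance, entities_by_type, rng=random):
    """Swap each entity mention for a random same-type entity so the
    relation label can no longer be read off the names alone.
    `instance` schema: {"text": str, "entities": [(mention, type), ...]}."""
    text = instance["text"]
    for mention, etype in instance["entities"]:
        candidates = [e for e in entities_by_type.get(etype, []) if e != mention]
        if candidates:
            text = text.replace(mention, rng.choice(candidates))
    return {**instance, "text": text}
```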

Learning Hybrid Actor-Critic Maps for 6D Non-Prehensile Manipulation

May 06, 2023
Wenxuan Zhou, Bowen Jiang, Fan Yang, Chris Paxton, David Held

Manipulating objects without grasping them is an essential component of human dexterity, referred to as non-prehensile manipulation. Non-prehensile manipulation may enable more complex interactions with objects, but also presents challenges in reasoning about those interactions. In this work, we introduce Hybrid Actor-Critic Maps for Manipulation (HACMan), a reinforcement learning approach for 6D non-prehensile manipulation of objects using point cloud observations. HACMan proposes a temporally abstracted and spatially grounded object-centric action representation that consists of selecting a contact location from the object point cloud and a set of motion parameters describing how the robot will move after making contact. We modify an existing off-policy RL algorithm to learn in this hybrid discrete-continuous action representation. We evaluate HACMan on a 6D object pose alignment task in both simulation and the real world. On the hardest version of our task, with randomized initial poses, randomized 6D goals, and diverse object categories, our policy demonstrates strong generalization to unseen object categories without a performance drop, achieving a 79% success rate on non-flat objects. Compared to alternative action representations, HACMan achieves a success rate more than three times higher than the best baseline. With zero-shot sim2real transfer, our policy can successfully manipulate unseen objects in the real world for challenging non-planar goals, using dynamic and contact-rich non-prehensile skills. Videos can be found on the project website: https://hacman-2023.github.io.
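
A hedged PyTorch sketch of the hybrid action map: every point in the cloud gets an actor-proposed continuous motion vector and a critic score for being the contact location, and the discrete part of the action is the argmax over those per-point scores. The paper uses a point cloud backbone (e.g., PointNet++) and its own motion parameterization; the MLP encoder and 6-dimensional motion vector below are placeholders.

```python
import torch
import torch.nn as nn

class HybridActorCriticMap(nn.Module):
    """Per-point hybrid action map (illustrative sketch)."""

    def __init__(self, feat_dim=64, motion_dim=6):
        super().__init__()
        # Placeholder per-point encoder; the paper uses a point cloud
        # network rather than a plain MLP over xyz coordinates.
        self.encoder = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, feat_dim))
        self.actor = nn.Linear(feat_dim, motion_dim)        # continuous part
        self.critic = nn.Linear(feat_dim + motion_dim, 1)   # per-point score

    def forward(self, points):                 # points: (N, 3) object cloud
        feats = self.encoder(points)           # (N, feat_dim)
        motions = torch.tanh(self.actor(feats))               # (N, motion_dim)
        scores = self.critic(torch.cat([feats, motions], -1)).squeeze(-1)
        idx = torch.argmax(scores)             # discrete: contact location
        return points[idx], motions[idx], scores

# Usage: contact, motion, score_map = HybridActorCriticMap()(torch.randn(1024, 3))
```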


Context-faithful Prompting for Large Language Models

Mar 20, 2023
Wenxuan Zhou, Sheng Zhang, Hoifung Poon, Muhao Chen

Large language models (LLMs) encode parametric knowledge about world facts and have shown remarkable performance in knowledge-driven NLP tasks. However, their reliance on parametric knowledge may cause them to overlook contextual cues, leading to incorrect predictions in context-sensitive NLP tasks (e.g., knowledge acquisition tasks). In this paper, we seek to assess and enhance LLMs' contextual faithfulness in two aspects: knowledge conflict and prediction with abstention. We demonstrate that LLMs' faithfulness can be significantly improved with carefully designed prompting strategies. In particular, we identify opinion-based prompts and counterfactual demonstrations as the most effective methods. Opinion-based prompts reframe the context as a narrator's statement and inquire about the narrator's opinions, while counterfactual demonstrations use instances containing false facts to improve faithfulness in knowledge-conflict situations. Neither technique requires additional training. We conduct experiments on three datasets spanning two standard NLP tasks, machine reading comprehension and relation extraction, and the results demonstrate significant improvements in faithfulness to contexts.

* Code and data will be released at https://github.com/wzhouad/context-faithful-llm 
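
A minimal sketch of an opinion-based prompt along the lines the paper describes; the narrator name and exact wording are assumptions.

```python
def opinion_based_prompt(context, question):
    """Reframe the context as a narrator's statement and ask for the
    narrator's opinion, steering the model toward the given context
    rather than its parametric knowledge."""
    return (f'Bob said, "{context}"\n'
            f"Q: {question} in Bob's opinion?\n"
            "A:")

# A counterfactual context makes the difference visible: a faithful
# model should answer from Bob's statement, not from world knowledge.
print(opinion_based_prompt(
    "The theory of relativity was developed by Isaac Newton.",
    "Who developed the theory of relativity",
))
```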

Continual Contrastive Finetuning Improves Low-Resource Relation Extraction

Dec 21, 2022
Wenxuan Zhou, Sheng Zhang, Tristan Naumann, Muhao Chen, Hoifung Poon

Relation extraction (RE), which relies on structurally annotated corpora for model training, is particularly challenging in low-resource scenarios and domains. Recent literature has tackled low-resource RE with self-supervised learning, where the solution involves pretraining relation embeddings with an RE-based objective and finetuning on labeled data with a classification-based objective. A critical challenge to this approach, however, is the gap between the two objectives, which prevents the RE model from fully utilizing the knowledge in the pretrained representations. In this paper, we aim to bridge this gap and propose to pretrain and finetune the RE model using consistent objectives of contrastive learning. Since one relation may easily form multiple clusters in the representation space under this paradigm, we further propose a multi-center contrastive loss that allows one relation to form multiple clusters, better aligning finetuning with pretraining. Experiments on two document-level RE datasets, BioRED and Re-DocRED, demonstrate the effectiveness of our method. In particular, when using 1% of the end-task training data, our method outperforms a PLM-based RE classifier by 10.5% and 5.8% on the two datasets, respectively.
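
A hedged PyTorch sketch of a multi-center contrastive loss in this spirit: each relation owns K learnable centers, every example is pulled toward the best-matching center of its gold relation, and all centers form the contrast set. Shapes, temperature, and the nearest-center assignment are assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F

def multi_center_contrastive_loss(reps, labels, centers, tau=0.1):
    """reps: (B, d) relation embeddings; labels: (B,) relation ids;
    centers: (R, K, d) learnable centers, K per relation."""
    reps = F.normalize(reps, dim=-1)
    centers = F.normalize(centers, dim=-1)
    sims = torch.einsum("bd,rkd->brk", reps, centers) / tau   # (B, R, K)
    # Positive: the best-matching center of the gold relation, which
    # lets one relation occupy several clusters in embedding space.
    pos = sims[torch.arange(reps.size(0)), labels].max(dim=-1).values
    # Contrast against every center of every relation.
    return (torch.logsumexp(sims.flatten(1), dim=-1) - pos).mean()

# Usage: centers = torch.nn.Parameter(torch.randn(num_relations, 4, dim))
```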


Multi-hop Evidence Retrieval for Cross-document Relation Extraction

Dec 21, 2022
Keming Lu, I-Hung Hsu, Wenxuan Zhou, Mingyu Derek Ma, Muhao Chen

Relation extraction (RE) has been extended to cross-document scenarios because many relations are not simply described in a single document. This inevitably brings the challenge of efficient open-space evidence retrieval to support the inference of cross-document relations, along with the challenge of multi-hop reasoning over entities and evidence scattered across an open set of documents. To combat these challenges, we propose Mr.CoD, a multi-hop evidence retrieval method based on evidence path mining and ranking with adapted dense retrievers. We explore multiple retriever variants to show that evidence retrieval is an essential part of cross-document RE. Experiments on CodRED show that evidence retrieval with Mr.CoD effectively acquires cross-document evidence that supports open-setting cross-document RE. Additionally, we show that Mr.CoD facilitates evidence retrieval and boosts end-to-end RE performance through effective multi-hop reasoning in both closed and open RE settings.

* Work in progress 
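
A hedged sketch of evidence path mining with a dense retriever: start from passages that mention the head entity, expand paths through bridge entities, and keep paths that reach the tail. The `retriever.search(query, k)` API and the passage schema (`entities`, `score`) are hypothetical, and the summed-score ranking is a naive stand-in for the paper's path ranking.

```python
def retrieve_evidence_paths(head, tail, retriever, hops=2, beam=5):
    """Beam-style multi-hop expansion over retrieved passages.
    Each passage is assumed to be a dict with an `entities` set and a
    retrieval `score` (hypothetical schema)."""
    paths = [[p] for p in retriever.search(head, beam)]
    for _ in range(hops - 1):
        expanded = []
        for path in paths:
            # Bridge entities: anything in the last passage except the head.
            for bridge in path[-1]["entities"] - {head}:
                for p in retriever.search(bridge, beam):
                    expanded.append(path + [p])
        # Naive path ranking by summed retrieval scores.
        paths = sorted(expanded, key=lambda pa: sum(p["score"] for p in pa),
                       reverse=True)[:beam]
    return [path for path in paths if tail in path[-1]["entities"]]
```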

On-the-fly Denoising for Data Augmentation in Natural Language Understanding

Dec 20, 2022
Tianqing Fang, Wenxuan Zhou, Fangyu Liu, Hongming Zhang, Yangqiu Song, Muhao Chen

Data augmentation (DA) is frequently used to automatically provide additional training data without extra human annotation. However, data augmentation may introduce noisy data that impairs training. To guarantee the quality of augmented data, existing methods either assume no noise exists in the augmented data and adopt consistency training, or use simple heuristics such as training loss and diversity constraints to filter out "noisy" data. However, the filtered examples may still contain useful information, and dropping them entirely causes a loss of supervision signals. In this paper, based on the assumption that the original dataset is cleaner than the augmented data, we propose an on-the-fly denoising technique for data augmentation that learns from soft augmented labels provided by an organic teacher model trained on the cleaner original data. A simple self-regularization module further forces the model prediction to be consistent across two distinct dropout passes, preventing overfitting on noisy labels. Our method can be applied to general augmentation techniques and consistently improves performance on both text classification and question-answering tasks.

* 14 pages 
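
A hedged PyTorch sketch of a training objective in this spirit: augmented examples fit the organic teacher's soft labels, original examples fit their hard labels, and two dropout passes of the student are pulled toward each other with a symmetric KL term. The weighting and exact form are assumptions.

```python
import torch
import torch.nn.functional as F

def denoised_da_loss(logits_a, logits_b, teacher_logits, labels,
                     is_augmented, beta=1.0):
    """logits_a / logits_b: two dropout passes of the student (B, C);
    teacher_logits: organic teacher trained on the original data;
    is_augmented: bool mask (B,) marking augmented examples."""
    ce = F.cross_entropy(logits_a, labels, reduction="none")
    kd = F.kl_div(F.log_softmax(logits_a, -1),
                  F.softmax(teacher_logits, -1),
                  reduction="none").sum(-1)
    # Soft teacher labels on augmented data, hard labels on original data.
    fit = torch.where(is_augmented, kd, ce).mean()
    # Self-regularization: consistency across two distinct dropout passes.
    p, q = F.log_softmax(logits_a, -1), F.log_softmax(logits_b, -1)
    sr = 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return fit + beta * sr
```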