Wei Emma Zhang

When Large Language Models Meet Citation: A Survey

Sep 18, 2023
Yang Zhang, Yufei Wang, Kai Wang, Quan Z. Sheng, Lina Yao, Adnan Mahmood, Wei Emma Zhang, Rongying Zhao

Citations in scholarly work serve the essential purpose of acknowledging and crediting the original sources of knowledge that have been incorporated or referenced. Depending on their surrounding textual context, these citations serve different motivations and purposes. Large Language Models (LLMs) can help capture this fine-grained citation information via the corresponding textual context, thereby enabling a better understanding of the literature. Furthermore, citations also establish connections among scientific papers, providing high-quality inter-document relationships and human-constructed knowledge. Such information can be incorporated into LLM pre-training to improve text representations in LLMs. Therefore, in this paper, we offer a preliminary review of the mutually beneficial relationship between LLMs and citation analysis. Specifically, we review the application of LLMs to in-text citation analysis tasks, including citation classification, citation-based summarization, and citation recommendation. We then summarize research on leveraging citation linkage knowledge to improve text representations of LLMs via citation prediction, network structure information, and inter-document relationships. We finally provide an overview of these contemporary methods and put forth promising avenues for combining LLMs and citation analysis in future investigation.

SWAP: Exploiting Second-Ranked Logits for Adversarial Attacks on Time Series

Sep 06, 2023
Chang George Dong, Liangwei Nathan Zheng, Weitong Chen, Wei Emma Zhang, Lin Yue

Time series classification (TSC) has emerged as a critical task in various domains, and deep neural models have shown superior performance on TSC tasks. However, these models are vulnerable to adversarial attacks, where subtle perturbations can significantly change the prediction results. Existing adversarial methods often suffer from over-parameterization or random logit perturbation, which hinders their effectiveness. Additionally, increasing the attack success rate (ASR) typically involves generating more noise, making the attack easier to detect. To address these limitations, we propose SWAP, a novel attack method for TSC models. SWAP focuses on enhancing the confidence of the second-ranked logits while minimizing the manipulation of other logits. This is achieved by minimizing the Kullback-Leibler divergence between the target logit distribution and the predictive logit distribution. Experimental results demonstrate that SWAP achieves state-of-the-art performance, with an ASR exceeding 50% and an 18% increase over existing methods.
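
A minimal PyTorch-style sketch of the core idea described above, under our own assumptions: the function name swap_style_loss, the mass-shifting coefficient alpha, and the way the target distribution is constructed are illustrative, not the authors' exact formulation. It boosts the second-ranked class in a target distribution and minimizes the KL divergence towards it.

```python
import torch
import torch.nn.functional as F

def swap_style_loss(logits, alpha=0.9):
    """Sketch of a SWAP-style objective: build a target distribution that
    shifts most of the top-1 mass onto the second-ranked class while leaving
    other classes untouched, then minimise KL towards that target."""
    probs = F.softmax(logits, dim=-1)
    top2 = probs.topk(2, dim=-1).indices              # top-1 and top-2 class indices
    target = probs.clone().detach()
    top1_mass = target.gather(-1, top2[..., :1])
    # keep only (1 - alpha) of the top-1 mass, move the rest to the runner-up
    target.scatter_(-1, top2[..., :1], (1 - alpha) * top1_mass)
    target.scatter_(-1, top2[..., 1:2],
                    target.gather(-1, top2[..., 1:2]) + alpha * top1_mass)
    # KL(target || predicted); gradients flow back through the perturbed input's logits
    return F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")
```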

* 10 pages, 8 figures 

Learning to Select the Relevant History Turns in Conversational Question Answering

Aug 04, 2023
Munazza Zaib, Wei Emma Zhang, Quan Z. Sheng, Subhash Sagar, Adnan Mahmood, Yang Zhang

The increasing demand for web-based digital assistants has rapidly raised the interest of the Information Retrieval (IR) community in conversational question answering (ConvQA). One of the critical aspects of ConvQA is the effective selection of conversational history turns to answer the question at hand. The dependency between relevant history selection and correct answer prediction is an intriguing but under-explored area. Relevant context can better guide the system as to where exactly in the passage to look for an answer, whereas irrelevant context introduces noise and degrades the model's performance. In this paper, we propose DHS-ConvQA (Dynamic History Selection in Conversational Question Answering), a framework that first generates context and question entities for all history turns, which are then pruned on the basis of their similarity to the question at hand. We also propose an attention-based mechanism to re-rank the pruned terms according to weights reflecting how useful they are in answering the question. Finally, we further aid the model by highlighting terms in the re-ranked conversational history using a binary classification task, keeping the useful terms (predicted as 1) and ignoring the irrelevant terms (predicted as 0). We demonstrate the efficacy of our proposed framework with extensive experiments on CANARD and QuAC, two widely used ConvQA datasets. We show that selecting relevant turns works better than rewriting the original question. We also investigate how adding irrelevant history turns negatively impacts the model's performance and discuss research challenges that demand more attention from the IR community.
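
As an illustration of the pruning step only, here is a sketch under assumed details: the helper name prune_history_entities, the Jaccard similarity measure, and the threshold are hypothetical and may differ from the similarity used in DHS-ConvQA. History turns whose entities share little overlap with the current question's entities are dropped.

```python
from typing import List

def prune_history_entities(history_entities: List[List[str]],
                           question_entities: List[str],
                           threshold: float = 0.5) -> List[int]:
    """Keep only the indices of history turns whose entity sets overlap
    sufficiently with the entities of the current question (Jaccard overlap
    is used here purely for illustration)."""
    q = {e.lower() for e in question_entities}
    kept = []
    for turn_idx, ents in enumerate(history_entities):
        t = {e.lower() for e in ents}
        union = q | t
        sim = len(q & t) / len(union) if union else 0.0
        if sim >= threshold:
            kept.append(turn_idx)
    return kept
```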

Keeping the Questions Conversational: Using Structured Representations to Resolve Dependency in Conversational Question Answering

Apr 14, 2023
Munazza Zaib, Quan Z. Sheng, Wei Emma Zhang, Adnan Mahmood

Having an intelligent dialogue agent that can engage in conversational question answering (ConvQA) is no longer limited to Sci-Fi movies and has, in fact, become a reality. These intelligent agents are required to understand and correctly interpret the sequential turns provided as the context of the given question. However, these sequential questions are sometimes left implicit and thus require the resolution of natural language phenomena such as anaphora and ellipsis. The task of question rewriting has the potential to address the challenge of resolving dependencies among the contextual turns by transforming them into intent-explicit questions. Nonetheless, rewriting implicit questions comes with potential drawbacks, such as producing verbose questions and taking the conversational aspect out of the scenario by generating self-contained questions. In this paper, we propose CONVSR (CONVQA using Structured Representations), a novel framework for capturing and generating intermediate representations as conversational cues that enhance the QA model's ability to interpret incomplete questions. We also discuss how the strengths of this task could be leveraged to design more engaging and eloquent conversational agents. We test our model on the QuAC and CANARD datasets and show experimentally that our proposed framework achieves a better F1 score than the standard question rewriting model.

Incorporating Knowledge into Document Summarization: an Application of Prefix-Tuning on GPT-2

Jan 31, 2023
Chen Chen, Wei Emma Zhang, Alireza Seyed Shakeri

Despite the great progress of document summarization techniques, factual inconsistencies between generated summaries and the original text still occur from time to time. This paper proposes a prefix-tuning-based approach that uses a set of trainable continuous prefix prompts together with discrete prompts to aid model generation, which yields a significant impact on both CNN/Daily Mail and XSum summaries generated with GPT-2. The improvements in fact preservation in the generated summaries indicate the effectiveness of adopting this prefix-tuning-based method for knowledge-enhanced document summarization, and also show great potential for other natural language processing tasks.
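
A rough sketch of the general continuous-prompt idea with HuggingFace Transformers, under simplifying assumptions: it prepends trainable embeddings to the token embeddings rather than reproducing the paper's exact prefix-tuning setup, and the prompt text and prefix length are made up for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False                       # freeze GPT-2 itself

# the prefix is the only trainable component in this simplified sketch
prefix_len, hidden = 20, model.config.n_embd
prefix = torch.nn.Parameter(torch.randn(1, prefix_len, hidden) * 0.02)

inputs = tokenizer("Summarize: some source document text", return_tensors="pt")
tok_emb = model.transformer.wte(inputs["input_ids"])          # (1, seq, hidden)
full_emb = torch.cat([prefix, tok_emb], dim=1)                # prepend continuous prefix
outputs = model(inputs_embeds=full_emb)                       # logits over the vocabulary
```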

Document-aware Positional Encoding and Linguistic-guided Encoding for Abstractive Multi-document Summarization

Sep 13, 2022
Congbo Ma, Wei Emma Zhang, Pitawelayalage Dasun Dileepa Pitawela, Yutong Qu, Haojie Zhuang, Hu Wang

One key challenge in multi-document summarization is to capture the relations among input documents, which distinguishes multi-document summarization (MDS) from single-document summarization (SDS). Few existing MDS works address this issue. One effective approach is to encode document positional information to help models capture cross-document relations. However, existing MDS models, such as Transformer-based models, only consider token-level positional information. Moreover, these models fail to capture sentences' linguistic structure, which inevitably causes confusion in the generated summaries. Therefore, in this paper, we propose document-aware positional encoding and linguistic-guided encoding that can be fused with the Transformer architecture for MDS. For document-aware positional encoding, we introduce a general protocol to guide the selection of document encoding functions. For linguistic-guided encoding, we propose to embed syntactic dependency relations into a dependency relation mask with a simple but effective non-linear encoding learner for feature learning. Extensive experiments show the proposed model can generate summaries of high quality.
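
The following is only an illustrative sketch of the document-aware idea, not the paper's actual protocol: the helper document_aware_positions and the choice of simply summing two sinusoidal encodings are our own assumptions. Each token receives both a within-document position encoding and an encoding of the index of the document it belongs to.

```python
import torch

def document_aware_positions(doc_token_counts, d_model=512):
    """Combine a token-level sinusoidal encoding (restarting per document)
    with a sinusoidal encoding of the document index, so the model can tell
    which source document each token came from."""
    def sinusoid(positions, d):
        pos = positions.unsqueeze(1).float()
        i = torch.arange(d // 2).float()
        angles = pos / torch.pow(10000.0, 2 * i / d)
        return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

    token_pos, doc_pos = [], []
    for doc_idx, n_tokens in enumerate(doc_token_counts):
        token_pos.extend(range(n_tokens))          # positions restart in each document
        doc_pos.extend([doc_idx] * n_tokens)
    token_enc = sinusoid(torch.tensor(token_pos), d_model)
    doc_enc = sinusoid(torch.tensor(doc_pos), d_model)
    return token_enc + doc_enc                     # (total_tokens, d_model)
```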

Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations

Apr 29, 2022
Na Liu, Mark Dras, Wei Emma Zhang

Although deep neural networks have achieved state-of-the-art performance in various machine learning tasks, adversarial examples, constructed by adding small non-random perturbations to correctly classified inputs, successfully fool highly expressive deep classifiers into incorrect predictions. Approaches to adversarial attacks in natural language tasks have boomed in the last five years, using character-level, word-level, phrase-level, or sentence-level textual perturbations. While there is some work in NLP on defending against such attacks through proactive methods such as adversarial training, there is to our knowledge no effective general reactive approach to defence via detection of textual adversarial examples, as is found in the image processing literature. In this paper, we propose two new reactive methods for NLP to fill this gap, which, unlike the few limited-application baselines from NLP, are based entirely on distributional characteristics of learned representations: we adapt one from the image processing literature (Local Intrinsic Dimensionality (LID)) and propose a novel one (MultiDistance Representation Ensemble Method (MDRE)). Adapted LID and MDRE obtain state-of-the-art results on character-level, word-level, and phrase-level attacks on the IMDB dataset, as well as on the latter two for the MultiNLI dataset. For future research, we publish our code.
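
For concreteness, the standard maximum-likelihood LID estimator that such a detector builds on can be sketched as follows; the helper name lid_mle, the neighbourhood size k, and the use of raw Euclidean distances are illustrative assumptions, and the reference set would come from clean representations and should exclude the query itself.

```python
import numpy as np

def lid_mle(query, reference, k=20):
    """Maximum-likelihood LID estimate for one representation vector, using
    its k nearest neighbours in `reference` (shape: n_points x dim).
    Low/high LID of a sample's representation is then used as a signal for
    distinguishing clean from adversarial inputs."""
    dists = np.linalg.norm(reference - query, axis=1)
    dists = np.sort(dists)[:k]                     # k smallest neighbour distances
    dists = np.maximum(dists, 1e-12)               # guard against log(0)
    return -1.0 / np.mean(np.log(dists / dists[-1]))
```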

* 13 pages, RepL4NLP 2022 

Embedding Knowledge for Document Summarization: A Survey

Apr 24, 2022
Yutong Qu, Wei Emma Zhang, Jian Yang, Lingfei Wu, Jia Wu, Xindong Wu

Knowledge-aware methods have boosted a range of Natural Language Processing applications over the last decades. With this gathered momentum, knowledge has recently attracted enormous attention in document summarization research. Previous works have shown that knowledge-embedded document summarizers excel at generating superior digests, especially in terms of informativeness, coherence, and fact consistency. This paper presents the first systematic survey of the state-of-the-art methodologies that embed knowledge into document summarizers. In particular, we propose novel taxonomies to recapitulate knowledge and knowledge embeddings from the document summarization perspective. We further explore how embeddings are generated in the learning architectures of document summarization models, especially deep learning models. Finally, we discuss the challenges of this topic and future directions.

* 8 pages, 1 figure 

Incorporating Linguistic Knowledge for Abstractive Multi-document Summarization

Sep 23, 2021
Congbo Ma, Wei Emma Zhang, Hu Wang, Shubham Gupta, Mingyu Guo

Within natural language processing tasks, linguistic knowledge can play an important role in helping a model learn excellent representations and better guide natural language generation. In this work, we develop a neural abstractive multi-document summarization (MDS) model which leverages dependency parsing to capture cross-positional dependencies and grammatical structures. More concretely, we process the dependency information into a linguistic-guided attention mechanism and further fuse it with the multi-head attention for better feature representation. With the help of linguistic signals, sentence-level relations can be correctly captured, thus improving MDS performance. Our model has two versions, based on the Flat Transformer and the Hierarchical Transformer respectively. Empirical studies on both versions demonstrate that this simple but effective method outperforms existing works on the benchmark dataset. Extensive analyses examine different settings and configurations of the proposed model, providing a good reference for the community.
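
A compact sketch of how a dependency mask might be fused with attention scores; this is an assumed formulation for illustration, and dependency_guided_attention, the additive bias, and the scalar weight are ours rather than necessarily the paper's exact fusion.

```python
import torch

def dependency_guided_attention(scores, dep_mask, weight=1.0):
    """Bias raw attention scores with a dependency-relation mask before the
    softmax, so token pairs connected by a dependency arc receive extra
    attention weight."""
    # scores:   (batch, heads, seq, seq) raw dot-product attention scores
    # dep_mask: (batch, seq, seq), 1.0 where a dependency arc links two tokens
    biased = scores + weight * dep_mask.unsqueeze(1)
    return torch.softmax(biased, dim=-1)
```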

Conversational Question Answering: A Survey

Jun 03, 2021
Munazza Zaib, Wei Emma Zhang, Quan Z. Sheng, Adnan Mahmood, Yang Zhang

Question answering (QA) systems provide a way of querying the information available in various formats including, but not limited to, unstructured and structured data in natural languages. They constitute a considerable part of conversational artificial intelligence (AI), which has led to the introduction of a dedicated research topic, Conversational Question Answering (CQA), wherein a system is required to understand the given context and then engage in multi-turn QA to satisfy a user's information needs. While most existing research has focused on single-turn QA, multi-turn QA has recently gained attention and prominence owing to the availability of large-scale, multi-turn QA datasets and the development of pre-trained language models. With numerous models and research papers added to the literature every year, there is a pressing need to organize and present the related work in a unified manner to streamline future research. This survey is therefore an effort to present a comprehensive review of state-of-the-art research trends in CQA, primarily based on papers reviewed from 2016 to 2021. Our findings show a shift from single-turn to multi-turn QA, which empowers the field of Conversational AI from different perspectives. This survey is intended to provide a compendium for the research community, with the hope of laying a strong foundation for the field of CQA.
