Andreas Vlachos

Faster Minimum Bayes Risk Decoding with Confidence-based Pruning

Nov 25, 2023
Julius Cheng, Andreas Vlachos

Minimum Bayes risk (MBR) decoding outputs the hypothesis with the highest expected utility over the model distribution for some utility function. It has been shown to improve accuracy over beam search in conditional language generation problems and especially neural machine translation, in both human and automatic evaluations. However, the standard sampling-based algorithm for MBR is substantially more computationally expensive than beam search, requiring a large number of samples as well as a quadratic number of calls to the utility function, limiting its applicability. We describe an algorithm for MBR which gradually grows the number of samples used to estimate the utility while pruning hypotheses that are unlikely to have the highest utility according to confidence estimates obtained with bootstrap sampling. Our method requires fewer samples and drastically reduces the number of calls to the utility function compared to standard MBR while being statistically indistinguishable in terms of accuracy. We demonstrate the effectiveness of our approach in experiments on three language pairs, using chrF++ and COMET as utility/evaluation metrics.

* Updated from EMNLP 2023 version: typo fix, minor math notation change, updated citation 
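
A minimal sketch of the general idea behind confidence-based pruning for sample-based MBR, in Python. The sample-size schedule, the pruning rule, and helper names such as `utility` and `sample_stream` are illustrative assumptions, not the paper's exact algorithm.

```python
import random
from statistics import mean

def mbr_with_pruning(hypotheses, sample_stream, utility,
                     schedule=(8, 16, 32, 64, 128),
                     n_bootstrap=100, alpha=0.05):
    """Sample-based MBR with confidence-based pruning (illustrative sketch).

    hypotheses:    candidate outputs to choose among
    sample_stream: pseudo-reference samples drawn from the model
    utility:       callable u(hypothesis, reference) -> float, e.g. chrF++
    At each step, survivors are scored against a growing set of references,
    and any hypothesis whose bootstrapped probability of being the best
    falls below alpha is pruned, so later, more expensive steps score far
    fewer hypotheses.
    """
    survivors = list(dict.fromkeys(hypotheses))   # de-duplicate, keep order
    scores = {h: [] for h in survivors}
    used = 0
    for n_refs in schedule:
        new_refs = sample_stream[used:n_refs]     # only score new references
        used = n_refs
        for h in survivors:
            scores[h].extend(utility(h, r) for r in new_refs)

        # Bootstrap estimate of P(h has the highest mean utility).
        n = len(scores[survivors[0]])
        wins = {h: 0 for h in survivors}
        for _ in range(n_bootstrap):
            idx = [random.randrange(n) for _ in range(n)]
            best = max(survivors, key=lambda h: mean(scores[h][i] for i in idx))
            wins[best] += 1

        best_now = max(survivors, key=lambda h: mean(scores[h]))
        survivors = [h for h in survivors
                     if wins[h] / n_bootstrap >= alpha or h == best_now]
        if len(survivors) == 1:
            break
    return max(survivors, key=lambda h: mean(scores[h]))
```

Whereas standard MBR over |H| hypotheses and |R| references requires |H|·|R| utility calls, in this scheme most hypotheses are pruned after being scored against only a small number of references.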

Automated Fact-Checking in Dialogue: Are Specialized Models Needed?

Nov 14, 2023
Eric Chamoun, Marzieh Saeidi, Andreas Vlachos

Prior research has shown that typical fact-checking models for stand-alone claims struggle with claims made in dialogues. As a solution, fine-tuning these models on labelled dialogue data has been proposed. However, creating separate models for each use case is impractical, and we show that fine-tuning models for dialogue results in poor performance on typical fact-checking. To overcome this challenge, we present techniques that allow us to use the same models for both dialogue and typical fact-checking. These mainly focus on retrieval adaptation and transforming conversational inputs so that they can be accurately predicted by models trained on stand-alone claims. We demonstrate that a typical fact-checking model incorporating these techniques is competitive with state-of-the-art models fine-tuned for dialogue, while maintaining its accuracy on stand-alone claims.

* Accepted to EMNLP 2023 

QA-NatVer: Question Answering for Natural Logic-based Fact Verification

Oct 22, 2023
Rami Aly, Marek Strong, Andreas Vlachos

Fact verification systems assess a claim's veracity based on evidence. An important consideration in designing them is faithfulness, i.e. generating explanations that accurately reflect the reasoning of the model. Recent works have focused on natural logic, which operates directly on natural language by capturing the semantic relations between spans of a claim and its aligned evidence via set-theoretic operators. However, these approaches rely on substantial resources for training, which are only available for high-resource languages. To address this, we propose to use question answering to predict natural logic operators, taking advantage of the generalization capabilities of instruction-tuned language models. Thus, we obviate the need for annotated training data while still relying on a deterministic inference system. In a few-shot setting on FEVER, our approach outperforms the best baseline by $4.3$ accuracy points; the baselines include a state-of-the-art pre-trained seq2seq natural logic system and a state-of-the-art prompt-based classifier. Our system demonstrates its robustness and portability, achieving competitive performance on a counterfactual dataset and surpassing all approaches on a Danish verification dataset without further annotation. A human evaluation indicates that our approach produces more plausible proofs with fewer erroneous natural logic operators than previous natural logic-based systems.

* EMNLP 2023 
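
A rough sketch of the core idea of casting natural-logic operator prediction as question answering. The question templates, operator names, and the `yes_probability` scoring function are assumptions for illustration; they are not the paper's prompts, label set, or inference system.

```python
# Hypothetical question templates, one per natural logic operator.
OPERATOR_QUESTIONS = {
    "EQUIVALENCE": "Does '{claim_span}' mean the same as '{evidence_span}'?",
    "FORWARD_ENTAILMENT": "Is '{claim_span}' a more specific version of '{evidence_span}'?",
    "NEGATION": "Does '{claim_span}' contradict '{evidence_span}'?",
    "INDEPENDENCE": "Is '{claim_span}' unrelated to '{evidence_span}'?",
}

def predict_operator(claim_span, evidence_span, yes_probability):
    """Pick the operator whose question an instruction-tuned LM answers
    'yes' to most confidently. `yes_probability` is an assumed callable
    returning P('yes' | question)."""
    scores = {
        op: yes_probability(q.format(claim_span=claim_span,
                                     evidence_span=evidence_span))
        for op, q in OPERATOR_QUESTIONS.items()
    }
    return max(scores, key=scores.get)
```

The predicted operators would then be passed to a deterministic natural-logic proof system to derive the final verdict, as described in the abstract.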

AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web

May 24, 2023
Michael Schlichtkrull, Zhijiang Guo, Andreas Vlachos

Existing datasets for automated fact-checking have substantial limitations, such as relying on artificial claims, lacking annotations for evidence and intermediate reasoning, or including evidence published after the claim. In this paper we introduce AVeriTeC, a new dataset of 4,568 real-world claims covering fact-checks by 50 different organizations. Each claim is annotated with question-answer pairs supported by evidence available online, as well as textual justifications explaining how the evidence combines to produce a verdict. Through a multi-round annotation process, we avoid common pitfalls including context dependence, evidence insufficiency, and temporal leakage, and reach a substantial inter-annotator agreement of $\kappa=0.619$ on verdicts. We develop a baseline as well as an evaluation scheme for verifying claims through several question-answering steps against the open web.
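
To make the annotation structure concrete, here is a hypothetical record layout in Python; the field names and verdict labels are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceQA:
    question: str
    answer: str
    source_url: str                  # web page where the answer was found

@dataclass
class AveritecClaim:
    claim: str                       # real-world claim, as fact-checked
    claim_date: str                  # evidence must not post-date the claim (temporal leakage)
    qa_pairs: List[EvidenceQA] = field(default_factory=list)
    justification: str = ""          # how the evidence combines into a verdict
    verdict: str = ""                # e.g. supported / refuted / not enough evidence / conflicting
```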


The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who

Apr 27, 2023
Michael Schlichtkrull, Nedjma Ousidhoum, Andreas Vlachos

Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation. Nevertheless, few papers thoroughly discuss how. We document this by analysing 100 highly-cited papers, and annotating epistemic elements related to intended use, i.e., means, ends, and stakeholders. We find that narratives leaving out some of these aspects are common, that many papers propose inconsistent means and ends, and that the feasibility of suggested strategies rarely has empirical backing. We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback. Accordingly, we provide several recommendations for thinking and writing about the use of fact-checking artefacts.


Opening up Minds with Argumentative Dialogues

Jan 16, 2023
Youmna Farag, Charlotte O. Brand, Jacopo Amidei, Paul Piwek, Tom Stafford, Svetlana Stoyanchev, Andreas Vlachos

Recent research on argumentative dialogues has focused on persuading people to take some action, changing their stance on the topic of discussion, or winning debates. In this work, we focus on argumentative dialogues that aim to open up (rather than change) people's minds to help them become more understanding of views that are unfamiliar or in opposition to their own convictions. To this end, we present a dataset of 183 argumentative dialogues about 3 controversial topics: veganism, Brexit and COVID-19 vaccination. The dialogues were collected using the Wizard of Oz approach, where wizards leverage a knowledge-base of arguments to converse with participants. Open-mindedness is measured before and after engaging in the dialogue using a questionnaire from the psychology literature, and success of the dialogue is measured as the change in the participant's stance towards those who hold opinions different to theirs. We evaluate two dialogue models: a Wikipedia-based and an argument-based model. We show that while both models perform similarly in terms of opening up minds, the argument-based model is significantly better on other dialogue properties such as engagement and clarity.

* Findings of EMNLP 2022  

How to disagree well: Investigating the dispute tactics used on Wikipedia

Dec 16, 2022
Christine de Kock, Tom Stafford, Andreas Vlachos

Disagreements are frequently studied from the perspective of either detecting toxicity or analysing argument structure. We propose a framework of dispute tactics that unifies these two perspectives, as well as other dialogue acts which play a role in resolving disputes, such as asking questions and providing clarification. This framework includes a preferential ordering among rebuttal-type tactics, ranging from ad hominem attacks to refuting the central argument. Using this framework, we annotate 213 disagreements (3,865 utterances) from Wikipedia Talk pages. This allows us to investigate research questions around the tactics used in disagreements; for instance, we provide empirical validation of the approach to disagreement recommended by Wikipedia. We develop models for multilabel prediction of dispute tactics in an utterance, achieving the best performance with a transformer-based label powerset model. Adding an auxiliary task to incorporate the ordering of rebuttal tactics further yields a statistically significant increase. Finally, we show that these annotations can be used to provide useful additional signals to improve performance on the task of predicting escalation.

* Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing  
* Accepted to EMNLP 2022 (Long paper) 
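
As a concrete illustration of the label powerset formulation mentioned above, the following sketch shows the transformation from multilabel tactic annotations to single-label classes; the tactic names are taken from the abstract and the code is not the authors' implementation.

```python
def to_powerset_labels(tactic_sets):
    """Label powerset transformation for multilabel classification:
    each distinct set of tactics observed in training becomes one class,
    so a standard single-label classifier head can sit on top of a
    transformer encoder. Purely illustrative sketch."""
    classes = sorted({frozenset(s) for s in tactic_sets}, key=sorted)
    class_to_id = {c: i for i, c in enumerate(classes)}
    y = [class_to_id[frozenset(s)] for s in tactic_sets]
    return y, classes

# Example: utterances annotated with (possibly multiple) dispute tactics.
train_tactics = [
    {"ad hominem"},
    {"asking questions", "providing clarification"},
    {"refuting the central argument"},
    {"asking questions", "providing clarification"},
]
y, classes = to_powerset_labels(train_tactics)
# y maps each utterance to the id of its tactic set; a prediction is decoded
# back into the set of tactics via classes[pred_id].
```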

Natural Logic-guided Autoregressive Multi-hop Document Retrieval for Fact Verification

Dec 10, 2022
Rami Aly, Andreas Vlachos

A key component of fact verification is evidence retrieval, often from multiple documents. Recent approaches use dense representations and condition the retrieval of each document on the previously retrieved ones. The latter step is performed over all the documents in the collection, requiring storing their dense representations in an index, thus incurring a high memory footprint. An alternative paradigm is retrieve-and-rerank, where documents are retrieved using methods such as BM25, their sentences are reranked, and further documents are retrieved conditioned on these sentences, reducing the memory requirements. However, such approaches can be brittle as they rely on heuristics and assume hyperlinks between documents. We propose a novel retrieve-and-rerank method for multi-hop retrieval that consists of a retriever which jointly scores documents in the knowledge source and sentences from previously retrieved documents using an autoregressive formulation, and is guided by a proof system based on natural logic that dynamically terminates the retrieval process once the evidence is deemed sufficient. This method is competitive with current state-of-the-art methods on FEVER, HoVer and FEVEROUS-S, while using $5$ to $10$ times less memory than competing systems. Evaluation on an adversarial dataset indicates improved stability of our approach compared to commonly deployed threshold-based methods. Finally, the proof system helps humans predict model decisions correctly more often than using the evidence alone.

* EMNLP 2022 
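
A simplified sketch of the retrieve-and-rerank loop with natural-logic-guided termination described above. `bm25_search`, `rerank_sentences`, and `proof_is_complete` are assumed stand-ins for the BM25 retriever, the autoregressive joint scorer, and the natural-logic proof system; the query-expansion step is likewise illustrative.

```python
def multihop_retrieve(claim, bm25_search, rerank_sentences, proof_is_complete,
                      max_hops=3, k_docs=20, k_sents=5):
    """Illustrative multi-hop retrieve-and-rerank loop with early termination."""
    evidence = []
    query = claim
    for _ in range(max_hops):
        docs = bm25_search(query, k=k_docs)
        # Jointly score sentences from the newly retrieved documents and from
        # previously selected evidence, conditioned on the claim.
        evidence = rerank_sentences(claim, docs, evidence, k=k_sents)
        # The proof system dynamically terminates retrieval once the evidence
        # is deemed sufficient to prove or refute the claim.
        if proof_is_complete(claim, evidence):
            break
        # Condition the next hop on the evidence gathered so far.
        query = claim + " " + " ".join(evidence)
    return evidence
```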

What makes you change your mind? An empirical investigation in online group decision-making conversations

Jul 25, 2022
Georgi Karadzhov, Tom Stafford, Andreas Vlachos

People leverage group discussions to collaborate in order to solve complex tasks, e.g. in project meetings or hiring panels. By doing so, they engage in a variety of conversational strategies where they try to convince each other of the best approach and ultimately reach a decision. In this work, we investigate methods for detecting what makes someone change their mind. To this end, we leverage a recently introduced dataset containing group discussions of people collaborating to solve a task. To find out what makes someone change their mind, we incorporate various techniques such as neural text classification and language-agnostic change point detection. Evaluation of these methods shows that while the task is not trivial, the best way to approach it is using a language-aware model with learning-to-rank training. Finally, we examine the cues that the models develop as indicative of the cause of a change of mind.


Policy Compliance Detection via Expression Tree Inference

May 24, 2022
Neema Kotonya, Andreas Vlachos, Majid Yazdani, Lambert Mathias, Marzieh Saeidi

Policy Compliance Detection (PCD) is a task we encounter when reasoning over texts, e.g. legal frameworks. Previous work to address PCD relies heavily on modeling the task as a special case of Recognizing Textual Entailment. Entailment is applicable to the problem of PCD; however, viewing the policy as a single proposition, as opposed to multiple interlinked propositions, yields poor performance and lacks explainability. To address this challenge, more recent proposals for PCD have argued for decomposing policies into expression trees consisting of questions connected with logic operators. Question answering is used to obtain answers to these questions with respect to a scenario. Finally, the expression tree is evaluated in order to arrive at an overall solution. However, this work assumes expression trees are provided by experts, thus limiting its applicability to new policies. In this work, we learn how to infer expression trees automatically from policy texts. We ensure the validity of the inferred trees by introducing constrained decoding with a finite state automaton. We determine through automatic evaluation that 63% of the expression trees generated by our constrained generation model are logically equivalent to gold trees. Human evaluation shows that 88% of trees generated by our model are correct.
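
A minimal sketch of what evaluating such an expression tree against a scenario could look like. The node types, operator set, and example policy are illustrative assumptions rather than the paper's formalism, and the `answer` callable stands in for the question answering component.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Question:
    text: str                                 # leaf: a yes/no question about the scenario

@dataclass
class Node:
    op: str                                   # internal node: "AND", "OR" or "NOT"
    children: List[Union["Node", Question]]

def evaluate(tree, answer):
    """Evaluate an inferred expression tree; `answer` is an assumed QA
    callable mapping a question to True/False for the given scenario."""
    if isinstance(tree, Question):
        return answer(tree.text)
    values = [evaluate(c, answer) for c in tree.children]
    if tree.op == "AND":
        return all(values)
    if tree.op == "OR":
        return any(values)
    if tree.op == "NOT":
        return not values[0]
    raise ValueError(f"unknown operator {tree.op}")

# Hypothetical policy: "applicant must be over 18 and either employed or a student".
policy = Node("AND", [
    Question("Is the applicant over 18?"),
    Node("OR", [Question("Is the applicant employed?"),
                Question("Is the applicant a student?")]),
])
```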
