Isabelle Augenstein

Investigating the Impact of Model Instability on Explanations and Uncertainty

Feb 20, 2024
Sara Vera Marjanović, Isabelle Augenstein, Christina Lioma

Understanding Fine-grained Distortions in Reports of Scientific Findings

Feb 19, 2024
Amelie Wührl, Dustin Wright, Roman Klinger, Isabelle Augenstein

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

Jan 31, 2024
Erik Arakelyan, Zhaoqi Liu, Isabelle Augenstein

Grammatical Gender's Influence on Distributional Semantics: A Causal Perspective

Nov 30, 2023
Karolina Stańczak, Kevin Du, Adina Williams, Isabelle Augenstein, Ryan Cotterell

Factcheck-GPT: End-to-End Fine-Grained Document-Level Fact-Checking and Correction of LLM Output

Nov 16, 2023
Yuxia Wang, Revanth Gangi Reddy, Zain Muhammad Mujahid, Arnav Arora, Aleksandr Rubashevskii, Jiahui Geng, Osama Mohammed Afzal, Liangming Pan, Nadav Borenstein, Aditya Pillai, Isabelle Augenstein, Iryna Gurevych, Preslav Nakov

Social Bias Probing: Fairness Benchmarking for Language Models

Nov 15, 2023
Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein

PHD: Pixel-Based Language Modeling of Historical Documents

Nov 04, 2023
Nadav Borenstein, Phillip Rust, Desmond Elliott, Isabelle Augenstein

People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection

Nov 02, 2023
Indira Sen, Dennis Assenmacher, Mattia Samory, Isabelle Augenstein, Wil van der Aalst, Claudia Wagner

Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions

Oct 23, 2023
Lucie-Aimée Kaffee, Arnav Arora, Isabelle Augenstein
