
Isabelle Augenstein

Understanding Fine-grained Distortions in Reports of Scientific Findings

Feb 19, 2024

Semantic Sensitivities and Inconsistent Predictions: Measuring the Fragility of NLI Models

Jan 31, 2024

Grammatical Gender's Influence on Distributional Semantics: A Causal Perspective

Nov 30, 2023

Factcheck-GPT: End-to-End Fine-Grained Document-Level Fact-Checking and Correction of LLM Output

Nov 16, 2023

Social Bias Probing: Fairness Benchmarking for Language Models

Nov 15, 2023

PHD: Pixel-Based Language Modeling of Historical Documents

Nov 04, 2023

People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection

Nov 02, 2023

Why Should This Article Be Deleted? Transparent Stance Detection in Multilingual Wikipedia Editor Discussions

Oct 23, 2023

Explaining Interactions Between Text Spans

Oct 20, 2023

Factuality Challenges in the Era of Large Language Models

Oct 10, 2023