Jordan Boyd-Graber

University of Maryland

CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering

Jan 24, 2024

How the Advent of Ubiquitous Large Language Models both Stymie and Turbocharge Dynamic Adversarial Question Generation

Jan 20, 2024

Towards Pragmatic Awareness in Question Answering: A Case Study in Maternal and Infant Health

Nov 16, 2023

Labeled Interactive Topic Models

Nov 15, 2023

Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines

Oct 20, 2023

Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong

Oct 19, 2023

MegaWika: Millions of reports and their sources across 50 diverse languages

Jul 13, 2023

Mixture of Prompt Experts for Generalizable and Interpretable Question Answering

May 24, 2023

InteractiveIE: Towards Assessing the Strength of Human-AI Collaboration in Improving the Performance of Information Extraction

May 24, 2023

Cheater's Bowl: Human vs. Computer Search Strategies for Open-Domain Question Answering

Nov 15, 2022