
Jordan Boyd-Graber

University of Maryland

A SMART Mnemonic Sounds like "Glue Tonic": Mixing LLMs with Student Feedback to Make Mnemonic Learning Stick

Jun 21, 2024

KARL: Knowledge-Aware Retrieval and Representations aid Retention and Learning in Students

Feb 19, 2024

Beyond Automated Evaluation Metrics: Evaluating Topic Models On Practical Social Science Content Analysis Tasks

Jan 29, 2024

CFMatch: Aligning Automated Answer Equivalence Evaluation with Expert Judgments For Open-Domain Question Answering

Jan 24, 2024

How the Advent of Ubiquitous Large Language Models both Stymie and Turbocharge Dynamic Adversarial Question Generation

Jan 20, 2024

Towards Pragmatic Awareness in Question Answering: A Case Study in Maternal and Infant Health

Nov 16, 2023

Labeled Interactive Topic Models

Nov 15, 2023

Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines

Oct 20, 2023

Large Language Models Help Humans Verify Truthfulness -- Except When They Are Convincingly Wrong

Oct 19, 2023

MegaWika: Millions of reports and their sources across 50 diverse languages

Jul 13, 2023