Hannaneh Hajishirzi

CopyBench: Measuring Literal and Non-Literal Reproduction of Copyright-Protected Text in Language Model Generation

Jul 09, 2024

Decoding-Time Language Model Alignment with Multiple Objectives

Jun 27, 2024

Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback

Jun 13, 2024

OLMES: A Standard for Language Model Evaluations

Jun 12, 2024

SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature

Jun 10, 2024

Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning

Jun 10, 2024

Getting it Right: Improving Spatial Consistency in Text-to-Image Models

Apr 01, 2024

RewardBench: Evaluating Reward Models for Language Modeling

Mar 20, 2024

Reliable, Adaptable, and Attributable Language Models with Retrieval

Mar 05, 2024

Set the Clock: Temporal Alignment of Pretrained Language Models

Feb 26, 2024