Noah A. Smith

Paul G. Allen School of Computer Science & Engineering, University of Washington, Allen Institute for Artificial Intelligence

Evaluating $n$-Gram Novelty of Language Models Using Rusty-DAWG

Jun 18, 2024

Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback

Jun 13, 2024

What Can Natural Language Processing Do for Peer Review?

May 10, 2024

Learning Syntax Without Planting Trees: Understanding When and Why Transformers Generalize Hierarchically

Apr 25, 2024

BLINK: Multimodal Large Language Models Can See but Not Perceive

Apr 18, 2024

A Taxonomy of Ambiguity Types for NLP

Mar 21, 2024

RewardBench: Evaluating Reward Models for Language Modeling

Mar 20, 2024

Third-Party Language Model Performance Prediction from Instruction

Mar 19, 2024

Encode Once and Decode in Parallel: Efficient Transformer Decoding

Mar 19, 2024

Set the Clock: Temporal Alignment of Pretrained Language Models

Feb 26, 2024