
Hailey Schoelkopf

The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources

Jun 26, 2024

From Decoding to Meta-Generation: Inference-time Algorithms for Large Language Models

Jun 24, 2024

Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?

Jun 06, 2024

Lessons from the Trenches on Reproducible Evaluation of Language Models

May 23, 2024

Social Choice for AI Alignment: Dealing with Diverse Human Feedback

Apr 16, 2024

Suppressing Pink Elephants with Direct Principle Feedback

Feb 13, 2024

Llemma: An Open Language Model For Mathematics

Oct 16, 2023

GAIA Search: Hugging Face and Pyserini Interoperability for NLP Training Data Exploration

Jun 02, 2023

StarCoder: may the source be with you!

May 09, 2023

Emergent and Predictable Memorization in Large Language Models

Apr 21, 2023