Dylan Slack

Learning Goal-Conditioned Representations for Language Reward Models
Jul 18, 2024

A Careful Examination of Large Language Model Performance on Grade School Arithmetic
May 02, 2024

Post Hoc Explanations of Language Models Can Improve Language Models
May 19, 2023

TABLET: Learning From Instructions For Tabular Data
Apr 25, 2023

TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues
Jul 08, 2022

SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition
Feb 10, 2022

Rethinking Explainability as a Dialogue: A Practitioner's Perspective
Feb 03, 2022

Feature Attributions and Counterfactual Explanations Can Be Manipulated
Jun 25, 2021

On the Lack of Robust Interpretability of Neural Text Classifiers
Jun 08, 2021

Counterfactual Explanations Can Be Manipulated
Jun 04, 2021