Dylan Hadfield-Menell

Adversarial Training with Voronoi Constraints

May 02, 2019
Marc Khoury, Dylan Hadfield-Menell

Conservative Agency via Attainable Utility Preservation

Feb 26, 2019
Alexander Matt Turner, Dylan Hadfield-Menell, Prasad Tadepalli

The Assistive Multi-Armed Bandit

Jan 24, 2019
Lawrence Chan, Dylan Hadfield-Menell, Siddhartha Srinivasa, Anca Dragan

On the Utility of Model Learning in HRI

Jan 04, 2019
Rohan Choudhury*, Gokul Swamy*, Dylan Hadfield-Menell, Anca Dragan

Human-AI Learning Performance in Multi-Armed Bandits

Dec 21, 2018
Ravi Pandya, Sandy H. Huang, Dylan Hadfield-Menell, Anca D. Dragan

Legible Normativity for AI Alignment: The Value of Silly Rules

Nov 03, 2018
Dylan Hadfield-Menell, McKane Andrus, Gillian K. Hadfield

On the Geometry of Adversarial Examples

Nov 01, 2018
Marc Khoury, Dylan Hadfield-Menell

Active Inverse Reward Design

Sep 09, 2018
Sören Mindermann, Rohin Shah, Adam Gleave, Dylan Hadfield-Menell

An Efficient, Generalized Bellman Update For Cooperative Inverse Reinforcement Learning

Jun 11, 2018
Dhruv Malik, Malayandi Palaniappan, Jaime F. Fisac, Dylan Hadfield-Menell, Stuart Russell, Anca D. Dragan

Simplifying Reward Design through Divide-and-Conquer

Jun 07, 2018
Ellis Ratner, Dylan Hadfield-Menell, Anca D. Dragan
