Anca D. Dragan

Quantifying Assistive Robustness Via the Natural-Adversarial Frontier

Oct 16, 2023
Jerry Zhi-Yang He, Zackory Erickson, Daniel S. Brown, Anca D. Dragan

Confronting Reward Model Overoptimization with Constrained RLHF

Oct 10, 2023
Ted Moskovitz, Aaditya K. Singh, DJ Strouse, Tuomas Sandholm, Ruslan Salakhutdinov, Anca D. Dragan, Stephen McAleer

Bootstrapping Adaptive Human-Machine Interfaces with Offline Reinforcement Learning

Sep 07, 2023
Jensen Gao, Siddharth Reddy, Glen Berseth, Anca D. Dragan, Sergey Levine

Contextual Reliability: When Different Features Matter in Different Contexts

Jul 19, 2023
Gaurav Ghosal, Amrith Setlur, Daniel S. Brown, Anca D. Dragan, Aditi Raghunathan

Aligning Robot and Human Representations

Feb 03, 2023
Andreea Bobu, Andi Peng, Pulkit Agrawal, Julie Shah, Anca D. Dragan

Benchmarks and Algorithms for Offline Preference-Based Reward Learning

Jan 03, 2023
Daniel Shin, Anca D. Dragan, Daniel S. Brown

SIRL: Similarity-based Implicit Representation Learning

Jan 03, 2023
Andreea Bobu, Yi Liu, Rohin Shah, Daniel S. Brown, Anca D. Dragan

Learning Representations that Enable Generalization in Assistive Tasks

Dec 05, 2022
Jerry Zhi-Yang He, Aditi Raghunathan, Daniel S. Brown, Zackory Erickson, Anca D. Dragan

The Effect of Modeling Human Rationality Level on Learning Rewards from Multiple Feedback Types

Aug 23, 2022
Gaurav R. Ghosal, Matthew Zurek, Daniel S. Brown, Anca D. Dragan
