Scott Niekum

PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards

Jul 30, 2020
Prasoon Goyal, Scott Niekum, Raymond J. Mooney

Bayesian Robust Optimization for Imitation Learning

Jul 24, 2020
Daniel S. Brown, Scott Niekum, Marek Petrik

Efficiently Guiding Imitation Learning Algorithms with Human Gaze

Mar 05, 2020
Akanksha Saran, Ruohan Zhang, Elaine Schaertl Short, Scott Niekum

Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences

Feb 21, 2020
Daniel S. Brown, Russell Coleman, Ravi Srinivasan, Scott Niekum

Local Nonparametric Meta-Learning

Feb 09, 2020
Wonjoon Goo, Scott Niekum

Deep Bayesian Reward Learning from Preferences

Dec 10, 2019
Daniel S. Brown, Scott Niekum

Learning Hybrid Object Kinematics for Efficient Hierarchical Planning Under Uncertainty

Jul 21, 2019
Ajinkya Jain, Scott Niekum

Understanding Teacher Gaze Patterns for Robot Learning

Jul 16, 2019
Akanksha Saran, Elaine Schaertl Short, Andrea Thomaz, Scott Niekum

Ranking-Based Reward Extrapolation without Rankings

Jul 13, 2019
Daniel S. Brown, Wonjoon Goo, Scott Niekum
