Scott Niekum

A Review of Robot Learning for Manipulation: Challenges, Representations, and Algorithms

Jul 11, 2019
Oliver Kroemer, Scott Niekum, George Konidaris

Ranking-Based Reward Extrapolation without Rankings

Jul 09, 2019
Daniel S. Brown, Wonjoon Goo, Scott Niekum

Hypothesis-Driven Skill Discovery for Hierarchical Deep Reinforcement Learning

May 27, 2019
Caleb Chuck, Supawit Chockchowwat, Scott Niekum

Extrapolating Beyond Suboptimal Demonstrations via Inverse Reinforcement Learning from Observations

May 14, 2019
Daniel S. Brown, Wonjoon Goo, Prabhat Nagarajan, Scott Niekum

Uncertainty-Aware Data Aggregation for Deep Imitation Learning

May 07, 2019
Yuchen Cui, David Isele, Scott Niekum, Kikuo Fujimura

Using Natural Language for Reward Shaping in Reinforcement Learning

Mar 05, 2019
Prasoon Goyal, Scott Niekum, Raymond J. Mooney

Risk-Aware Active Inverse Reinforcement Learning

Jan 08, 2019
Daniel S. Brown, Yuchen Cui, Scott Niekum

LAAIR: A Layered Architecture for Autonomous Interactive Robots

Nov 09, 2018
Yuqian Jiang, Nick Walker, Minkyu Kim, Nicolas Brissonneau, Daniel S. Brown, Justin W. Hart, Scott Niekum, Luis Sentis, Peter Stone
