Daniel Graves

Learning robust driving policies without online exploration

Mar 15, 2021
Daniel Graves, Nhat M. Nguyen, Kimia Hassanzadeh, Jun Jin, Jun Luo

Diverse Auto-Curriculum is Critical for Successful Real-World Multiagent Learning Systems

Feb 16, 2021
Yaodong Yang, Jun Luo, Ying Wen, Oliver Slumbers, Daniel Graves, Haitham Bou Ammar, Jun Wang, Matthew E. Taylor

LISPR: An Options Framework for Policy Reuse with Reinforcement Learning

Dec 29, 2020
Daniel Graves, Jun Jin, Jun Luo

Offline Learning of Counterfactual Perception as Prediction for Real-World Robotic Reinforcement Learning

Nov 11, 2020
Jun Jin, Daniel Graves, Cameron Haigh, Jun Luo, Martin Jagersand

SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving

Nov 01, 2020
Ming Zhou, Jun Luo, Julian Villella, Yaodong Yang, David Rusu, Jiayu Miao, Weinan Zhang, Montgomery Alban, Iman Fadakar, Zheng Chen, Aurora Chongxi Huang, Ying Wen, Kimia Hassanzadeh, Daniel Graves, Dong Chen, Zhengbang Zhu, Nhat Nguyen, Mohamed Elsayed, Kun Shao, Sanjeevan Ahilan, Baokuan Zhang, Jiannan Wu, Zhengang Fu, Kasra Rezaee, Peyman Yadmellat, Mohsen Rohani, Nicolas Perez Nieves, Yihan Ni, Seyedershad Banijamali, Alexander Cowen Rivers, Zheng Tian, Daniel Palenicek, Haitham bou Ammar, Hongbo Zhang, Wulong Liu, Jianye Hao, Jun Wang

Affordance as general value function: A computational model

Oct 27, 2020
Daniel Graves, Johannes Günther, Jun Luo

What About Taking Policy as Input of Value Function: Policy-extended Value Function Approximator

Oct 19, 2020
Hongyao Tang, Zhaopeng Meng, Jianye Hao, Chen Chen, Daniel Graves, Dong Li, Wulong Liu, Yaodong Yang

Learning predictive representations in autonomous driving to improve deep reinforcement learning

Jun 26, 2020
Daniel Graves, Nhat M. Nguyen, Kimia Hassanzadeh, Jun Jin
