
Peter Stone

UT Austin, Sony AI

Learning Perceptual Hallucination for Multi-Robot Navigation in Narrow Hallways

Sep 27, 2022

BOME! Bilevel Optimization Made Easy: A Simple First-Order Approach

Sep 19, 2022

Autonomous Ground Navigation in Highly Constrained Spaces: Lessons learned from The BARN Challenge at ICRA 2022

Aug 22, 2022

Metric Residual Networks for Sample Efficient Goal-conditioned Reinforcement Learning

Aug 17, 2022

Causal Dynamics Learning for Task-Independent State Abstraction

Jun 27, 2022

Value Function Decomposition for Iterative Design of Reinforcement Learning Agents

Jun 24, 2022

High-Speed Accurate Robot Control using Learned Forward Kinodynamics and Non-linear Least Squares Optimization

Jun 16, 2022

Models of human preference for learning reward functions

Jun 05, 2022

DM$^2$: Distributed Multi-Agent Reinforcement Learning for Distribution Matching

Jun 01, 2022

COOPERNAUT: End-to-End Driving with Cooperative Perception for Networked Vehicles

May 04, 2022