Fabio Ramos

NVIDIA, University of Sydney

Guided Uncertainty-Aware Policy Optimization: Combining Learning and Model-Based Strategies for Sample-Efficient Policy Learning

May 26, 2020

Estimating Motion Uncertainty with Bayesian ICP

Apr 16, 2020

Intrinsic Exploration as Multi-Objective RL

Apr 06, 2020

Inferring the Material Properties of Granular Media for Robotic Tasks

Mar 19, 2020

DISCO: Double Likelihood-free Inference Stochastic Control

Feb 25, 2020

Reinforcement Learning with Probabilistically Complete Exploration

Jan 20, 2020

Semi-supervised Learning Approach to Generate Neuroimaging Modalities with Adversarial Training

Dec 09, 2019

Dynamic Hilbert Maps: Real-Time Occupancy Predictions in Changing Environment

Dec 04, 2019

Bayesian Curiosity for Efficient Exploration in Reinforcement Learning

Nov 20, 2019

IRIS: Implicit Reinforcement without Interaction at Scale for Learning Control from Offline Robot Manipulation Data

Nov 13, 2019