Animesh Garg

Neural Shape Mating: Self-Supervised Object Assembly with Adversarial Shape Priors

May 30, 2022

Accelerated Policy Learning with Parallel Differentiable Simulation

Apr 14, 2022

Value Gradient weighted Model-Based Reinforcement Learning

Apr 04, 2022

X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval

Mar 28, 2022

DiSECt: A Differentiable Simulator for Parameter Inference and Control in Robotic Cutting

Mar 19, 2022

Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning

Feb 23, 2022

Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics

Nov 02, 2021

Convergence and Optimality of Policy Gradient Methods in Weakly Smooth Settings

Oct 30, 2021

Reinforcement Learning in Factored Action Spaces using Tensor Decompositions

Oct 27, 2021

Dynamic Bottleneck for Robust Self-Supervised Exploration

Oct 25, 2021