Abhishek Gupta

Montreal AI Ethics Institute

Learning in Sinusoidal Spaces with Physics-Informed Neural Networks

Sep 20, 2021
Jian Cheng Wong, Chinchun Ooi, Abhishek Gupta, Yew-Soon Ong

The State of AI Ethics Report (Volume 5)

Aug 09, 2021
Abhishek Gupta, Connor Wright, Marianna Bergamaschi Ganapini, Masa Sweidan, Renjie Butalid

Fully Autonomous Real-World Reinforcement Learning for Mobile Manipulation

Aug 03, 2021
Charles Sun, Jędrzej Orbik, Coline Devin, Brian Yang, Abhishek Gupta, Glen Berseth, Sergey Levine

ReLMM: Practical RL for Learning Mobile Manipulation Skills Using Only Onboard Sensors

Jul 28, 2021
Charles Sun, Jędrzej Orbik, Coline Devin, Brian Yang, Abhishek Gupta, Glen Berseth, Sergey Levine

Persistent Reinforcement Learning via Subgoal Curricula

Jul 27, 2021
Archit Sharma, Abhishek Gupta, Sergey Levine, Karol Hausman, Chelsea Finn

MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning

Jul 18, 2021
Kevin Li, Abhishek Gupta, Ashwin Reddy, Vitchyr Pong, Aurick Zhou, Justin Yu, Sergey Levine

Weighted Gaussian Process Bandits for Non-stationary Environments

Jul 06, 2021
Yuntian Deng, Xingyu Zhou, Baekjin Kim, Ambuj Tewari, Abhishek Gupta, Ness Shroff

Which Mutual-Information Representation Learning Objectives are Sufficient for Control?

Jun 14, 2021
Kate Rakelly, Abhishek Gupta, Carlos Florensa, Sergey Levine

Safe Model-based Off-policy Reinforcement Learning for Eco-Driving in Connected and Automated Hybrid Electric Vehicles

May 25, 2021
Zhaoxuan Zhu, Nicola Pivaro, Shobhit Gupta, Abhishek Gupta, Marcello Canova
