Ashvin Nair

Learning on the Job: Self-Rewarding Offline-to-Online Finetuning for Industrial Insertion of Novel Connectors from Vision

Oct 27, 2022
Ashvin Nair, Brian Zhu, Gokul Narayanan, Eugen Solowjow, Sergey Levine


Generalization with Lossy Affordances: Leveraging Broad Offline Data for Learning Visuomotor Tasks

Oct 12, 2022
Kuan Fang, Patrick Yin, Ashvin Nair, Homer Walke, Gengchen Yan, Sergey Levine


Planning to Practice: Efficient Online Fine-Tuning by Composing Goals in Latent Space

May 17, 2022
Kuan Fang, Patrick Yin, Ashvin Nair, Sergey Levine


Bisimulation Makes Analogies in Goal-Conditioned Reinforcement Learning

Apr 28, 2022
Philippe Hansen-Estruch, Amy Zhang, Ashvin Nair, Patrick Yin, Sergey Levine


Offline Reinforcement Learning with Implicit Q-Learning

Oct 12, 2021
Ilya Kostrikov, Ashvin Nair, Sergey Levine


Offline Meta-Reinforcement Learning with Online Self-Supervision

Jul 19, 2021
Vitchyr H. Pong, Ashvin Nair, Laura Smith, Catherine Huang, Sergey Levine


What Can I Do Here? Learning New Skills by Imagining Visual Affordances

Jun 13, 2021
Alexander Khazatsky, Ashvin Nair, Daniel Jing, Sergey Levine


DisCo RL: Distribution-Conditioned Reinforcement Learning for General-Purpose Policies

Apr 23, 2021
Soroush Nasiriany, Vitchyr H. Pong, Ashvin Nair, Alexander Khazatsky, Glen Berseth, Sergey Levine
