Ruslan Salakhutdinov

Conditional Contrastive Learning with Kernel
Feb 14, 2022
Yao-Hung Hubert Tsai, Tianqin Li, Martin Q. Ma, Han Zhao, Kun Zhang, Louis-Philippe Morency, Ruslan Salakhutdinov

Learning Weakly-Supervised Contrastive Representations
Feb 14, 2022
Yao-Hung Hubert Tsai, Tianqin Li, Weixin Liu, Peiyuan Liao, Ruslan Salakhutdinov, Louis-Philippe Morency

SEAL: Self-supervised Embodied Active Learning using Exploration and 3D Consistency
Dec 02, 2021
Devendra Singh Chaplot, Murtaza Dalal, Saurabh Gupta, Jitendra Malik, Ruslan Salakhutdinov

Accelerating Robotic Reinforcement Learning via Parameterized Action Primitives
Oct 28, 2021
Murtaza Dalal, Deepak Pathak, Ruslan Salakhutdinov

C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks
Oct 22, 2021
Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez

FILM: Following Instructions in Language with Modular Methods
Oct 18, 2021
So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, Ruslan Salakhutdinov

ConditionalQA: A Complex Reading Comprehension Dataset with Conditional Answers
Oct 13, 2021
Haitian Sun, William W. Cohen, Ruslan Salakhutdinov

Recurrent Model-Free RL is a Strong Baseline for Many POMDPs
Oct 11, 2021
Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov

Mismatched No More: Joint Model-Policy Optimization for Model-Based RL
Oct 06, 2021
Benjamin Eysenbach, Alexander Khazatsky, Sergey Levine, Ruslan Salakhutdinov

The Information Geometry of Unsupervised Reinforcement Learning
Oct 06, 2021
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine