Ruslan Salakhutdinov

Stabilizing Contrastive RL: Techniques for Offline Goal Reaching

Jun 06, 2023
Chongyi Zheng, Benjamin Eysenbach, Homer Walke, Patrick Yin, Kuan Fang, Ruslan Salakhutdinov, Sergey Levine

Generating Images with Multimodal Language Models

May 26, 2023
Jing Yu Koh, Daniel Fried, Ruslan Salakhutdinov

Imitating Task and Motion Planning with Visuomotor Transformers

May 25, 2023
Murtaza Dalal, Ajay Mandlekar, Caelan Garrett, Ankur Handa, Ruslan Salakhutdinov, Dieter Fox

SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning

May 24, 2023
Yue Wu, So Yeon Min, Shrimai Prabhumoye, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li

Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents

May 07, 2023
Yue Wu, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Yuanzhi Li, Tom Mitchell, Shrimai Prabhumoye

Quantifying & Modeling Feature Interactions: An Information Decomposition Framework

Feb 23, 2023
Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency

Effective Data Augmentation With Diffusion Models

Feb 07, 2023
Brandon Trabucco, Kyle Doherty, Max Gurinas, Ruslan Salakhutdinov

Grounding Language Models to Images for Multimodal Generation

Jan 31, 2023
Jing Yu Koh, Ruslan Salakhutdinov, Daniel Fried

Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment

Dec 20, 2022
Rohan Pandey, Rulin Shao, Paul Pu Liang, Ruslan Salakhutdinov, Louis-Philippe Morency
