Ruslan Salakhutdinov

Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents

May 07, 2023

Quantifying & Modeling Feature Interactions: An Information Decomposition Framework

Feb 23, 2023

Effective Data Augmentation With Diffusion Models

Feb 07, 2023

Grounding Language Models to Images for Multimodal Generation

Jan 31, 2023

Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment

Dec 20, 2022

Object Goal Navigation with End-to-End Self-Supervision

Dec 09, 2022

Nano: Nested Human-in-the-Loop Reward Learning for Few-shot Language Model Control

Nov 10, 2022

Don't Copy the Teacher: Data and Model Challenges in Embodied Dialogue

Oct 11, 2022

Uncertainty Quantification with Pre-trained Language Models: A Large-Scale Empirical Analysis

Oct 10, 2022

Paraphrasing Is All You Need for Novel Object Captioning

Sep 25, 2022