Ruslan Salakhutdinov


Concurrent Meta Reinforcement Learning

Mar 07, 2019
Emilio Parisotto, Soham Ghosh, Sai Bhargav Yalamanchi, Varsha Chinnaobireddy, Yuhuai Wu, Ruslan Salakhutdinov

The Omniglot Challenge: A 3-Year Progress Report

Feb 09, 2019
Brenden M. Lake, Ruslan Salakhutdinov, Joshua B. Tenenbaum

Embodied Multimodal Multitask Learning

Feb 04, 2019
Devendra Singh Chaplot, Lisa Lee, Ruslan Salakhutdinov, Devi Parikh, Dhruv Batra

Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context

Jan 18, 2019
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov

Connecting the Dots Between MLE and RL for Sequence Generation

Nov 24, 2018
Bowen Tan, Zhiting Hu, Zichao Yang, Ruslan Salakhutdinov, Eric Xing

Stackelberg GAN: Towards Provable Minimax Equilibrium via Multi-Generator Architectures

Nov 19, 2018
Hongyang Zhang, Susu Xu, Jiantao Jiao, Pengtao Xie, Ruslan Salakhutdinov, Eric P. Xing

On the Complexity of Exploration in Goal-Driven Navigation

Nov 16, 2018
Maruan Al-Shedivat, Lisa Lee, Ruslan Salakhutdinov, Eric Xing

Transformation Autoregressive Networks

Oct 23, 2018
Junier B. Oliva, Avinava Dubey, Manzil Zaheer, Barnabás Póczos, Ruslan Salakhutdinov, Eric P. Xing, Jeff Schneider

Point Cloud GAN

Oct 13, 2018
Chun-Liang Li, Manzil Zaheer, Yang Zhang, Barnabas Poczos, Ruslan Salakhutdinov

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

Sep 25, 2018
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, Christopher D. Manning
