Rishabh Agarwal

Google Research Brain Team

Bigger, Better, Faster: Human-level Atari with human-level efficiency

Jun 09, 2023
Max Schwarzer, Johan Obando-Ceron, Aaron Courville, Marc Bellemare, Rishabh Agarwal, Pablo Samuel Castro

Proto-Value Networks: Scaling Representation Learning with Auxiliary Tasks

Apr 25, 2023
Jesse Farebrother, Joshua Greaves, Rishabh Agarwal, Charline Le Lan, Ross Goroshin, Pablo Samuel Castro, Marc G. Bellemare

The Dormant Neuron Phenomenon in Deep Reinforcement Learning

Feb 24, 2023
Ghada Sokar, Rishabh Agarwal, Pablo Samuel Castro, Utku Evci

Revisiting Bellman Errors for Offline Model Selection

Jan 31, 2023
Joshua P. Zitovsky, Daniel de Marchi, Rishabh Agarwal, Michael R. Kosorok

A Novel Stochastic Gradient Descent Algorithm for Learning Principal Subspaces

Dec 08, 2022
Charline Le Lan, Joshua Greaves, Jesse Farebrother, Mark Rowland, Fabian Pedregosa, Rishabh Agarwal, Marc G. Bellemare

Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes

Nov 28, 2022
Aviral Kumar, Rishabh Agarwal, Xinyang Geng, George Tucker, Sergey Levine

Beyond Tabula Rasa: Reincarnating Reinforcement Learning

Jun 03, 2022
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare

On the Generalization of Representations in Reinforcement Learning

Mar 01, 2022
Charline Le Lan, Stephen Tu, Adam Oberman, Rishabh Agarwal, Marc G. Bellemare

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

Dec 09, 2021
Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

Deep Reinforcement Learning at the Edge of the Statistical Precipice

Aug 30, 2021
Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron Courville, Marc G. Bellemare
