Aravind Srinivas

Decision Transformer: Reinforcement Learning via Sequence Modeling

Jun 24, 2021

VideoGPT: Video Generation using VQ-VAE and Transformers

Apr 20, 2021

Scaling Local Self-Attention for Parameter Efficient Visual Backbones

Mar 30, 2021

Revisiting ResNets: Improved Training and Scaling Strategies

Mar 13, 2021

Improving Computational Efficiency in Visual Reinforcement Learning via Stored Embeddings

Mar 04, 2021

Bottleneck Transformers for Visual Recognition

Jan 27, 2021

Reinforcement Learning with Latent Flow

Jan 06, 2021

Simple Copy-Paste is a Strong Data Augmentation Method for Instance Segmentation

Dec 13, 2020

D2RL: Deep Dense Architectures in Reinforcement Learning

Oct 19, 2020

Evaluating Self-Supervised Pretraining Without Using Labels

Sep 16, 2020