
Anurag Ajay

Compositional Foundation Models for Hierarchical Planning

Sep 21, 2023
Anurag Ajay, Seungwook Han, Yilun Du, Shuang Li, Abhi Gupta, Tommi Jaakkola, Josh Tenenbaum, Leslie Kaelbling, Akash Srivastava, Pulkit Agrawal

Parallel $Q$-Learning: Scaling Off-policy Reinforcement Learning under Massively Parallel Simulation

Jul 24, 2023
Zechu Li, Tao Chen, Zhang-Wei Hong, Anurag Ajay, Pulkit Agrawal

Statistical Learning under Heterogeneous Distribution Shift

Feb 27, 2023
Max Simchowitz, Anurag Ajay, Pulkit Agrawal, Akshay Krishnamurthy

Is Conditional Generative Modeling all you need for Decision-Making?

Dec 07, 2022
Anurag Ajay, Yilun Du, Abhi Gupta, Joshua Tenenbaum, Tommi Jaakkola, Pulkit Agrawal

Distributionally Adaptive Meta Reinforcement Learning

Oct 06, 2022
Anurag Ajay, Abhishek Gupta, Dibya Ghosh, Sergey Levine, Pulkit Agrawal

Offline RL Policies Should be Trained to be Adaptive

Jul 05, 2022
Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine

Overcoming the Spectral Bias of Neural Value Approximation

Jun 09, 2022
Ge Yang, Anurag Ajay, Pulkit Agrawal

OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning

Oct 27, 2020
Anurag Ajay, Aviral Kumar, Pulkit Agrawal, Sergey Levine, Ofir Nachum
