
Rohan Chitnis

Sequential Decision-Making for Inline Text Autocomplete

Mar 21, 2024

Score Models for Offline Goal-Conditioned Reinforcement Learning

Nov 03, 2023

IQL-TD-MPC: Implicit Q-Learning for Hierarchical Model Predictive Control

Jun 01, 2023

Sequence Modeling is a Robust Contender for Offline Reinforcement Learning

May 26, 2023

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Aug 16, 2022

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Mar 17, 2022

Towards Optimal Correlational Object Search

Oct 19, 2021

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

Sep 30, 2021

Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning

May 28, 2021

Learning Symbolic Operators for Task and Motion Planning

Feb 28, 2021