Leslie Pack Kaelbling

SE(3)-Equivariant Relational Rearrangement with Neural Descriptor Fields

Nov 17, 2022

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Aug 16, 2022

Learning Neuro-Symbolic Skills for Bilevel Planning

Jun 21, 2022

Fully Persistent Spatial Data Structures for Efficient Queries in Path-Dependent Motion Planning Applications

Jun 06, 2022

PG3: Policy-Guided Planning for Generalized Policy Generation

Apr 21, 2022

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Mar 17, 2022

Representation, learning, and planning algorithms for geometric task and motion planning

Mar 09, 2022

Specifying and achieving goals in open uncertain robot-manipulation domains

Dec 21, 2021

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

Sep 30, 2021

Discovering State and Action Abstractions for Generalized Task and Motion Planning

Sep 23, 2021