Leslie Pack Kaelbling

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Mar 17, 2022
Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Joshua Tenenbaum

Representation, learning, and planning algorithms for geometric task and motion planning

Mar 09, 2022
Beomjoon Kim, Luke Shimanuki, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Specifying and achieving goals in open uncertain robot-manipulation domains

Dec 21, 2021
Leslie Pack Kaelbling, Alex LaGrassa, Tomás Lozano-Pérez

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

Sep 30, 2021
Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

Discovering State and Action Abstractions for Generalized Task and Motion Planning

Sep 23, 2021
Aidan Curtis, Tom Silver, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Long-Horizon Manipulation of Unknown Objects via Task and Motion Planning with Estimated Affordances

Aug 10, 2021
Aidan Curtis, Xiaolin Fang, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Caelan Reed Garrett

Temporal and Object Quantification Networks

Jun 10, 2021
Jiayuan Mao, Zhezheng Luo, Chuang Gan, Joshua B. Tenenbaum, Jiajun Wu, Leslie Pack Kaelbling, Tomer D. Ullman

Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning

May 28, 2021
Rohan Chitnis, Tom Silver, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Learning When to Quit: Meta-Reasoning for Motion Planning

Mar 07, 2021
Yoonchang Sung, Leslie Pack Kaelbling, Tomás Lozano-Pérez
