Tom Silver


Practice Makes Perfect: Planning to Learn Skill Parameter Policies

Feb 22, 2024
Nishanth Kumar, Tom Silver, Willie McClinton, Linfeng Zhao, Stephen Proulx, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Jennifer Barry

Generalized Planning in PDDL Domains with Pretrained Large Language Models

May 18, 2023
Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B. Tenenbaum, Leslie Pack Kaelbling, Michael Katz

Embodied Active Learning of Relational State Abstractions for Bilevel Planning

Mar 08, 2023
Amber Li, Tom Silver

Learning Operators with Ignore Effects for Bilevel Planning in Continuous Domains

Aug 16, 2022
Nishanth Kumar, Willie McClinton, Rohan Chitnis, Tom Silver, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Learning Neuro-Symbolic Skills for Bilevel Planning

Jun 21, 2022
Tom Silver, Ashay Athalye, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

PG3: Policy-Guided Planning for Generalized Policy Generation

Apr 21, 2022
Ryan Yang, Tom Silver, Aidan Curtis, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Inventing Relational State and Action Abstractions for Effective and Efficient Bilevel Planning

Mar 17, 2022
Tom Silver, Rohan Chitnis, Nishanth Kumar, Willie McClinton, Tomás Lozano-Pérez, Leslie Pack Kaelbling, Joshua Tenenbaum

Reinforcement Learning for Classical Planning: Viewing Heuristics as Dense Reward Generators

Sep 30, 2021
Clement Gehring, Masataro Asai, Rohan Chitnis, Tom Silver, Leslie Pack Kaelbling, Shirin Sohrabi, Michael Katz

Discovering State and Action Abstractions for Generalized Task and Motion Planning

Sep 23, 2021
Aidan Curtis, Tom Silver, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling

Learning Neuro-Symbolic Relational Transition Models for Bilevel Planning

May 28, 2021
Rohan Chitnis, Tom Silver, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling