Leslie Pack Kaelbling

Object-based World Modeling in Semi-Static Environments with Dependent Dirichlet-Process Mixtures

Dec 02, 2015
Lawson L. S. Wong, Thanard Kurutach, Leslie Pack Kaelbling, Tomás Lozano-Pérez

Learning to Cooperate via Policy Search

Aug 07, 2014
Leonid Peshkin, Kee-Eung Kim, Nicolas Meuleau, Leslie Pack Kaelbling

Deliberation Scheduling for Time-Critical Sequential Decision Making

Mar 06, 2013
Thomas L. Dean, Leslie Pack Kaelbling, Jak Kirman, Ann Nicholson

On the Complexity of Solving Markov Decision Problems

Feb 20, 2013
Michael L. Littman, Thomas L. Dean, Leslie Pack Kaelbling

Hierarchical Solution of Markov Decision Processes using Macro-actions

Jan 30, 2013
Milos Hauskrecht, Nicolas Meuleau, Leslie Pack Kaelbling, Thomas L. Dean, Craig Boutilier

Accelerating EM: An Empirical Study

Jan 23, 2013
Luis E. Ortiz, Leslie Pack Kaelbling

Learning Finite-State Controllers for Partially Observable Environments

Jan 23, 2013
Nicolas Meuleau, Leonid Peshkin, Kee-Eung Kim, Leslie Pack Kaelbling

Solving POMDPs by Searching the Space of Finite Policies

Jan 23, 2013
Nicolas Meuleau, Kee-Eung Kim, Leslie Pack Kaelbling, Anthony R. Cassandra

Adaptive Importance Sampling for Estimation in Structured Domains

Jan 16, 2013
Luis E. Ortiz, Leslie Pack Kaelbling

The Thing That We Tried Didn't Work Very Well: Deictic Representation in Reinforcement Learning

Dec 12, 2012
Sarah Finney, Natalia Gardiol, Leslie Pack Kaelbling, Tim Oates
