
Michael L. Littman

Rutgers University

The Complexity of Plan Existence and Evaluation in Probabilistic Domains

Feb 06, 2013

Incremental Pruning: A Simple, Fast, Exact Method for Partially Observable Markov Decision Processes

Feb 06, 2013

On the Computational Complexity of Stochastic Controller Optimization in POMDPs

Oct 04, 2012

Incremental Model-based Learners With Formal Learning-Time Guarantees

Jun 27, 2012

CORL: A Continuous-state Offset-dynamics Reinforcement Learner

Jun 13, 2012

Exploring compact reinforcement-learning representations with linear regression

May 09, 2012

A Bayesian Sampling Approach to Exploration in Reinforcement Learning

May 09, 2012

Learning is planning: near Bayes-optimal reinforcement learning via Monte-Carlo tree search

Feb 14, 2012

Corpus-based Learning of Analogies and Semantic Relations

Aug 23, 2005

Combining Independent Modules in Lexical Multiple-Choice Problems

Jan 10, 2005