Yash Shukla

Automaton-Guided Curriculum Generation for Reinforcement Learning Agents

Apr 11, 2023
Yash Shukla, Abhishek Kulkarni, Robert Wright, Alvaro Velasquez, Jivko Sinapov

Despite advances in Reinforcement Learning, many sequential decision-making tasks remain prohibitively expensive and impractical to learn. Recently, approaches that automatically generate reward functions from logical task specifications have been proposed to mitigate this issue; however, they scale poorly on long-horizon tasks (i.e., tasks where the agent must perform a long series of correct actions to reach the goal state, accounting for future transitions when choosing each action). Employing a curriculum (a sequence of increasingly complex tasks) further improves the agent's learning speed by sequencing intermediate tasks suited to its learning capacity. However, generating curricula from a logical specification remains an open problem. To this end, we propose AGCL, Automaton-guided Curriculum Learning, a novel method for automatically generating curricula for the target task in the form of Directed Acyclic Graphs (DAGs). AGCL encodes the specification as a deterministic finite automaton (DFA), and then uses the DFA together with an Object-Oriented MDP (OOMDP) representation to generate a curriculum as a DAG, where vertices correspond to tasks and edges correspond to the direction of knowledge transfer. Experiments in gridworld and physics-based simulated robotics domains show that the curricula produced by AGCL achieve improved time-to-threshold performance on a complex sequential decision-making problem relative to state-of-the-art curriculum learning (e.g., teacher-student, self-play) and automaton-guided reinforcement learning baselines (e.g., Q-Learning for Reward Machines). Further, we demonstrate that AGCL performs well even in the presence of noise in the task's OOMDP description, and when distractor objects are present that are not modeled in the logical specification of the tasks' objectives.

* To be presented at The International Conference on Automated Planning and Scheduling (ICAPS) 2023 
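As a rough illustration of the core idea (not the authors' code), the sketch below unrolls a toy DFA for a "fetch key, then open door" specification into a curriculum DAG, with one intermediate task per DFA transition and edges following the DFA's progress toward acceptance. The DFA, the task-naming scheme, and the mapping are illustrative assumptions.

```python
# Hypothetical sketch: deriving a curriculum DAG from a DFA specification.
# The DFA, task names, and graph construction are illustrative assumptions,
# not the AGCL implementation from the paper.

from collections import defaultdict

# Toy DFA for a "fetch key, then open door" specification:
# (source state, proposition) -> destination state,
# listed in order of progress toward the accepting state.
dfa_transitions = {
    ("q0", "has_key"): "q1",
    ("q1", "door_open"): "q_accept",
}
initial_state = "q0"

def curriculum_dag(transitions, init):
    """Build a curriculum DAG: one intermediate task per DFA transition,
    with edges pointing in the direction of knowledge transfer."""
    tasks = {}                    # DFA state -> task that reaches it
    edges = defaultdict(list)
    for (src, prop), dst in transitions.items():
        tasks[dst] = f"reach[{prop}]"
        if src != init:           # knowledge flows from the earlier task
            edges[tasks[src]].append(tasks[dst])
    return tasks, dict(edges)

tasks, edges = curriculum_dag(dfa_transitions, initial_state)
print(tasks)   # {'q1': 'reach[has_key]', 'q_accept': 'reach[door_open]'}
print(edges)   # {'reach[has_key]': ['reach[door_open]']}
```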

RAPid-Learn: A Framework for Learning to Recover for Handling Novelties in Open-World Environments

Jun 24, 2022
Shivam Goel, Yash Shukla, Vasanth Sarathy, Matthias Scheutz, Jivko Sinapov

We propose RAPid-Learn: Learning to Recover and Plan Again, a hybrid planning and learning method for tackling the problem of adapting to sudden and unexpected changes in an agent's environment (i.e., novelties). RAPid-Learn is designed to formulate and solve modifications to a task's Markov Decision Process (MDP) on the fly, and exploits domain knowledge to learn any new dynamics caused by the environmental changes. The same domain knowledge is used to learn action executors, which can then be used to resolve execution impasses, leading to successful plan execution. The novelty information is reflected in an updated domain model. We demonstrate its efficacy by introducing a wide variety of novelties in a gridworld environment inspired by Minecraft, and compare our algorithm with transfer learning baselines from the literature. Our method is (1) effective even in the presence of multiple novelties, (2) more sample efficient than transfer learning RL baselines, and (3) robust to incomplete model information, in contrast to pure symbolic planning approaches.

* Proceedings of the IEEE Conference on Development and Learning (ICDL 2022) 
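A minimal, hypothetical sketch of the plan-execute-recover loop described above; the toy environment, planner stub, and learner stub are all assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a plan-execute-recover loop in the spirit of
# RAPid-Learn. The environment, planner, and learner below are toy
# stand-ins (assumptions), not the authors' code.

class ToyEnv:
    """Gridworld stand-in where a novelty breaks one operator."""
    def __init__(self):
        self.broken = {"open_door"}          # novelty: the door is jammed

    def step(self, action):
        return action not in self.broken     # True = executed successfully

def plan(domain_model, goal):
    """Stand-in symbolic planner: look up a precomputed operator sequence."""
    return domain_model[goal]

def learn_executor(action, env):
    """Stub for RL training of a recovery policy for the failed operator."""
    print(f"learning an executor for '{action}' under the novelty...")
    env.broken.discard(action)               # pretend the learned policy works
    return lambda: env.step(action)

def plan_and_recover(domain_model, goal, env):
    for action in plan(domain_model, goal):
        if not env.step(action):             # execution impasse detected
            executor = learn_executor(action, env)
            executor()                       # resolve the impasse, continue
    print("goal reached")

plan_and_recover({"door_open": ["get_key", "open_door"]}, "door_open", ToyEnv())
```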

ACuTE: Automatic Curriculum Transfer from Simple to Complex Environments

Apr 11, 2022
Yash Shukla, Christopher Thierauf, Ramtin Hosseini, Gyan Tatiya, Jivko Sinapov

Despite recent advances in Reinforcement Learning (RL), many problems, especially real-world tasks, remain prohibitively expensive to learn. To address this issue, several lines of research have explored how tasks, or data samples themselves, can be sequenced into a curriculum to learn a problem that may otherwise be too difficult to learn from scratch. However, generating and optimizing a curriculum in a realistic scenario still requires extensive interactions with the environment. To address this challenge, we formulate the curriculum transfer problem, in which the schema of a curriculum optimized in a simpler, easy-to-solve environment (e.g., a grid world) is transferred to a complex, realistic scenario (e.g., a physics-based robotics simulation or the real world). We present "ACuTE", Automatic Curriculum Transfer from Simple to Complex Environments, a novel framework to solve this problem, and evaluate our proposed method by comparing it to other baseline approaches (e.g., domain adaptation) designed to speed up learning. We observe that our approach produces improved jumpstart and time-to-threshold performance even when adding task elements that further increase the difficulty of the realistic scenario. Finally, we demonstrate that our approach is independent of the learning algorithm used for curriculum generation, and is Sim2Real transferable to a real-world scenario using a physical robot.
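To make the notion of transferring a curriculum "schema" concrete, here is a hypothetical sketch in which abstract task parameters tuned in a simple grid world are mapped onto parameters of a realistic environment. The schema fields and the mapping function are illustrative assumptions, not ACuTE's actual interface.

```python
# Hypothetical sketch of curriculum schema transfer in the spirit of ACuTE;
# the schema fields, scaling factors, and mapping are assumptions.

# Curriculum optimized cheaply in the simple grid world, described by an
# abstract schema rather than environment-specific details.
simple_curriculum = [
    {"rooms": 1, "items": 0},   # easiest source task
    {"rooms": 2, "items": 1},
    {"rooms": 3, "items": 2},   # source target task
]

def map_to_complex(schema):
    """Map abstract task parameters onto the realistic environment."""
    return {
        "arena_size_m": 2.0 * schema["rooms"],   # assumed scaling rule
        "n_objects": schema["items"],
        "physics": True,                          # realistic simulation on
    }

complex_curriculum = [map_to_complex(s) for s in simple_curriculum]
for task in complex_curriculum:
    print(task)
```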
