
Jonathan P. How


Massachusetts Institute of Technology

Set-Invariant Constrained Reinforcement Learning with a Meta-Optimizer

Jul 09, 2020
Chuangchuang Sun, Dong-Ki Kim, Jonathan P. How

Collision Probabilities for Continuous-Time Systems Without Sampling [with Appendices]

Jun 01, 2020
Kristoffer M. Frey, Ted J. Steiner, Jonathan P. How

Certified Adversarial Robustness for Deep Reinforcement Learning

Apr 11, 2020
Michael Everett, Bjorn Lutjens, Jonathan P. How

Active Reward Learning for Co-Robotic Vision Based Exploration in Bandwidth Limited Environments

Mar 10, 2020
Stewart Jamieson, Jonathan P. How, Yogesh Girdhar

Asynchronous and Parallel Distributed Pose Graph Optimization

Mar 06, 2020
Yulun Tian, Alec Koppel, Amrit Singh Bedi, Jonathan P. How

Touch the Wind: Simultaneous Airflow, Drag and Interaction Sensing on a Multirotor

Mar 04, 2020
Andrea Tagliabue, Aleix Paris, Suhan Kim, Regan Kubicek, Sarah Bergbreiter, Jonathan P. How

A Distributed Pipeline for Scalable, Deconflicted Formation Flying

Mar 04, 2020
Parker C. Lusk, Xiaoyi Cai, Samir Wadhwania, Aleix Paris, Kaveh Fathian, Jonathan P. How

Scaling Up Multiagent Reinforcement Learning for Robotic Systems: Learn an Adaptive Sparse Communication Graph

Mar 03, 2020
Chuangchuang Sun, Macheng Shen, Jonathan P. How

R-MADDPG for Partially Observable Environments and Limited Communication

Feb 18, 2020
Rose E. Wang, Michael Everett, Jonathan P. How
