Zachary N. Sunberg

Sampling-based Task and Kinodynamic Motion Planning under Semantic Uncertainty

Apr 01, 2026

Resolving Multiple-Dynamic Model Uncertainty in Hypothesis-Driven Belief-MDPs

Nov 21, 2024

Rao-Blackwellized POMDP Planning

Sep 24, 2024

Sound Heuristic Search Value Iteration for Undiscounted POMDPs with Reachability Objectives

Jun 05, 2024

Cieran: Designing Sequential Colormaps via In-Situ Active Preference Learning

Feb 29, 2024

Recursively-Constrained Partially Observable Markov Decision Processes

Oct 15, 2023

Explanation through Reward Model Reconciliation using POMDP Tree Search

May 01, 2023

Sampling-based Reactive Synthesis for Nondeterministic Hybrid Systems

Apr 14, 2023

Planning with SiMBA: Motion Planning under Uncertainty for Temporal Goals using Simplified Belief Guides

Oct 18, 2022

Generalized Optimality Guarantees for Solving Continuous Observation POMDPs through Particle Belief MDP Approximation

Oct 10, 2022