Sebastian Junges

Factored Online Planning in Many-Agent POMDPs

Dec 22, 2023
Maris F. L. Galesloot, Thiago D. Simão, Sebastian Junges, Nils Jansen

Learning Formal Specifications from Membership and Preference Queries

Jul 19, 2023
Ameesh Shah, Marcell Vazquez-Chanlatte, Sebastian Junges, Sanjit A. Seshia

Efficient Sensitivity Analysis for Parametric Robust Markov Chains

May 01, 2023
Thom Badings, Sebastian Junges, Ahmadreza Marandi, Ufuk Topcu, Nils Jansen

COOL-MC: A Comprehensive Tool for Reinforcement Learning and Model Checking

Sep 15, 2022
Dennis Gross, Nils Jansen, Sebastian Junges, Guillermo A. Perez

Abstraction-Refinement for Hierarchical Probabilistic Models

Jun 06, 2022
Sebastian Junges, Matthijs T. J. Spaan

Safe Reinforcement Learning via Shielding for POMDPs

Apr 02, 2022
Steven Carr, Nils Jansen, Sebastian Junges, Ufuk Topcu

Querying Labelled Data with Scenario Programs for Sim-to-Real Validation

Dec 01, 2021
Edward Kim, Jay Shenoy, Sebastian Junges, Daniel Fremont, Alberto Sangiovanni-Vincentelli, Sanjit Seshia

Convex Optimization for Parameter Synthesis in MDPs

Jun 30, 2021
Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ufuk Topcu

Runtime Monitoring for Markov Decision Processes

May 26, 2021
Sebastian Junges, Hazem Torfah, Sanjit A. Seshia

Entropy-Guided Control Improvisation

Mar 09, 2021
Marcell Vazquez-Chanlatte, Sebastian Junges, Daniel J. Fremont, Sanjit Seshia
