Tim Seyde

Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution

Apr 05, 2024
Tim Seyde, Peter Werner, Wilko Schwarting, Markus Wulfmeier, Daniela Rus

Cooperative Flight Control Using Visual-Attention -- Air-Guardian

Dec 21, 2022
Lianhao Yin, Tsun-Hsuan Wang, Makram Chahine, Tim Seyde, Mathias Lechner, Ramin Hasani, Daniela Rus

Solving Continuous Control via Q-learning

Oct 22, 2022
Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin Riedmiller, Daniela Rus, Markus Wulfmeier

Interpreting Neural Policies with Disentangled Tree Representations

Oct 13, 2022
Tsun-Hsuan Wang, Wei Xiao, Tim Seyde, Ramin Hasani, Daniela Rus

Neighborhood Mixup Experience Replay: Local Convex Interpolation for Improved Sample Efficiency in Continuous Control Tasks

May 18, 2022
Ryan Sander, Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Sertac Karaman, Daniela Rus

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies

Nov 03, 2021
Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin Riedmiller, Markus Wulfmeier, Daniela Rus

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space

Feb 19, 2021
Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus

Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles

Oct 27, 2020
Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus

Locomotion Planning through a Hybrid Bayesian Trajectory Optimization

Mar 09, 2019
Tim Seyde, Jan Carius, Ruben Grandia, Farbod Farshidian, Marco Hutter
