Wilko Schwarting

Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution

Apr 05, 2024
Tim Seyde, Peter Werner, Wilko Schwarting, Markus Wulfmeier, Daniela Rus

OptFlow: Fast Optimization-based Scene Flow Estimation without Supervision

Jan 04, 2024
Rahul Ahuja, Chris Baker, Wilko Schwarting

Solving Continuous Control via Q-learning

Oct 22, 2022
Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin Riedmiller, Daniela Rus, Markus Wulfmeier

Neighborhood Mixup Experience Replay: Local Convex Interpolation for Improved Sample Efficiency in Continuous Control Tasks

May 18, 2022
Ryan Sander, Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Sertac Karaman, Daniela Rus

Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models

Apr 05, 2022
Jose L. Vazquez, Alexander Liniger, Wilko Schwarting, Daniela Rus, Luc Van Gool

Learning Interactive Driving Policies via Data-driven Simulation

Nov 23, 2021
Tsun-Hsuan Wang, Alexander Amini, Wilko Schwarting, Igor Gilitschenski, Sertac Karaman, Daniela Rus

VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles

Nov 23, 2021
Alexander Amini, Tsun-Hsuan Wang, Igor Gilitschenski, Wilko Schwarting, Zhijian Liu, Song Han, Sertac Karaman, Daniela Rus

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies

Nov 03, 2021
Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin Riedmiller, Markus Wulfmeier, Daniela Rus

Deep Latent Competition: Learning to Race Using Visual Control Policies in Latent Space

Feb 19, 2021
Wilko Schwarting, Tim Seyde, Igor Gilitschenski, Lucas Liebenwein, Ryan Sander, Sertac Karaman, Daniela Rus

Learning to Plan Optimistically: Uncertainty-Guided Deep Exploration via Latent Model Ensembles

Oct 27, 2020
Tim Seyde, Wilko Schwarting, Sertac Karaman, Daniela Rus
