Joschka Boedecker

Robust Tumor Detection from Coarse Annotations via Multi-Magnification Ensembles
Mar 29, 2023
Mehdi Naouar, Gabriel Kalweit, Ignacio Mastroleo, Philipp Poxleitner, Marc Metzger, Joschka Boedecker, Maria Kalweit

Incorporating Recurrent Reinforcement Learning into Model Predictive Control for Adaptive Control in Autonomous Driving
Jan 30, 2023
Yuan Zhang, Joschka Boedecker, Chuxuan Li, Guyue Zhou

A Hierarchical Approach for Strategic Motion Planning in Autonomous Racing
Dec 03, 2022
Rudolf Reiter, Jasper Hoffmann, Joschka Boedecker, Moritz Diehl

On the calibration of underrepresented classes in LiDAR-based semantic segmentation
Oct 13, 2022
Mariella Dreissig, Florian Piewak, Joschka Boedecker

Latent Plans for Task-Agnostic Offline Reinforcement Learning
Sep 19, 2022
Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

Robust Reinforcement Learning in Continuous Control Tasks with Uncertainty Set Regularization
Jul 05, 2022
Yuan Zhang, Jianhong Wang, Joschka Boedecker

Optimizing Trajectories for Highway Driving with Offline Reinforcement Learning
Mar 21, 2022
Branka Mirchevska, Moritz Werling, Joschka Boedecker

Affordance Learning from Play for Sample-Efficient Policy Learning
Mar 01, 2022
Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

Adaptively Calibrated Critic Estimates for Deep Reinforcement Learning
Nov 24, 2021
Nicolai Dorka, Joschka Boedecker, Wolfram Burgard

Correct Me if I am Wrong: Interactive Learning for Robotic Manipulation
Oct 07, 2021
Eugenio Chisari, Tim Welschehold, Joschka Boedecker, Wolfram Burgard, Abhinav Valada
