Christopher Grimm

Leveraging Transformer Decoder for Automotive Radar Object Detection

Jan 19, 2026

Synthetic FMCW Radar Range Azimuth Maps Augmentation with Generative Diffusion Model

Jan 09, 2026

Proper Value Equivalence

Jun 18, 2021

Warping of Radar Data into Camera Image for Cross-Modal Supervision in Automotive Applications

Dec 23, 2020

The Value Equivalence Principle for Model-Based Reinforcement Learning

Nov 06, 2020

Disentangled Cumulants Help Successor Representations Transfer to New Tasks

Nov 25, 2019

Learning Independently-Obtainable Reward Functions

Jan 31, 2019

Mitigating Planner Overfitting in Model-Based Reinforcement Learning

Dec 03, 2018

Deep Abstract Q-Networks

Aug 25, 2018

Modeling Latent Attention Within Neural Networks

Dec 30, 2017