
Stefan Milz

From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration

Nov 03, 2023
Jack Borer, Jeremy Tschirner, Florian Ölsner, Stefan Milz

LiDAR-BEVMTN: Real-Time LiDAR Bird's-Eye View Multi-Task Perception Network for Autonomous Driving

Jul 17, 2023
Sambit Mohapatra, Senthil Yogamani, Varun Ravi Kumar, Stefan Milz, Heinrich Gotzig, Patrick Mäder

Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR

Jun 22, 2023
Jack Borer, Jeremy Tschirner, Florian Ölsner, Stefan Milz

LiMoSeg: Real-time Bird's Eye View based LiDAR Motion Segmentation

Nov 08, 2021
Sambit Mohapatra, Mona Hodaei, Senthil Yogamani, Stefan Milz, Patrick Maeder, Heinrich Gotzig, Martin Simon, Hazem Rashed

BEVDetNet: Bird's Eye View LiDAR Point Cloud based Real-time 3D Object Detection for Autonomous Driving

Apr 21, 2021
Sambit Mohapatra, Senthil Yogamani, Heinrich Gotzig, Stefan Milz, Patrick Mader

SVDistNet: Self-Supervised Near-Field Distance Estimation on Surround View Fisheye Cameras

Apr 09, 2021
Varun Ravi Kumar, Marvin Klingner, Senthil Yogamani, Markus Bach, Stefan Milz, Tim Fingscheidt, Patrick Mäder

OmniDet: Surround View Cameras based Multi-task Visual Perception Network for Autonomous Driving

Feb 15, 2021
Varun Ravi Kumar, Senthil Yogamani, Hazem Rashed, Ganesh Sistu, Christian Witt, Isabelle Leang, Stefan Milz, Patrick Mäder

SynDistNet: Self-Supervised Monocular Fisheye Camera Distance Estimation Synergized with Semantic Segmentation for Autonomous Driving

Aug 10, 2020
Varun Ravi Kumar, Marvin Klingner, Senthil Yogamani, Stefan Milz, Tim Fingscheidt, Patrick Maeder

UnRectDepthNet: Self-Supervised Monocular Depth Estimation using a Generic Framework for Handling Common Camera Distortion Models

Jul 26, 2020
Varun Ravi Kumar, Senthil Yogamani, Markus Bach, Christian Witt, Stefan Milz, Patrick Mader
