Didier Stricker

PlaneSegNet: Fast and Robust Plane Estimation Using a Single-stage Instance Segmentation CNN

Mar 29, 2021
Yaxu Xie, Jason Rambach, Fangwen Shu, Didier Stricker

TICaM: A Time-of-flight In-car Cabin Monitoring Dataset

Mar 23, 2021
Jigyasa Singh Katrolia, Bruno Mirbach, Ahmed El-Sherif, Hartmut Feld, Jason Rambach, Didier Stricker

SALT: A Semi-automatic Labeling Tool for RGB-D Video Sequences

Feb 22, 2021
Dennis Stumpf, Stephan Krauß, Gerd Reis, Oliver Wasenmüller, Didier Stricker

A Survey on Synchronous Augmented, Virtual and Mixed Reality Remote Collaboration Systems

Feb 11, 2021
Alexander Schäfer, Gerd Reis, Didier Stricker

MonoComb: A Sparse-to-Dense Combination Approach for Monocular Scene Flow

Nov 12, 2020
René Schuster, Christian Unger, Didier Stricker

Illumination Normalization by Partially Impossible Encoder-Decoder Cost Function

Nov 09, 2020
Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker

SLAM in the Field: An Evaluation of Monocular Mapping and Localization on Challenging Dynamic Agricultural Environment

Nov 06, 2020
Fangwen Shu, Paul Lesur, Yaxu Xie, Alain Pagani, Didier Stricker

A Deep Temporal Fusion Framework for Scene Flow Using a Learnable Motion Model and Occlusions

Nov 04, 2020
René Schuster, Christian Unger, Didier Stricker
