Rainer Stiefelhagen

Delivering Arbitrary-Modal Semantic Segmentation

Mar 02, 2023
Jiaming Zhang, Ruiping Liu, Hao Shi, Kailun Yang, Simon Reiß, Kunyu Peng, Haodong Fu, Kaiwei Wang, Rainer Stiefelhagen

MuscleMap: Towards Video-based Activated Muscle Group Estimation

Mar 02, 2023
Kunyu Peng, David Schneider, Alina Roitberg, Kailun Yang, Jiaming Zhang, M. Saquib Sarfraz, Rainer Stiefelhagen

MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments

Feb 28, 2023
Junwei Zheng, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer Stiefelhagen

Multimodal Interactive Lung Lesion Segmentation: A Framework for Annotating PET/CT Images based on Physiological and Anatomical Cues

Jan 24, 2023
Verena Jasmin Hallitschke, Tobias Schlumberger, Philipp Kataliakos, Zdravko Marinov, Moon Kim, Lars Heiliger, Constantin Seibold, Jens Kleesiek, Rainer Stiefelhagen

Uncertainty-aware Vision-based Metric Cross-view Geolocalization

Nov 22, 2022
Florian Fervers, Sebastian Bullinger, Christoph Bodensteiner, Michael Arens, Rainer Stiefelhagen

Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation

Oct 23, 2022
Zeyun Zhong, David Schneider, Michael Voit, Rainer Stiefelhagen, Jürgen Beyerer

Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding

Oct 07, 2022
Constantin Seibold, Simon Reiß, Saquib Sarfraz, Matthias A. Fink, Victoria Mayer, Jan Sellner, Moon Sung Kim, Klaus H. Maier-Hein, Jens Kleesiek, Rainer Stiefelhagen

AutoPET Challenge: Combining nn-Unet with Swin UNETR Augmented by Maximum Intensity Projection Classifier

Sep 02, 2022
Lars Heiliger, Zdravko Marinov, André Ferreira, Jana Fragemann, Jacob Murray, David Kersting, Rainer Stiefelhagen, Jens Kleesiek

ModSelect: Automatic Modality Selection for Synthetic-to-Real Domain Generalization

Aug 19, 2022
Zdravko Marinov, Alina Roitberg, David Schneider, Rainer Stiefelhagen

Multimodal Generation of Novel Action Appearances for Synthetic-to-Real Recognition of Activities of Daily Living

Aug 03, 2022
Zdravko Marinov, David Schneider, Alina Roitberg, Rainer Stiefelhagen
