
Rainer Stiefelhagen

MuscleMap: Towards Video-based Activated Muscle Group Estimation

Mar 17, 2023

Mirror U-Net: Marrying Multimodal Fission with Multi-task Learning for Semantic Segmentation in Medical Imaging

Mar 13, 2023

Guiding the Guidance: A Comparative Analysis of User Guidance Signals for Interactive Segmentation of Volumetric Images

Mar 13, 2023

Delivering Arbitrary-Modal Semantic Segmentation

Mar 02, 2023

MateRobot: Material Recognition in Wearable Robotics for People with Visual Impairments

Feb 28, 2023

Multimodal Interactive Lung Lesion Segmentation: A Framework for Annotating PET/CT Images based on Physiological and Anatomical Cues

Jan 24, 2023

Uncertainty-aware Vision-based Metric Cross-view Geolocalization

Nov 22, 2022

Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation

Oct 23, 2022

Detailed Annotations of Chest X-Rays via CT Projection for Report Understanding

Oct 07, 2022

AutoPET Challenge: Combining nn-Unet with Swin UNETR Augmented by Maximum Intensity Projection Classifier

Sep 02, 2022