Rainer Stiefelhagen

Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation

Jul 27, 2022
Jiaming Zhang, Kailun Yang, Hao Shi, Simon Reiß, Kunyu Peng, Chaoxiang Ma, Haodong Fu, Kaiwei Wang, Rainer Stiefelhagen

Trans4Map: Revisiting Holistic Top-down Mapping from Egocentric Images to Allocentric Semantics with Vision Transformers

Jul 13, 2022
Chang Chen, Jiaming Zhang, Kailun Yang, Kunyu Peng, Rainer Stiefelhagen

Multi-modal Depression Estimation based on Sub-attentional Fusion

Jul 13, 2022
Ping-Cheng Wei, Kunyu Peng, Alina Roitberg, Kailun Yang, Jiaming Zhang, Rainer Stiefelhagen

Panoramic Panoptic Segmentation: Insights Into Surrounding Parsing for Mobile Agents via Unsupervised Contrastive Learning

Jun 21, 2022
Alexander Jaus, Kailun Yang, Rainer Stiefelhagen

Breaking with Fixed Set Pathology Recognition through Report-Guided Contrastive Training

May 14, 2022
Constantin Seibold, Simon Reiß, M. Saquib Sarfraz, Rainer Stiefelhagen, Jens Kleesiek

Towards Automatic Parsing of Structured Visual Content through the Use of Synthetic Data

Apr 29, 2022
Lukas Scholch, Jonas Steinhauser, Maximilian Beichter, Constantin Seibold, Kailun Yang, Merlin Knäble, Thorsten Schwarz, Alexander Mädche, Rainer Stiefelhagen

CMX: Cross-Modal Fusion for RGB-X Semantic Segmentation with Transformers

Apr 12, 2022
Huayao Liu, Jiaming Zhang, Kailun Yang, Xinxin Hu, Rainer Stiefelhagen

Hierarchical Nearest Neighbor Graph Embedding for Efficient Dimensionality Reduction

Apr 11, 2022
M. Saquib Sarfraz, Marios Koulakis, Constantin Seibold, Rainer Stiefelhagen

A Comparative Analysis of Decision-Level Fusion for Multimodal Driver Behaviour Understanding

Apr 10, 2022
Alina Roitberg, Kunyu Peng, Zdravko Marinov, Constantin Seibold, David Schneider, Rainer Stiefelhagen

Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates

Apr 10, 2022
Alina Roitberg, Kunyu Peng, David Schneider, Kailun Yang, Marios Koulakis, Manuel Martinez, Rainer Stiefelhagen
