
Jan Hendrik Metzen

Object-Focused Data Selection for Dense Prediction Tasks

Dec 13, 2024

Retinex-Diffusion: On Controlling Illumination Conditions in Diffusion Models via Retinex Theory

Jul 29, 2024

Label-free Neural Semantic Image Synthesis

Jul 01, 2024

Zero-Shot Distillation for Image Encoders: How to Make Effective Use of Synthetic Data

Apr 25, 2024

Identification of Fine-grained Systematic Errors via Controlled Scene Generation

Apr 10, 2024

AutoCLIP: Auto-tuning Zero-Shot Classifiers for Vision-Language Models

Sep 29, 2023

Identifying Systematic Errors in Object Detectors with the SCROD Pipeline

Sep 23, 2023

Identification of Systematic Errors of Image Classifiers on Rare Subgroups

Mar 09, 2023

Certified Defences Against Adversarial Patch Attacks on Semantic Segmentation

Sep 13, 2022

Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness

Mar 25, 2022