Michael Mock

Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, Sankt Augustin, Germany

Textual Data Bias Detection and Mitigation -- An Extensible Pipeline with Experimental Evaluation

Dec 12, 2025

Detecting Linguistic Indicators for Stereotype Assessment with Large Language Models

Feb 26, 2025

Detecting Systematic Weaknesses in Vision Models along Predefined Human-Understandable Dimensions

Feb 17, 2025

Developing Trustworthy AI Applications with Foundation Models

May 08, 2024

Assessing Systematic Weaknesses of DNNs using Counterfactuals

Aug 03, 2023

Using ScrutinAI for Visual Inspection of DNN Performance in a Medical Use Case

Aug 02, 2023

Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog

Jun 20, 2023

Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety

Apr 29, 2021

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities

Apr 22, 2021

Communication-Efficient Distributed Online Learning with Kernels

Nov 28, 2019