Pranay Lohia

High Significant Fault Detection in Azure Core Workload Insights

Apr 14, 2024

Counterfactual Multi-Token Fairness in Text Classification

Feb 09, 2022

Data Quality Toolkit: Automatic assessment of data quality and remediation for machine learning datasets

Sep 05, 2021

Priority-based Post-Processing Bias Mitigation for Individual and Group Fairness

Jan 31, 2021

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias

Oct 03, 2018

Automated Test Generation to Detect Individual Discrimination in AI Models

Sep 10, 2018

Video Analysis of "YouTube Funnies" to Aid the Study of Human Gait and Falls - Preliminary Results and Proof of Concept

Oct 26, 2016