Muhammad Bilal Zafar
On Early Detection of Hallucinations in Factual Question Answering

Dec 27, 2023
Ben Snyder, Marius Moisescu, Muhammad Bilal Zafar


Efficient fair PCA for fair representation learning

Feb 26, 2023
Matthäus Kleindessner, Michele Donini, Chris Russell, Muhammad Bilal Zafar


What You Like: Generating Explainable Topical Recommendations for Twitter Using Social Annotations

Dec 23, 2022
Parantapa Bhattacharya, Saptarshi Ghosh, Muhammad Bilal Zafar, Soumya K. Ghosh, Niloy Ganguly


Diverse Counterfactual Explanations for Anomaly Detection in Time Series

Mar 21, 2022
Deborah Sulem, Michele Donini, Muhammad Bilal Zafar, Francois-Xavier Aubet, Jan Gasthaus, Tim Januschowski, Sanjiv Das, Krishnaram Kenthapadi, Cedric Archambeau


More Than Words: Towards Better Quality Interpretations of Text Classifiers

Dec 23, 2021
Muhammad Bilal Zafar, Philipp Schmidt, Michele Donini, Cédric Archambeau, Felix Biessmann, Sanjiv Ranjan Das, Krishnaram Kenthapadi


Amazon SageMaker Model Monitor: A System for Real-Time Insights into Deployed Machine Learning Models

Dec 13, 2021
David Nigenda, Zohar Karnin, Muhammad Bilal Zafar, Raghu Ramesha, Alan Tan, Michele Donini, Krishnaram Kenthapadi


Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud

Sep 07, 2021
Michaela Hardt, Xiaoguang Chen, Xiaoyi Cheng, Michele Donini, Jason Gelman, Satish Gollaprolu, John He, Pedro Larroy, Xinyu Liu, Nick McCarthy, Ashish Rathi, Scott Rees, Ankit Siva, ErhYuan Tsai, Keerthan Vasist, Pinar Yilmaz, Muhammad Bilal Zafar, Sanjiv Das, Kevin Haas, Tyler Hill, Krishnaram Kenthapadi


DIVINE: Diverse Influential Training Points for Data Visualization and Model Refinement

Jul 13, 2021
Umang Bhatt, Isabel Chien, Muhammad Bilal Zafar, Adrian Weller
