
Isar Nejadgholi

ChatGPT for Suicide Risk Assessment on Social Media: Quantitative Evaluation of Model Performance, Potentials and Limitations

Jun 15, 2023

The crime of being poor

Mar 24, 2023

A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the Input is Under-Specified?

Feb 14, 2023

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Nov 09, 2022

Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information

Oct 19, 2022

Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models

Jun 08, 2022

Necessity and Sufficiency for Explaining Text Classifiers: A Case Study in Hate Speech Detection

May 06, 2022

Improving Generalizability in Implicitly Abusive Language Detection with Concept Activation Vectors

Apr 05, 2022

Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model

Jun 04, 2021

A Privacy-Preserving Approach to Extraction of Personal Information through Automatic Annotation and Federated Learning

May 19, 2021