Fred Morstatter

Attributing Fair Decisions with Attention Interventions

Sep 08, 2021
Ninareh Mehrabi, Umang Gupta, Fred Morstatter, Greg Ver Steeg, Aram Galstyan


Analyzing Race and Country of Citizenship Bias in Wikidata

Aug 11, 2021
Zaina Shaik, Filip Ilievski, Fred Morstatter


Lawyers are Dishonest? Quantifying Representational Harms in Commonsense Knowledge Resources

Mar 21, 2021
Ninareh Mehrabi, Pei Zhou, Fred Morstatter, Jay Pujara, Xiang Ren, Aram Galstyan


Exacerbating Algorithmic Bias through Fairness Attacks

Dec 16, 2020
Ninareh Mehrabi, Muhammad Naveed, Fred Morstatter, Aram Galstyan


One-shot Learning for Temporal Knowledge Graphs

Oct 23, 2020
Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan


Leveraging Clickstream Trajectories to Reveal Low-Quality Workers in Crowdsourced Forecasting Platforms

Sep 04, 2020
Akira Matsui, Emilio Ferrara, Fred Morstatter, Andres Abeliuk, Aram Galstyan


Statistical Equity: A Fairness Classification Objective

May 14, 2020
Ninareh Mehrabi, Yuzhong Huang, Fred Morstatter


Identifying Cultural Differences through Multi-Lingual Wikipedia

Apr 10, 2020
Yufei Tian, Tuhin Chakrabarty, Fred Morstatter, Nanyun Peng


Aggressive, Repetitive, Intentional, Visible, and Imbalanced: Refining Representations for Cyberbullying Classification

Apr 04, 2020
Caleb Ziems, Ymir Vigfusson, Fred Morstatter


Man is to Person as Woman is to Location: Measuring Gender Bias in Named Entity Recognition

Oct 24, 2019
Ninareh Mehrabi, Thamme Gowda, Fred Morstatter, Nanyun Peng, Aram Galstyan
