
Ayanna Howard

Mitigating Racial Biases in Toxic Language Detection with an Equity-Based Ensemble Framework

Sep 27, 2021
Matan Halevy, Camille Harris, Amy Bruckman, Diyi Yang, Ayanna Howard


A Bayesian Framework for Nash Equilibrium Inference in Human-Robot Parallel Play

Jun 10, 2020
Shray Bansal, Jin Xu, Ayanna Howard, Charles Isbell


Does Removing Stereotype Priming Remove Bias? A Pilot Human-Robot Interaction Study

Jul 03, 2018
Tobi Ogunyale, De'Aira Bryant, Ayanna Howard
