Genevieve Fried

About Face: A Survey of Facial Recognition Evaluation

Feb 01, 2021
Inioluwa Deborah Raji, Genevieve Fried

Figures 1-4 for About Face: A Survey of Facial Recognition Evaluation

We survey over 100 face datasets constructed between 1976 and 2019, comprising 145 million images of over 17 million subjects from a range of sources, demographics, and conditions. Our historical survey reveals that these datasets are contextually informed, shaped by changes in political motivations, technological capability, and current norms. We discuss how such influences mask specific practices (some of which may actually be harmful or otherwise problematic) and make a case for the explicit communication of such details in order to establish a more grounded understanding of the technology's function in the real world.

* Presented at AAAI 2020 Workshop on AI Evaluation 

Ethical Challenges in Data-Driven Dialogue Systems

Nov 24, 2017
Peter Henderson, Koustuv Sinha, Nicolas Angelard-Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, Joelle Pineau

Figures 1-4 for Ethical Challenges in Data-Driven Dialogue Systems

The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems.

* In Submission to the AAAI/ACM conference on Artificial Intelligence, Ethics, and Society 