
Hiromi Arai

Analyzing Social Biases in Japanese Large Language Models

Jun 04, 2024

Will Large-scale Generative Models Corrupt Future Datasets?

Nov 15, 2022

Characterizing the risk of fairwashing

Jun 14, 2021

Fairwashing: the risk of rationalization

Jan 28, 2019