Yuxiang Xiao

AdaFusion: Prompt-Guided Inference with Adaptive Fusion of Pathology Foundation Models

Aug 07, 2025
GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models

Jun 20, 2024