Xiangjue Dong

DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management

May 20, 2025

Masculine Defaults via Gendered Discourse in Podcasts and Large Language Models

Apr 15, 2025

A Survey on LLM Inference-Time Self-Improvement

Dec 18, 2024

ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning

Oct 30, 2024

Disclosure and Mitigation of Gender Bias in LLMs

Feb 17, 2024

The Neglected Tails of Vision-Language Models

Feb 02, 2024

DALA: A Distribution-Aware LoRA-Based Adversarial Attack against Pre-trained Language Models

Nov 14, 2023

Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation

Nov 01, 2023

Co$^2$PT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning

Oct 19, 2023

Everything Perturbed All at Once: Enabling Differentiable Graph Attacks

Aug 29, 2023