Xuefeng Du

How Reliable Is Human Feedback For Aligning Large Language Models?

Oct 02, 2024

HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection

Sep 26, 2024

Out-of-Distribution Learning with Human Feedback

Aug 14, 2024

When and How Does In-Distribution Label Help Out-of-Distribution Detection?

May 28, 2024

The Ghanaian NLP Landscape: A First Look

May 10, 2024

How Does Unlabeled Data Provably Help Out-of-Distribution Detection?

Feb 05, 2024

Dream the Impossible: Outlier Imagination with Diffusion Models

Sep 23, 2023

OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection

Jun 17, 2023

Feed Two Birds with One Scone: Exploiting Wild Data for Both Out-of-Distribution Generalization and Detection

Jun 15, 2023

Non-Parametric Outlier Synthesis

Mar 06, 2023