
Buru Chang

HarDBench: A Benchmark for Draft-Based Co-Authoring Jailbreak Attacks for Safe Human-LLM Collaborative Writing

Apr 21, 2026

NOAH: Benchmarking Narrative Prior driven Hallucination and Omission in Video Large Language Models

Nov 09, 2025

Dataset Cartography for Large Language Model Alignment: Mapping and Diagnosing Preference Data

May 29, 2025

The RAG Paradox: A Black-Box Attack Exploiting Unintentional Vulnerabilities in Retrieval-Augmented Generation Systems

Feb 28, 2025

In-Context Learning with Noisy Labels

Nov 29, 2024

Is 'Right' Right? Enhancing Object Orientation Understanding in Multimodal Language Models through Egocentric Instruction Tuning

Nov 24, 2024

SHARE: Shared Memory-Aware Open-Domain Long-Term Dialogue Dataset Constructed from Movie Script

Oct 28, 2024

ConVis: Contrastive Decoding with Hallucination Visualization for Mitigating Hallucinations in Multimodal Large Language Models

Aug 25, 2024

Review-driven Personalized Preference Reasoning with Large Language Models for Recommendation

Aug 13, 2024

Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models

Mar 26, 2024