Lu Lin

MemCollab: Cross-Agent Memory Collaboration via Contrastive Trajectory Distillation

Mar 24, 2026

Mitigating topology biases in Graph Diffusion via Counterfactual Intervention

Mar 02, 2026

PreFlect: From Retrospective to Prospective Reflection in Large Language Model Agents

Feb 06, 2026

Exposing Vulnerabilities in Explanation for Time Series Classifiers via Dual-Target Attacks

Feb 02, 2026

Phi: Preference Hijacking in Multi-modal Large Language Models at Inference Time

Sep 15, 2025

Topological Structure Learning Should Be A Research Priority for LLM-Based Multi-Agent Systems

May 29, 2025

Monitoring Decoding: Mitigating Hallucination via Evaluating the Factuality of Partial Response during Generation

Mar 05, 2025

GuardDoor: Safeguarding Against Malicious Diffusion Editing via Protective Backdoors

Mar 05, 2025

Understanding and Rectifying Safety Perception Distortion in VLMs

Feb 18, 2025

FairCode: Evaluating Social Bias of LLMs in Code Generation

Jan 09, 2025