
Guiyao Tie

VCE: A zero-cost hallucination mitigation method of LVLMs via visual contrastive editing

Apr 21, 2026

EmbodiedClaw: Conversational Workflow Execution for Embodied AI Development

Apr 15, 2026

BadSkill: Backdoor Attacks on Agent Skills via Model-in-Skill Poisoning

Apr 10, 2026

A Survey of AI Scientists: Surveying the automatic Scientists and Research

Oct 27, 2025

Automating Safety Enhancement for LLM-based Agents with Synthetic Risk Scenarios

May 23, 2025

BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization

May 22, 2025

MMMR: Benchmarking Massive Multi-Modal Reasoning Tasks

May 22, 2025

Large Reasoning Models in Agent Scenarios: Exploring the Necessity of Reasoning Capabilities

Mar 14, 2025

Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation

Mar 08, 2025

A Survey on Post-training of Large Language Models

Mar 08, 2025