Kening Zheng

Towards Robust LLM Post-Training: Automatic Failure Management for Reinforcement Fine-Tuning

May 06, 2026

Unveiling Language Routing Isolation in Multilingual MoE Models for Interpretable Subnetwork Adaptation

Apr 04, 2026

EvoSkills: Self-Evolving Agent Skills via Co-Evolutionary Verification

Apr 02, 2026

When Users Change Their Mind: Evaluating Interruptible Agents in Long-Horizon Web Navigation

Apr 01, 2026

Unlocking Multimodal Document Intelligence: From Current Triumphs to Future Frontiers of Visual Document Retrieval

Feb 23, 2026

GM-PRM: A Generative Multimodal Process Reward Model for Multimodal Mathematical Reasoning

Aug 06, 2025

SAFEERASER: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning

Feb 18, 2025

Look Twice Before You Answer: Memory-Space Visual Retracing for Hallucination Mitigation in Multimodal Large Language Models

Oct 04, 2024

Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models

Aug 18, 2024

Refiner: Restructure Retrieval Content Efficiently to Advance Question-Answering Capabilities

Jun 18, 2024