Zekun Wu

Saarland University, Germany

Intelligent Support for Human Oversight: Integrating Reinforcement Learning with Gaze Simulation to Personalize Highlighting

Feb 09, 2026

The Confidence Manifold: Geometric Structure of Correctness Representations in Language Models

Feb 08, 2026

Mind the Gap: Evaluating Model- and Agentic-Level Vulnerabilities in LLMs with Action Graphs

Sep 05, 2025

Personality as a Probe for LLM Evaluation: Method Trade-offs and Downstream Effects

Sep 05, 2025

Knowledge Collapse in LLMs: When Fluency Survives but Facts Fail under Recursive Synthetic Training

Sep 05, 2025

CorrSteer: Steering Improves Task Performance and Safety in LLMs through Correlation-based Sparse Autoencoder Feature Selection

Aug 18, 2025

MPF: Aligning and Debiasing Language Models post Deployment via Multi Perspective Fusion

Jul 03, 2025

Hunyuan-TurboS: Advancing Large Language Models through Mamba-Transformer Synergy and Adaptive Chain-of-Thought

May 21, 2025

LibVulnWatch: A Deep Assessment Agent System and Leaderboard for Uncovering Hidden Vulnerabilities in Open-Source AI Libraries

May 13, 2025

Bias Amplification: Language Models as Increasingly Biased Media

Oct 19, 2024