Haomin Zhuang

Guardian-as-an-Advisor: Advancing Next-Generation Guardian Models for Trustworthy LLMs

Apr 08, 2026

Reliable Control-Point Selection for Steering Reasoning in Large Language Models

Apr 02, 2026

Dual Optimal: Make Your LLM Peer-like with Dignity

Apr 02, 2026

SenseMath: Do LLMs Have Number Sense? Evaluating Shortcut Use, Judgment, and Generation

Apr 02, 2026

Emergent Social Intelligence Risks in Generative Multi-Agent Systems

Mar 29, 2026

Seeing the Invisible: Machine Learning-Based QPI Kernel Extraction via Latent Alignment

Jun 05, 2025

Dissecting Logical Reasoning in LLMs: A Fine-Grained Evaluation and Supervision Study

Jun 05, 2025

SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models

May 29, 2025

Beyond Single-Value Metrics: Evaluating and Enhancing LLM Unlearning with Cognitive Diagnosis

Feb 19, 2025

UOE: Unlearning One Expert Is Enough For Mixture-of-Experts LLMs

Nov 27, 2024