
Yuekang Li

Malicious Agent Skills in the Wild: A Large-Scale Security Empirical Study

Feb 06, 2026

Agent Skills in the Wild: An Empirical Study of Security Vulnerabilities at Scale

Jan 15, 2026

Robust CAPTCHA Using Audio Illusions in the Era of Large Language Models: from Evaluation to Advances

Jan 13, 2026

DiverseClaire: Simulating Students to Improve Introductory Programming Course Materials for All CS1 Learners

Nov 18, 2025

Help or Hurdle? Rethinking Model Context Protocol-Augmented Large Language Models

Aug 18, 2025

"Pull or Not to Pull?'': Investigating Moral Biases in Leading Large Language Models Across Ethical Dilemmas

Aug 10, 2025

Beyond Uniform Criteria: Scenario-Adaptive Multi-Dimensional Jailbreak Evaluation

Aug 08, 2025

A Rusty Link in the AI Supply Chain: Detecting Evil Configurations in Model Repositories

May 02, 2025

Good News for Script Kiddies? Evaluating Large Language Models for Automated Exploit Generation

May 02, 2025

Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning

Feb 19, 2025