Yuekang Li

Help or Hurdle? Rethinking Model Context Protocol-Augmented Large Language Models

Aug 18, 2025

"Pull or Not to Pull?'': Investigating Moral Biases in Leading Large Language Models Across Ethical Dilemmas

Aug 10, 2025

Beyond Uniform Criteria: Scenario-Adaptive Multi-Dimensional Jailbreak Evaluation

Aug 08, 2025

A Rusty Link in the AI Supply Chain: Detecting Evil Configurations in Model Repositories

May 02, 2025

Good News for Script Kiddies? Evaluating Large Language Models for Automated Exploit Generation

May 02, 2025

Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning

Feb 19, 2025

Indiana Jones: There Are Always Some Useful Ancient Relics

Jan 27, 2025

Image-Based Geolocation Using Large Vision-Language Models

Aug 18, 2024

Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models

Jul 16, 2024

Source Code Summarization in the Era of Large Language Models

Jul 09, 2024