
Yuekang Li

Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models

Jul 16, 2024

Source Code Summarization in the Era of Large Language Models

Jul 09, 2024

Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation

May 20, 2024

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

Apr 19, 2024

LLM Jailbreak Attack versus Defense Techniques -- A Comprehensive Study

Feb 21, 2024

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Jan 01, 2024

ASTER: Automatic Speech Recognition System Accessibility Testing for Stutterers

Aug 30, 2023

Prompt Injection attack against LLM-integrated Applications

Jun 08, 2023

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

May 23, 2023

Automatic Code Summarization via ChatGPT: How Far Are We?

May 22, 2023