Kailong Wang

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

Apr 19, 2024
Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, Haoyu Wang


Beyond Fidelity: Explaining Vulnerability Localization of Learning-based Detectors

Jan 05, 2024
Baijun Cheng, Shengming Zhao, Kailong Wang, Meizhen Wang, Guangdong Bai, Ruitao Feng, Yao Guo, Lei Ma, Haoyu Wang


Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Jan 01, 2024
Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang


Large Language Models for Software Engineering: A Systematic Literature Review

Sep 12, 2023
Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, Haoyu Wang


Prompt Injection attack against LLM-integrated Applications

Jun 08, 2023
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
