
Kailong Wang


Beyond Fidelity: Explaining Vulnerability Localization of Learning-based Detectors

Jan 05, 2024
Baijun Cheng, Shengming Zhao, Kailong Wang, Meizhen Wang, Guangdong Bai, Ruitao Feng, Yao Guo, Lei Ma, Haoyu Wang

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

Jan 01, 2024
Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang

Large Language Models for Software Engineering: A Systematic Literature Review

Sep 12, 2023
Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, Haoyu Wang

Prompt Injection attack against LLM-integrated Applications

Jun 08, 2023
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu
