
Zhensu Sun

LLM as Runtime Error Handler: A Promising Pathway to Adaptive Self-Healing of Software Systems

Aug 02, 2024

AI Coders Are Among Us: Rethinking Programming Language Grammar Towards Efficient Code Generation

Apr 25, 2024

Reversible Jump Attack to Textual Classifiers with Modification Reduction

Mar 21, 2024

When Neural Code Completion Models Size up the Situation: Attaining Cheaper and Faster Completion through Dynamic Model Inference

Jan 18, 2024

CodeMark: Imperceptible Watermarking for Code Datasets against Neural Code Completion Models

Aug 28, 2023

Data Augmentation Approaches for Source Code Models: A Survey

Jun 12, 2023

Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process

Mar 01, 2023

Learning to Prevent Profitless Neural Code Completion

Sep 13, 2022

On the Importance of Building High-quality Training Datasets for Neural Code Search

Feb 14, 2022

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning

Oct 25, 2021