Hongwei Yao

TAPI: Towards Target-Specific and Adversarial Prompt Injection against Code LLMs

Jul 12, 2024

Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution

May 08, 2024

PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models

Oct 19, 2023

RemovalNet: DNN Fingerprint Removal Attacks

Aug 31, 2023

FDINet: Protecting against DNN Model Extraction via Feature Distortion Index

Jun 22, 2023