
Fu Song

QFA2SR: Query-Free Adversarial Transfer Attacks to Speaker Recognition Systems

May 23, 2023

QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks

Dec 10, 2022

QEBVerif: Quantization Error Bound Verification of Neural Networks

Dec 06, 2022

Learning to Prevent Profitless Neural Code Completion

Sep 13, 2022

Abstraction and Refinement: Towards Scalable and Exact Verification of Neural Networks

Jul 02, 2022

Towards Understanding and Mitigating Audio Adversarial Examples for Speaker Recognition

Jun 07, 2022

AS2T: Arbitrary Source-To-Target Adversarial Attack on Speaker Recognition Systems

Jun 07, 2022

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning

Oct 25, 2021

SEC4SR: A Security Analysis Platform for Speaker Recognition

Sep 04, 2021

Attack as Defense: Characterizing Adversarial Examples using Robustness

Mar 13, 2021