Jingfeng Zhang
GAT: Guided Adversarial Training with Pareto-optimal Auxiliary Tasks

Feb 06, 2023
Salah Ghamizi, Jingfeng Zhang, Maxime Cordy, Mike Papadakis, Masashi Sugiyama, Yves Le Traon


Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks

Nov 01, 2022
Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama


FuncFooler: A Practical Black-box Attack Against Learning-based Binary Code Similarity Detection Methods

Aug 26, 2022
Lichen Jia, Bowen Tang, Chenggang Wu, Zhe Wang, Zihan Jiang, Yuanming Lai, Yan Kang, Ning Liu, Jingfeng Zhang


Accelerating Score-based Generative Models for High-Resolution Image Synthesis

Jun 10, 2022
Hengyuan Ma, Li Zhang, Xiatian Zhu, Jingfeng Zhang, Jianfeng Feng


Diverse Instance Discovery: Vision-Transformer for Instance-Aware Multi-Label Image Recognition

Apr 22, 2022
Yunqing Hu, Xuan Jin, Yin Zhang, Haiwen Hong, Jingfeng Zhang, Feihu Yan, Yuan He, Hui Xue


WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice

Mar 25, 2022
Yunjie Ge, Qian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen


On the Effectiveness of Adversarial Training against Backdoor Attacks

Feb 22, 2022
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama


Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests

Feb 07, 2022
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli


Towards Adversarially Robust Deep Image Denoising

Jan 13, 2022
Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan
