Hyoungshick Kim

Blind-Match: Efficient Homomorphic Encryption-Based 1:N Matching for Privacy-Preserving Biometric Identification

Aug 12, 2024

Expectations Versus Reality: Evaluating Intrusion Detection Systems in Practice

Mar 28, 2024

Single-Class Target-Specific Attack against Interpretable Deep Learning Systems

Jul 12, 2023

Tracking Dataset IP Use in Deep Neural Networks

Nov 24, 2022

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World

Jan 21, 2022

Evaluation and Optimization of Distributed Machine Learning Techniques for Internet of Things

Mar 03, 2021

DeepiSign: Invisible Fragile Watermark to Protect the Integrity and Authenticity of CNN

Jan 12, 2021

Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks

Oct 08, 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review

Aug 02, 2020

DeepCapture: Image Spam Detection Using Deep Learning and Data Augmentation

Jun 16, 2020