L-RED: Efficient Post-Training Detection of Imperceptible Backdoor Attacks without Access to the Training Set
Oct 21, 2020
Zhen Xiang, David J. Miller, George Kesidis



Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Oct 15, 2020
Zhen Xiang, David J. Miller, George Kesidis



Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Nov 18, 2019
Zhen Xiang, David J. Miller, George Kesidis



Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification
Aug 27, 2019
Zhen Xiang, David J. Miller, George Kesidis



Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
May 13, 2019
David J. Miller, Zhen Xiang, George Kesidis



A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters
Oct 31, 2018
David J. Miller, Xinyi Hu, Zhen Xiang, George Kesidis

