A Framework for Evaluating Gradient Leakage Attacks in Federated Learning

Apr 23, 2020
Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu

TOG: Targeted Adversarial Objectness Gradient Attacks on Real-time Object Detection Systems

Apr 09, 2020
Ka-Ho Chow, Ling Liu, Mehmet Emre Gursoy, Stacey Truex, Wenqi Wei, Yanzhao Wu

Cross-Layer Strategic Ensemble Defense Against Adversarial Examples

Oct 01, 2019
Wenqi Wei, Ling Liu, Margaret Loper, Ka-Ho Chow, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu

* To appear in IEEE ICNC 2020 

Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness

Aug 29, 2019
Ling Liu, Wenqi Wei, Ka-Ho Chow, Margaret Loper, Mehmet Emre Gursoy, Stacey Truex, Yanzhao Wu

* To appear in IEEE MASS 2019 

Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks

Aug 21, 2019
Ka-Ho Chow, Wenqi Wei, Yanzhao Wu, Ling Liu

Demystifying Learning Rate Policies for High Accuracy Training of Deep Neural Networks

Aug 18, 2019
Yanzhao Wu, Ling Liu, Juhyun Bae, Ka-Ho Chow, Arun Iyengar, Calton Pu, Wenqi Wei, Lei Yu, Qi Zhang

A Comparative Measurement Study of Deep Learning as a Service Framework

Oct 29, 2018
Yanzhao Wu, Ling Liu, Calton Pu, Wenqi Cao, Semih Sahin, Wenqi Wei, Qi Zhang

Adversarial Examples in Deep Learning: Characterization and Divergence

Oct 29, 2018
Wenqi Wei, Ling Liu, Stacey Truex, Lei Yu, Mehmet Emre Gursoy, Yanzhao Wu
