Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic

Nov 18, 2019
Zhen Xiang, David J. Miller, George Kesidis


Notes on Lipschitz Margin, Lipschitz Margin Training, and Lipschitz Margin p-Values for Deep Neural Network Classifiers

Oct 15, 2019
George Kesidis, David J. Miller


Revealing Backdoors, Post-Training, in DNN Classifiers via Novel Inference on Optimized Perturbations Inducing Group Misclassification

Aug 27, 2019
Zhen Xiang, David J. Miller, George Kesidis


Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks

May 13, 2019
David J. Miller, Zhen Xiang, George Kesidis


A Mixture Model Based Defense for Data Poisoning Attacks Against Naive Bayes Spam Filters

Oct 31, 2018
David J. Miller, Xinyi Hu, Zhen Xiang, George Kesidis


When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time

Jun 28, 2018
David J. Miller, Yujia Wang, George Kesidis


ATD: Anomalous Topic Discovery in High Dimensional Discrete Data

May 20, 2016
Hossein Soleimani, David J. Miller


Convex Analysis of Mixtures for Separating Non-negative Well-grounded Sources

Dec 10, 2015
Yitan Zhu, Niya Wang, David J. Miller, Yue Wang

* 15 pages, 9 figures, 2 tables 

Detecting Clusters of Anomalies on Low-Dimensional Feature Subsets with Application to Network Traffic Flow Data

Jun 10, 2015
Zhicong Qiu, David J. Miller, George Kesidis


Parsimonious Topic Models with Salient Word Discovery

Sep 11, 2014
Hossein Soleimani, David J. Miller

* IEEE Transactions on Knowledge and Data Engineering, 27 (2015) 824-837 
