Adam Noack

Identifying Adversarial Attacks on Text Classifiers

Jan 21, 2022
Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd

Does Interpretability of Neural Networks Imply Adversarial Robustness?

Dec 07, 2019
Adam Noack, Isaac Ahern, Dejing Dou, Boyang Li

NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks

Oct 15, 2019
Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Li, Jun Huan
