Tom Goldstein

GradInit: Learning to Initialize Neural Networks for Stable and Efficient Training

Feb 16, 2021
Chen Zhu, Renkun Ni, Zheng Xu, Kezhi Kong, W. Ronny Huang, Tom Goldstein

Technical Challenges for Training Fair Neural Networks

Feb 12, 2021
Valeriia Cherepanova, Vedant Nanda, Micah Goldblum, John P. Dickerson, Tom Goldstein

LowKey: Leveraging Adversarial Attacks to Protect Social Media Users from Facial Recognition

Jan 25, 2021
Valeriia Cherepanova, Micah Goldblum, Harrison Foley, Shiyuan Duan, John Dickerson, Gavin Taylor, Tom Goldstein

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

Dec 30, 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

Data Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses

Dec 18, 2020
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry, Bo Li, Tom Goldstein

Analyzing the Machine Learning Conference Review Process

Nov 26, 2020
David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff

Nov 18, 2020
Eitan Borgnia, Valeriia Cherepanova, Liam Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta

An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process

Oct 26, 2020
David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Are Adversarial Examples Created Equal? A Learnable Weighted Minimax Risk for Robustness under Non-uniform Attacks

Oct 24, 2020
Huimin Zeng, Chen Zhu, Tom Goldstein, Furong Huang
