Jamie Hayes

Learning to be adversarially robust and differentially private
Jan 06, 2022
Jamie Hayes, Borja Balle, M. Pawan Kumar

Towards transformation-resilient provenance detection of digital media
Nov 14, 2020
Jamie Hayes, Krishnamurthy Dvijotham, Yutian Chen, Sander Dieleman, Pushmeet Kohli, Norman Casagrande

Adaptive Traffic Fingerprinting: Large-scale Inference under Realistic Assumptions
Oct 19, 2020
Vasilios Mavroudis, Jamie Hayes

Toward Robustness and Privacy in Federated Learning: Experimenting with Local and Central Differential Privacy
Sep 08, 2020
Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro

Provable trade-offs between private & robust machine learning
Jun 08, 2020
Jamie Hayes

Extensions and limitations of randomized smoothing for robustness guarantees
Jun 07, 2020
Jamie Hayes

Unique properties of adversarially trained linear classifiers on Gaussian data
Jun 06, 2020
Jamie Hayes

Contamination Attacks and Mitigation in Multi-Party Machine Learning
Jan 08, 2019
Jamie Hayes, Olga Ohrimenko

A note on hyperparameters in black-box adversarial examples
Nov 15, 2018
Jamie Hayes

Evading classifiers in discrete domains with provable optimality guarantees
Oct 25, 2018
Bogdan Kulynych, Jamie Hayes, Nikita Samarin, Carmela Troncoso