
Matthew Jagielski

Extracting Training Data from Large Language Models


Dec 14, 2020
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, Colin Raffel


Subpopulation Data Poisoning Attacks


Jun 24, 2020
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea


Auditing Differentially Private Machine Learning: How Private is Private SGD?


Jun 13, 2020
Matthew Jagielski, Jonathan Ullman, Alina Oprea


Cryptanalytic Extraction of Neural Network Models


Mar 10, 2020
Nicholas Carlini, Matthew Jagielski, Ilya Mironov


High-Fidelity Extraction of Neural Network Models


Sep 03, 2019
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, Nicolas Papernot


Differentially Private Fair Learning


Dec 06, 2018
Matthew Jagielski, Michael Kearns, Jieming Mao, Alina Oprea, Aaron Roth, Saeed Sharifi-Malvajerdi, Jonathan Ullman


On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks


Sep 08, 2018
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli


Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning


Apr 01, 2018
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li

* Preprint of work accepted for publication at the 39th IEEE Symposium on Security and Privacy, San Francisco, CA, USA, May 21-23, 2018
