Mohammad Mahmoody

Publicly Detectable Watermarking for Language Models

Oct 27, 2023
Jaiden Fairoze, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang

On Optimal Learning Under Targeted Data Poisoning

Oct 12, 2022
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran

Overparameterized (robust) models from computational constraints

Aug 27, 2022
Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Mingyuan Wang

Deletion Inference, Reconstruction, and Compliance in Machine (Un)Learning

Feb 07, 2022
Ji Gao, Sanjam Garg, Mohammad Mahmoody, Prashant Nalini Vasudevan

Learning and Certification under Instance-targeted Poisoning

May 18, 2021
Ji Gao, Amin Karbasi, Mohammad Mahmoody

An Attack on InstaHide: Is Private Learning Possible with Instance Encoding?

Nov 10, 2020
Nicholas Carlini, Samuel Deng, Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Shuang Song, Abhradeep Thakurta, Florian Tramer

Obliviousness Makes Poisoning Adversaries Weaker

Mar 26, 2020
Sanjam Garg, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta

Computational Concentration of Measure: Optimal Bounds, Reductions, and More

Jul 11, 2019
Omid Etesami, Saeed Mahloujifar, Mohammad Mahmoody

Lower Bounds for Adversarially Robust PAC Learning

Jun 13, 2019
Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody

Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness

May 29, 2019
Saeed Mahloujifar, Xiao Zhang, Mohammad Mahmoody, David Evans
