Fatemehsadat Mireshghallah

Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks

Mar 08, 2022
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, Reza Shokri

What Does it Mean for a Language Model to Preserve Privacy?

Feb 14, 2022
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr

UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis

Oct 01, 2021
Fatemehsadat Mireshghallah, Vaishnavi Shrivastava, Milad Shokouhi, Taylor Berg-Kirkpatrick, Robert Sim, Dimitrios Dimitriadis

Style Pooling: Automatic Text Style Obfuscation for Improved Classification Fairness

Sep 10, 2021
Fatemehsadat Mireshghallah, Taylor Berg-Kirkpatrick

Efficient Hyperparameter Optimization for Differentially Private Deep Learning

Aug 09, 2021
Aman Priyanshu, Rakshit Naidu, Fatemehsadat Mireshghallah, Mohammad Malekzadeh

Benchmarking Differential Privacy and Federated Learning for BERT Models

Jun 26, 2021
Priyam Basu, Tiasa Singha Roy, Rakshit Naidu, Zumrut Muftuoglu, Sahib Singh, Fatemehsadat Mireshghallah

When Differential Privacy Meets Interpretability: A Case Study

Jun 25, 2021
Rakshit Naidu, Aman Priyanshu, Aadith Kumar, Sasikanth Kotti, Haofan Wang, Fatemehsadat Mireshghallah

DP-SGD vs PATE: Which Has Less Disparate Impact on Model Accuracy?

Jun 22, 2021
Archit Uniyal, Rakshit Naidu, Sasikanth Kotti, Sahib Singh, Patrik Joslin Kenfack, Fatemehsadat Mireshghallah, Andrew Trask

Privacy Regularization: Joint Privacy-Utility Optimization in Language Models

Mar 12, 2021
Fatemehsadat Mireshghallah, Huseyin A. Inan, Marcello Hasegawa, Victor Rühle, Taylor Berg-Kirkpatrick, Robert Sim
