Maziar Sanjabi

Text-To-Concept (and Back) via Cross-Model Alignment

May 10, 2023
Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi

Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning

Apr 04, 2023
Ajinkya Tejankar, Maziar Sanjabi, Qifan Wang, Sinong Wang, Hamed Firooz, Hamed Pirsiavash, Liang Tan

Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano

Oct 24, 2022
Chuan Guo, Alexandre Sablayrolles, Maziar Sanjabi

Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning

Oct 14, 2022
John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi, Michael Rabbat

FRAME: Evaluating Simulatability Metrics for Free-Text Rationales

Jul 02, 2022
Aaron Chan, Shaoliang Nie, Liang Tan, Xiaochang Peng, Hamed Firooz, Maziar Sanjabi, Xiang Ren

Where to Begin? Exploring the Impact of Pre-Training and Initialization in Federated Learning

Jun 30, 2022
John Nguyen, Kshitiz Malik, Maziar Sanjabi, Michael Rabbat

ER-TEST: Evaluating Explanation Regularization Methods for NLP Models

May 25, 2022
Brihi Joshi, Aaron Chan, Ziyi Liu, Shaoliang Nie, Maziar Sanjabi, Hamed Firooz, Xiang Ren

FedShuffle: Recipes for Better Use of Local Work in Federated Learning

Apr 27, 2022
Samuel Horváth, Maziar Sanjabi, Lin Xiao, Peter Richtárik, Michael Rabbat

Detection, Disambiguation, Re-ranking: Autoregressive Entity Linking as a Multi-Task Problem

Apr 12, 2022
Khalil Mrini, Shaoliang Nie, Jiatao Gu, Sinong Wang, Maziar Sanjabi, Hamed Firooz

Federated Learning with Partial Model Personalization

Apr 08, 2022
Krishna Pillutla, Kshitiz Malik, Abdelrahman Mohamed, Michael Rabbat, Maziar Sanjabi, Lin Xiao
