Mohammad Malekzadeh

Salted Inference: Enhancing Privacy while Maintaining Efficiency of Split Inference in Mobile Computing

Oct 20, 2023

Latent Masking for Multimodal Self-supervised Learning in Health Timeseries

Jul 31, 2023

Vicious Classifiers: Data Reconstruction Attack at Inference Time

Dec 08, 2022

Centaur: Federated Learning for Constrained Edge Devices

Nov 12, 2022

Efficient Hyperparameter Optimization for Differentially Private Deep Learning

Aug 09, 2021

Quantifying Information Leakage from Gradients

May 28, 2021

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs can be Secretly Coded into the Entropy of Classifiers' Outputs

May 25, 2021

Dopamine: Differentially Private Federated Learning on Medical Data

Jan 29, 2021

Layer-wise Characterization of Latent Information Leakage in Federated Learning

Oct 17, 2020

Running Neural Networks on the NIC

Sep 04, 2020