Martine De Cock

Privacy-Preserving Fair Item Ranking

Mar 06, 2023
Jia Ao Sun, Sikha Pentyala, Martine De Cock, Golnoosh Farnadi

Users worldwide access massive amounts of curated data in the form of rankings on a daily basis. The societal impact of this ease of access has been studied, and work has been done to propose and enforce various notions of fairness in rankings. Current computational methods for fair item ranking rely on disclosing user data to a centralized server, which gives rise to privacy concerns for the users. This work is the first to advance research at the intersection of producer (item) fairness and consumer (user) privacy in rankings by exploring the incorporation of privacy-preserving techniques, specifically differential privacy and secure multi-party computation. Our work extends the equity-of-amortized-attention ranking mechanism to be privacy-preserving, and we evaluate its effects with respect to privacy, fairness, and ranking quality. Our results on real-world datasets show that we can effectively preserve the privacy of users and mitigate unfairness of items without additional sacrifices to the quality of rankings in comparison to the ranking mechanism in the clear.
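
For intuition, the following is a minimal, in-the-clear sketch of a greedy variant of the amortized-attention idea: items that have received less attention than their accumulated relevance warrants are promoted in later rankings. The geometric attention model and all names are illustrative assumptions; the paper's actual contribution, running such a mechanism under differential privacy and MPC, is not shown here.

    import numpy as np

    # Geometric position-bias model: attention decays with rank position.
    def position_attention(n_items, p=0.5):
        return np.array([p ** k for k in range(n_items)])

    def fair_rerank(relevance, acc_attention, acc_relevance):
        # Greedy amortized fairness: rank items by their attention "deficit",
        # i.e. how much attention they are owed given their accumulated relevance.
        deficit = (acc_relevance + relevance) - acc_attention
        return np.argsort(-deficit)

    rng = np.random.default_rng(0)
    n_items, att = 5, position_attention(5)
    acc_att, acc_rel = np.zeros(n_items), np.zeros(n_items)
    for _ in range(200):                     # a stream of ranking requests
        rel = rng.random(n_items)            # per-request relevance scores
        order = fair_rerank(rel, acc_att, acc_rel)
        acc_att[order] += att                # item at position k receives att[k]
        acc_rel += rel
    print(acc_att / acc_rel)                 # roughly equal ratios => equitable attention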

Secure Multiparty Computation for Synthetic Data Generation from Distributed Data

Oct 13, 2022
Mayana Pereira, Sikha Pentyala, Anderson Nascimento, Rafael T. de Sousa Jr., Martine De Cock

Legal and ethical restrictions on accessing relevant data inhibit data science research in critical domains such as health, finance, and education. Synthetic data generation algorithms with privacy guarantees are emerging as a paradigm to break this data logjam. Existing approaches, however, assume that the data holders supply their raw data to a trusted curator, who uses it as fuel for synthetic data generation. This severely limits their applicability, as much of the valuable data in the world is locked up in silos, controlled by entities who cannot show their data to each other or to a central aggregator without raising privacy concerns. To overcome this roadblock, we propose the first solution in which data holders only share encrypted data for differentially private synthetic data generation. Data holders send shares to servers that perform Secure Multiparty Computation (MPC) while the original data stays encrypted. We instantiate this idea in an MPC protocol for the Multiplicative Weights with Exponential Mechanism (MWEM) algorithm to generate synthetic data based on real data originating from many data holders, without reliance on a single point of failure.
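
As a reference point, here is a minimal in-the-clear sketch of the MWEM loop (uniform start, exponential-mechanism query selection, Laplace measurement, multiplicative-weights update) over a small discrete domain. The privacy budget split and the query class are illustrative assumptions, and the paper's MPC protocol for running this on encrypted, distributed data is not shown.

    import numpy as np

    rng = np.random.default_rng(1)

    def mwem(true_hist, queries, epsilon, T):
        # true_hist: histogram of the real data over a discrete domain.
        # queries: list of 0/1 vectors encoding linear counting queries.
        n = true_hist.sum()
        synth = np.full(len(true_hist), n / len(true_hist))   # uniform initial distribution
        eps_round = epsilon / (2 * T)                          # split budget across rounds
        for _ in range(T):
            # Exponential mechanism: prefer queries where the synthetic data errs most.
            errors = np.array([abs(q @ true_hist - q @ synth) for q in queries])
            probs = np.exp(eps_round * errors / 2)
            i = rng.choice(len(queries), p=probs / probs.sum())
            # Laplace mechanism: noisy measurement of the chosen query on the real data.
            answer = queries[i] @ true_hist + rng.laplace(scale=1 / eps_round)
            # Multiplicative weights: nudge the synthetic distribution toward the answer.
            synth *= np.exp(queries[i] * (answer - queries[i] @ synth) / (2 * n))
            synth *= n / synth.sum()
        return synth

    hist = np.array([40, 10, 30, 20], dtype=float)
    queries = [np.eye(4)[j] for j in range(4)]                 # one counting query per bin
    print(mwem(hist, queries, epsilon=1.0, T=8))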

PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning

May 23, 2022
Sikha Pentyala, Nicola Neophytou, Anderson Nascimento, Martine De Cock, Golnoosh Farnadi

Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values.
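
A toy illustration of the underlying idea, assuming two non-colluding computing servers: each client secret-shares its per-group counts, the servers only ever see uniformly random shares, and only the aggregate needed for the fairness statistic is reconstructed. The paper's actual MPC protocols and the DP noise added before release are not shown here.

    import numpy as np

    rng = np.random.default_rng(2)
    P = 2**31 - 1            # public modulus for additive secret sharing

    def share(x, n_servers=2):
        # Split an integer array into additive shares modulo P.
        shares = [rng.integers(0, P, size=x.shape) for _ in range(n_servers - 1)]
        shares.append((x - sum(shares)) % P)
        return shares

    # Each client holds a sensitive group (0/1) and the model's prediction (0/1) for itself.
    clients = [(rng.integers(0, 2), rng.integers(0, 2)) for _ in range(1000)]

    agg = [np.zeros((2, 2), dtype=np.int64) for _ in range(2)]   # one accumulator per server
    for group, pred in clients:
        contrib = np.zeros((2, 2), dtype=np.int64)
        contrib[group, 0] = 1        # count of clients in this group
        contrib[group, 1] = pred     # positive predictions in this group
        s0, s1 = share(contrib)
        agg[0] = (agg[0] + s0) % P
        agg[1] = (agg[1] + s1) % P

    # Only the aggregate is reconstructed; individual contributions stay hidden.
    totals = (agg[0] + agg[1]) % P
    rates = totals[:, 1] / totals[:, 0]
    print("demographic parity gap:", abs(rates[0] - rates[1]))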

PrivFair: a Library for Privacy-Preserving Fairness Auditing

Feb 09, 2022
Sikha Pentyala, David Melanson, Martine De Cock, Golnoosh Farnadi

Machine learning (ML) has become prominent in applications that directly affect people's quality of life, including in healthcare, justice, and finance. ML models have been found to exhibit discrimination based on sensitive attributes such as gender, race, or disability. Assessing whether an ML model is free of bias remains challenging, and by definition it has to be done with sensitive user characteristics that are the subject of anti-discrimination and data protection law. Existing libraries for fairness auditing of ML models offer no mechanism to protect the privacy of the audit data. We present PrivFair, a library for privacy-preserving fairness audits of ML models. Through the use of Secure Multiparty Computation (MPC), PrivFair protects the confidentiality of the model under audit and of the sensitive data used for the audit; hence it supports scenarios in which a proprietary classifier owned by a company is audited using sensitive audit data from an external investigator. We demonstrate the use of PrivFair for group fairness auditing with tabular data or image data, without requiring the investigator to disclose their data to anyone in an unencrypted manner, or the model owner to reveal their model parameters to anyone in plaintext.
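
The statistics such an audit needs are simple once the (normally encrypted) inputs are available. The in-the-clear sketch below computes per-group selection rates and true positive rates, the building blocks of demographic parity and equal opportunity; in PrivFair these quantities would be computed on secret-shared predictions and attributes under MPC, and the function and random data here are purely illustrative.

    import numpy as np

    def group_fairness_audit(y_true, y_pred, sensitive):
        # Per-group selection rate (demographic parity) and true positive rate
        # (equal opportunity); under MPC the same sums and divisions are
        # computed on secret-shared values.
        report = {}
        for g in np.unique(sensitive):
            in_g = sensitive == g
            report[int(g)] = {
                "selection_rate": float(y_pred[in_g].mean()),
                "tpr": float(y_pred[in_g & (y_true == 1)].mean()),
            }
        return report

    rng = np.random.default_rng(3)
    y_true = rng.integers(0, 2, 500)
    y_pred = rng.integers(0, 2, 500)
    sensitive = rng.integers(0, 2, 500)
    print(group_fairness_audit(y_true, y_pred, sensitive))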

Training Differentially Private Models with Secure Multiparty Computation

Feb 05, 2022
Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson Nascimento, Martine De Cock

We address the problem of learning a machine learning model from training data that originates at multiple data owners while providing formal privacy guarantees regarding the protection of each owner's data. Existing solutions based on Differential Privacy (DP) achieve this at the cost of a drop in accuracy. Solutions based on Secure Multiparty Computation (MPC) do not incur such accuracy loss but leak information when the trained model is made publicly available. We propose an MPC solution for training DP models. Our solution relies on an MPC protocol for model training, and an MPC protocol for perturbing the trained model coefficients with Laplace noise in a privacy-preserving manner. The resulting MPC+DP approach achieves higher accuracy than a pure DP approach while providing the same formal privacy guarantees. Our work obtained first place in the iDASH2021 Track III competition on confidential computing for secure genome analysis.
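
A plaintext sketch of the output-perturbation recipe: train a regularized model, then add Laplace noise to its coefficients before releasing them. In the paper both steps run inside MPC, so the unperturbed coefficients never exist in the clear; here everything is in the clear for illustration, and the sensitivity bound is an assumed input (for regularized logistic regression it can be derived from the dataset size, the regularization strength, and the feature norm).

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    def dp_release(X, y, epsilon, sensitivity, seed=0):
        # Train in the clear, then apply the Laplace mechanism to the coefficients.
        model = LogisticRegression(max_iter=1000).fit(X, y)
        rng = np.random.default_rng(seed)
        scale = sensitivity / epsilon
        noisy_coef = model.coef_ + rng.laplace(scale=scale, size=model.coef_.shape)
        noisy_intercept = model.intercept_ + rng.laplace(scale=scale, size=model.intercept_.shape)
        return noisy_coef, noisy_intercept

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    coef, intercept = dp_release(X, y, epsilon=1.0, sensitivity=0.01)  # assumed sensitivity bound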

Privacy-Preserving Training of Tree Ensembles over Continuous Data

Jun 05, 2021
Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen

Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard "in the clear" algorithm to grow decision trees on data with continuous values requires sorting the training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC, hence finding secure protocols that avoid such an expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision tree based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees ("extra-trees") on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive secret sharing based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies on par with those obtained in the clear. This makes our solution orders of magnitude more efficient than existing approaches, which are based on oblivious sorting.
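
The key observation behind approach (3) is that extremely randomized trees never need to sort feature values: a cut-point is drawn uniformly at random between the feature's minimum and maximum, which only requires operations that are cheap on secret-shared data. Below is a plaintext sketch of that split step; the helper names, the number of candidate features, and the random data are illustrative assumptions.

    import numpy as np

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def extra_trees_split(X, y, n_candidates=3, rng=None):
        # For a few random features, draw ONE random threshold between min and max
        # instead of sorting the examples to search for the best cut-point.
        rng = rng or np.random.default_rng(0)
        best = None
        for j in rng.choice(X.shape[1], size=min(n_candidates, X.shape[1]), replace=False):
            t = rng.uniform(X[:, j].min(), X[:, j].max())
            left, right = y[X[:, j] <= t], y[X[:, j] > t]
            if len(left) == 0 or len(right) == 0:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, int(j), float(t))
        return best   # (weighted Gini impurity, feature index, threshold)

    rng = np.random.default_rng(5)
    X, y = rng.random((200, 6)), rng.integers(0, 2, 200)
    print(extra_trees_split(X, y, rng=rng))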

Privacy-Preserving Feature Selection with Secure Multiparty Computation

Feb 06, 2021
Xiling Li, Rafael Dowsley, Martine De Cock

Existing work on privacy-preserving machine learning with Secure Multiparty Computation (MPC) is almost exclusively focused on model training and on inference with trained models, thereby overlooking the important data pre-processing stage. In this work, we propose the first MPC-based protocol for private feature selection based on the filter method, which is independent of model training and can be used in combination with any MPC protocol to rank features. To this end, we propose an efficient feature scoring protocol based on Gini impurity. To demonstrate the feasibility of our approach for practical data science, we perform experiments with the proposed MPC protocols for feature selection in a commonly used machine-learning-as-a-service configuration where computations are outsourced to multiple servers, with semi-honest and with malicious adversaries. Regarding effectiveness, we show that secure feature selection with the proposed protocols improves the accuracy of classifiers on a variety of real-world data sets, without leaking information about the feature values or even which features were selected. Regarding efficiency, we document runtimes ranging from several seconds to an hour for our protocols to finish, depending on the size of the data set and the security settings.
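
In the clear, the filter-method score boils down to the following: for each (discretized) feature, compute the weighted Gini impurity of the labels after grouping by that feature's values, with lower impurity meaning a more informative feature. The sketch below is an illustrative plaintext counterpart of what the MPC protocol computes on secret-shared data; the function names and toy data are assumptions.

    import numpy as np

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def gini_feature_ranking(X, y):
        # Weighted Gini impurity of the labels after grouping by each feature's
        # values; features are ranked from most informative (lowest impurity) to least.
        scores = []
        for j in range(X.shape[1]):
            values, idx = np.unique(X[:, j], return_inverse=True)
            score = sum((idx == v).mean() * gini(y[idx == v]) for v in range(len(values)))
            scores.append(score)
        return np.argsort(scores)

    rng = np.random.default_rng(6)
    X = rng.integers(0, 4, size=(300, 5))      # already-discretized features
    y = (X[:, 2] > 1).astype(int)              # feature 2 is perfectly informative
    print(gini_feature_ranking(X, y))          # feature 2 should be ranked first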

Privacy-Preserving Video Classification with Convolutional Neural Networks

Feb 06, 2021
Sikha Pentyala, Rafael Dowsley, Martine De Cock

Many video classification applications require access to personal data, thereby posing a risk to the users' privacy. We propose a privacy-preserving implementation of single-frame video classification with convolutional neural networks that allows a party to infer a label from a video without requiring the video owner to disclose their video to other entities in an unencrypted manner. Similarly, our approach removes the requirement for the classifier owner to reveal their model parameters to outside entities in plaintext. To this end, we combine existing Secure Multi-Party Computation (MPC) protocols for private image classification with our novel MPC protocols for oblivious single-frame selection and secure label aggregation across frames. The result is an end-to-end privacy-preserving video classification pipeline. We evaluate our proposed solution in an application for private human emotion recognition. Our results across a variety of security settings, spanning honest and dishonest majority configurations of the computing parties, and for both passive and active adversaries, demonstrate that videos can be classified with state-of-the-art accuracy and without leaking sensitive user information.
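
For reference, this is what the single-frame pipeline looks like in the clear: pick a few frames, classify each with an image model, and aggregate the per-frame class scores into one video-level label. In the paper, frame selection is done obliviously and both the frames and the model weights stay secret-shared under MPC; the dummy classifier and random frames below are illustrative assumptions.

    import numpy as np

    def classify_video(frames, image_classifier, n_frames=8, rng=None):
        # Single-frame method: classify sampled frames independently, then
        # aggregate their class scores (here by averaging) into one label.
        rng = rng or np.random.default_rng(0)
        idx = rng.choice(len(frames), size=min(n_frames, len(frames)), replace=False)
        scores = np.stack([image_classifier(frames[i]) for i in idx])
        return int(scores.mean(axis=0).argmax())

    # Illustrative usage with a dummy 3-class "emotion" classifier.
    rng = np.random.default_rng(7)
    video = [rng.random((64, 64, 3)) for _ in range(30)]
    dummy_classifier = lambda frame: np.eye(3)[int(frame.sum()) % 3]
    print(classify_video(video, dummy_classifier, rng=rng))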

Private Speech Characterization with Secure Multiparty Computation

Jul 01, 2020
Kyle Bittner, Martine De Cock, Rafael Dowsley

Deep learning in audio signal processing, such as human voice audio signal classification, is a rich application area of machine learning. Legitimate use cases include voice authentication, gunfire detection, and emotion recognition. While there are clear advantages to automated human speech classification, application developers can gain knowledge beyond the professed scope from unprotected audio signal processing. In this paper we propose the first privacy-preserving solution for deep learning-based audio classification that is provably secure. Our approach, which is based on Secure Multiparty Computation, makes it possible to classify a speech signal of one party (Alice) with a deep neural network of another party (Bob) without Bob ever seeing Alice's speech signal in an unencrypted manner. As threat models, we consider both passive security, i.e. with semi-honest parties who follow the instructions of the cryptographic protocols, and active security, i.e. with malicious parties who deviate from the protocols. We evaluate the efficiency-security-accuracy trade-off of the proposed solution in a use case for privacy-preserving emotion detection from speech with a convolutional neural network. In the semi-honest case we can classify a speech signal in under 0.3 sec; in the malicious case it takes about 1.6 sec. In both cases there is no leakage of information, and we achieve classification accuracies that are the same as when computations are done on unencrypted data.
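
At the core of such MPC solutions is additive secret sharing: each value of Alice's speech signal is split into random-looking shares held by different parties, and linear operations can be evaluated locally on the shares. The sketch below is a toy illustration with an assumed public modulus; the interactive protocols needed for the CNN's non-linear layers (and for secure multiplications in general) are not shown.

    import numpy as np

    rng = np.random.default_rng(8)
    Q = 2**31 - 1        # public modulus; real MPC engines typically work in a ring mod 2^k

    def share(x):
        # Each share on its own is uniformly random and reveals nothing about x.
        r = rng.integers(0, Q, size=x.shape)
        return r, (x - r) % Q

    def reconstruct(a, b):
        return (a + b) % Q

    # Fixed-point encoded speech features; the servers only ever see shares of them.
    speech = ((rng.standard_normal(128) * 2**10).astype(np.int64)) % Q
    s0, s1 = share(speech)

    # Additions and multiplications by public constants are computed locally on shares;
    # non-linear layers (ReLU, max) require interactive MPC protocols instead.
    y0, y1 = (3 * s0) % Q, (3 * s1) % Q
    assert np.array_equal(reconstruct(y0, y1), (3 * speech) % Q)
    print("individual shares reveal nothing, but the result reconstructs correctly")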

Inline Detection of DGA Domains Using Side Information

Mar 12, 2020
Raaghavi Sivaguru, Jonathan Peck, Femi Olumofin, Anderson Nascimento, Martine De Cock

Malware applications typically use a command and control (C&C) server to manage bots to perform malicious activities. Domain Generation Algorithms (DGAs) are popular methods for generating pseudo-random domain names that can be used to establish communication between an infected bot and the C&C server. In recent years, machine learning-based systems have been widely used to detect DGAs. There are several well-known state-of-the-art classifiers in the literature that can detect DGA domain names in real-time applications with high predictive performance. However, these DGA classifiers are highly vulnerable to adversarial attacks in which adversaries purposely craft domain names to evade DGA detection classifiers. In our work, we focus on hardening DGA classifiers against adversarial attacks. To this end, we train and evaluate state-of-the-art deep learning and random forest (RF) classifiers for DGA detection using side information that is harder for adversaries to manipulate than the domain name itself. Additionally, the side information features are selected such that they are easily obtainable in practice to perform inline DGA detection. The performance and robustness of these models are assessed by exposing them to one day of real-traffic data as well as domains generated by adversarial attack algorithms. We found that the DGA classifiers that rely on both the domain name and side information have high performance and are more robust against adversaries.
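
For a sense of the setup, here is an illustrative sketch (the feature names, side-information values, and the tiny dataset are made up): lexical features computed from the domain name are concatenated with side-information features that an adversary cannot easily control, and a random forest is trained on the result.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def lexical_features(domain):
        # Simple features derived from the domain name itself.
        name = domain.split(".")[0]
        vowel_ratio = sum(c in "aeiou" for c in name) / max(len(name), 1)
        digit_count = sum(c.isdigit() for c in name)
        return [len(name), vowel_ratio, digit_count]

    def build_matrix(domains, side_info):
        # side_info rows: e.g. [registration age in days, #resolving IPs, query volume]
        return np.array([lexical_features(d) + list(s) for d, s in zip(domains, side_info)])

    domains = ["google.com", "x9f3k2lq.com", "wikipedia.org", "qzjw8r2v.net"]
    side_info = [[9000, 40, 1e6], [2, 1, 10], [8000, 30, 5e5], [1, 1, 3]]
    labels = [0, 1, 0, 1]                       # 0 = benign, 1 = DGA-generated

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(build_matrix(domains, side_info), labels)
    print(clf.predict(build_matrix(["k2j9x7qp.biz"], [[1, 1, 2]])))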
