Olga Ohrimenko

Information Leakage from Data Updates in Machine Learning Models

Sep 20, 2023
Tian Hui, Farhad Farokhi, Olga Ohrimenko

Certified Robustness of Learning-based Static Malware Detectors

Jan 31, 2023
Zhuoqun Huang, Neil G. Marchant, Keane Lucas, Lujo Bauer, Olga Ohrimenko, Benjamin I. P. Rubinstein

DDoD: Dual Denial of Decision Attacks on Human-AI Teams

Dec 07, 2022
Benjamin Tag, Niels van Berkel, Sunny Verma, Benjamin Zi Hao Zhao, Shlomo Berkovsky, Dali Kaafar, Vassilis Kostakos, Olga Ohrimenko

Verifiable and Provably Secure Machine Unlearning

Oct 17, 2022
Thorsten Eisenhofer, Doreen Riepel, Varun Chandrasekaran, Esha Ghosh, Olga Ohrimenko, Nicolas Papernot

Protecting Global Properties of Datasets with Distribution Privacy Mechanisms

Jul 18, 2022
Michelle Chen, Olga Ohrimenko

Oblivious Sampling Algorithms for Private Data Analysis

Sep 28, 2020
Sajin Sasy, Olga Ohrimenko

Attribute Privacy: Framework and Mechanisms

Sep 08, 2020
Wanrong Zhang, Olga Ohrimenko, Rachel Cummings

Replication-Robust Payoff-Allocation with Applications in Machine Learning Marketplaces

Jun 25, 2020
Dongge Han, Shruti Tople, Alex Rogers, Michael Wooldridge, Olga Ohrimenko, Sebastian Tschiatschek

Dataset-Level Attribute Leakage in Collaborative Learning

Jun 12, 2020
Wanrong Zhang, Shruti Tople, Olga Ohrimenko

Analyzing Privacy Loss in Updates of Natural Language Models

Jan 14, 2020
Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella-Béguelin
