Martin Pawelczyk

Towards Non-Adversarial Algorithmic Recourse

Mar 15, 2024
Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci

In-Context Unlearning: Language Models as Few Shot Unlearners

Oct 12, 2023
Martin Pawelczyk, Seth Neel, Himabindu Lakkaraju

Gaussian Membership Inference Privacy

Jun 12, 2023
Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

On the Privacy Risks of Algorithmic Recourse

Nov 10, 2022
Martin Pawelczyk, Himabindu Lakkaraju, Seth Neel

Decomposing Counterfactual Explanations for Consequential Decision Making

Nov 03, 2022
Martin Pawelczyk, Lea Tiyavorabun, Gjergji Kasneci

I Prefer not to Say: Operationalizing Fair and User-guided Data Minimization

Nov 01, 2022
Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci

Language Models are Realistic Tabular Data Generators

Oct 12, 2022
Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci

On the Trade-Off between Actionable Explanations and the Right to be Forgotten

Aug 30, 2022
Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci

OpenXAI: Towards a Transparent Evaluation of Model Explanations

Jun 22, 2022
Chirag Agarwal, Eshika Saxena, Satyapriya Krishna, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
