Lalana Kagal

Massachusetts Institute of Technology

Investigating Model Editing for Unlearning in Large Language Models

Dec 23, 2025

Towards Resource Efficient and Interpretable Bias Mitigation in Large Language Models

Dec 02, 2024

Multi-VFL: A Vertical Federated Learning System for Multiple Data and Label Owners

Jun 17, 2021

Bias-Free FedGAN

Mar 17, 2021

Investigating Bias in Image Classification using Model Explanations

Dec 10, 2020

DPD-InfoGAN: Differentially Private Distributed InfoGAN

Oct 24, 2020

BlockFLow: An Accountable and Privacy-Preserving Solution for Federated Learning

Jul 08, 2020

PrivacyFL: A simulator for privacy-preserving and secure federated learning

Feb 19, 2020

Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning

Jun 04, 2018

Iterative Orthogonal Feature Projection for Diagnosing Bias in Black-Box Models

Nov 15, 2016