Yuanshun Yao

Rethinking Machine Unlearning for Large Language Models

Feb 15, 2024

Human-Instruction-Free LLM Self-Alignment with Limited Samples

Jan 06, 2024

Large Language Model Unlearning

Oct 14, 2023

Fair Classifiers that Abstain without Harm

Oct 09, 2023

Trustworthy LLMs: a Survey and Guideline for Evaluating Large Language Models' Alignment

Aug 10, 2023

Understanding Unfairness via Training Concept Influence

Jun 30, 2023

Label Inference Attack against Split Learning under Regression Setting

Jan 18, 2023

Learning to Counterfactually Explain Recommendations

Nov 17, 2022

Evaluating Fairness Without Sensitive Attributes: A Framework Using Only Auxiliary Models

Oct 06, 2022

DPAUC: Differentially Private AUC Computation in Federated Learning

Aug 25, 2022