Masashi Sugiyama

RIKEN AIP / The University of Tokyo

Unlearning with Control: Assessing Real-world Utility for Large Language Model Unlearning

Jun 13, 2024

Decoupling the Class Label and the Target Concept in Machine Unlearning

Jun 12, 2024

Slight Corruption in Pre-training Data Makes Better Diffusion Models

May 30, 2024

Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization

May 29, 2024

Multi-Player Approaches for Dueling Bandits

May 25, 2024

Offline Reinforcement Learning from Datasets with Structured Non-Stationarity

May 23, 2024

Balancing Similarity and Complementarity for Federated Learning

May 16, 2024

Leveraging Domain-Unlabeled Data in Offline Reinforcement Learning across Two Domains

Apr 11, 2024

Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training

Apr 09, 2024

Reinforcement Learning with Options and State Representation

Mar 25, 2024