Masashi Sugiyama

Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients
Apr 07, 2022
Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

On the Effectiveness of Adversarial Training against Backdoor Attacks
Feb 22, 2022
Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Adversarial Attacks and Defense for Non-Parametric Two-Sample Tests
Feb 07, 2022
Xilie Xu, Jingfeng Zhang, Feng Liu, Masashi Sugiyama, Mohan Kankanhalli

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification
Feb 01, 2022
Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama

Towards Adversarially Robust Deep Image Denoising
Jan 13, 2022
Hanshu Yan, Jingfeng Zhang, Jiashi Feng, Masashi Sugiyama, Vincent Y. F. Tan

Learning with Proper Partial Labels
Dec 23, 2021
Zhenguo Wu, Masashi Sugiyama

Rethinking Importance Weighting for Transfer Learning
Dec 19, 2021
Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach
Sep 29, 2021
Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

Positive-Unlabeled Classification under Class-Prior Shift: A Prior-invariant Approach Based on Density Ratio Estimation
Aug 17, 2021
Shota Nakajima, Masashi Sugiyama

Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences
Jul 16, 2021
Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama