Masashi Sugiyama

Understanding the Interaction of Adversarial Training with Noisy Labels

Feb 09, 2021
Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

Learning Diverse-Structured Networks for Adversarial Robustness

Feb 08, 2021
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization

Feb 04, 2021
Yivan Zhang, Gang Niu, Masashi Sugiyama

Provably End-to-end Label-Noise Learning without Anchor Points

Feb 04, 2021
Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification

Feb 01, 2021
Shida Lei, Nan Lu, Gang Niu, Issei Sato, Masashi Sugiyama

Source-free Domain Adaptation via Distributional Alignment by Matching Batch Normalization Statistics

Jan 19, 2021
Masato Ishii, Masashi Sugiyama

A Symmetric Loss Perspective of Reliable Machine Learning

Jan 05, 2021
Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama

Combinatorial Pure Exploration with Full-bandit Feedback and Beyond: Solving Combinatorial Optimization under Uncertainty with Limited Observation

Dec 31, 2020
Yuko Kuroki, Junya Honda, Masashi Sugiyama

On Focal Loss for Class-Posterior Probability Estimation: A Theoretical Perspective

Dec 14, 2020
Nontawat Charoenphakdee, Jayakorn Vongkulbhisal, Nuttapong Chairatanakul, Masashi Sugiyama
