
Gang Niu

Tokyo Institute of Technology

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels

Jul 07, 2020

Parts-dependent Label Noise: Towards Instance-dependent Label Noise

Jun 14, 2020

Class2Simi: A New Perspective on Learning with Label Noise

Jun 14, 2020

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

Jun 14, 2020

Rethinking Importance Weighting for Deep Learning under Distribution Shift

Jun 08, 2020

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Feb 26, 2020

Do We Need Zero Training Loss After Achieving Zero Training Error?

Feb 20, 2020

Progressive Identification of True Labels for Partial-Label Learning

Feb 19, 2020

Multi-Class Classification from Noisy-Similarity-Labeled Data

Feb 16, 2020

Towards Mixture Proportion Estimation without Irreducibility

Feb 10, 2020