Takashi Ishida

Learning with Complementary Labels Revisited: A Consistent Approach via Negative-Unlabeled Learning
Nov 27, 2023
Wei Wang, Takashi Ishida, Yu-Jie Zhang, Gang Niu, Masashi Sugiyama

Flooding Regularization for Stable Training of Generative Adversarial Networks
Nov 01, 2023
Iu Yahiro, Takashi Ishida, Naoto Yokoya
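
The title suggests applying the flooding regularizer from the group's earlier work (see the Feb 2020 entry below) to GAN training, plausibly to stop the discriminator from driving its loss to zero and destabilizing the generator's gradients. A minimal sketch under that assumption; the networks D and G, the flood level b_d = 0.3, and the loss choice are illustrative, not taken from the paper:

    import torch
    import torch.nn.functional as F

    def discriminator_step(D, G, real, z, opt_d, b_d=0.3):
        # Ordinary discriminator loss: real vs. generated samples.
        real_logits = D(real)
        fake_logits = D(G(z).detach())
        d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
                  + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
        # Flooding: |L - b| + b. Below the flood level the gradient flips
        # sign, so D's loss hovers near b_d instead of collapsing to zero.
        flooded_loss = (d_loss - b_d).abs() + b_d
        opt_d.zero_grad()
        flooded_loss.backward()
        opt_d.step()
        return d_loss.item()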

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification
Feb 01, 2022
Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama
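
In binary classification the Bayes error, the best achievable test error, equals E_x[ min(eta(x), 1 - eta(x)) ] with eta(x) = P(Y = 1 | X = x). My reading of the title is that the paper estimates this quantity directly from instance-level soft labels rather than from a trained classifier; a minimal sketch under that reading, with hypothetical soft labels:

    import numpy as np

    def bayes_error_estimate(soft_labels):
        # soft_labels[i] approximates eta(x_i) = P(Y = 1 | X = x_i),
        # e.g. an average of annotator votes. Plugging them into
        # E[min(eta, 1 - eta)] gives a direct, classifier-free estimate.
        c = np.asarray(soft_labels, dtype=float)
        return float(np.mean(np.minimum(c, 1.0 - c)))

    # Three confident points and one ambiguous one:
    print(bayes_error_estimate([0.02, 0.97, 0.99, 0.55]))  # 0.1275

A model whose test accuracy exceeds one minus such an estimate is performing "too good to be true", which appears to be the sanity check the title alludes to.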

LocalDrop: A Hybrid Regularization for Deep Neural Networks
Mar 01, 2021
Ziqing Lu, Chang Xu, Bo Du, Takashi Ishida, Lefei Zhang, Masashi Sugiyama

Do We Need Zero Training Loss After Achieving Zero Training Error?
Feb 20, 2020
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama
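
The proposed answer is no: the paper introduces flooding, which replaces the training loss L with |L - b| + b for a small flood level b, so the update is ordinary gradient descent while L > b and becomes gradient ascent once L < b, keeping the training loss floating around b instead of reaching zero. A minimal PyTorch-style sketch; the value b = 0.05 and the loop structure are illustrative:

    import torch

    def flooded(loss: torch.Tensor, b: float = 0.05) -> torch.Tensor:
        # Flooding: |L - b| + b. Same gradient as L above the flood
        # level b; sign-flipped gradient below it.
        return (loss - b).abs() + b

    # Inside an ordinary training loop:
    #   loss = criterion(model(x), y)
    #   flooded(loss).backward()
    #   optimizer.step()

The flood level b is a hyperparameter tuned on validation data; 0.05 is only a placeholder.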

Complementary-Label Learning for Arbitrary Losses and Models
Oct 10, 2018
Takashi Ishida, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama
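
A complementary label specifies a class that the example does not belong to. Under uniformly drawn complementary labels, the ordinary classification risk can be rewritten as an expectation over complementarily labeled data, E[ -(K - 1) * l(f(x), ybar) + sum_j l(f(x), j) ], and the rewrite is unbiased for any base loss l and any model, hence the title. A minimal sketch with cross-entropy as the base loss (my choice for illustration):

    import torch.nn.functional as F

    def complementary_risk(logits, comp_labels, num_classes):
        # Unbiased rewrite for uniform complementary labels:
        #   R(f) = E[ -(K - 1) * l(f(x), ybar) + sum_j l(f(x), j) ]
        K = num_classes
        loss_all = -F.log_softmax(logits, dim=1)   # l(z, j) for every class j
        loss_bar = loss_all.gather(1, comp_labels.view(-1, 1)).squeeze(1)
        return (-(K - 1) * loss_bar + loss_all.sum(dim=1)).mean()

Unbiasedness follows because, under the uniform assumption, the probability of observing ybar as a complementary label is (1 - p(ybar | x)) / (K - 1); taking the expectation of the bracketed term recovers the ordinary risk E[ l(f(x), y) ].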

Binary Classification from Positive-Confidence Data
Feb 11, 2018
Takashi Ishida, Gang Niu, Masashi Sugiyama
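
Here the training data consist of positive examples only, each paired with a confidence r(x) = P(y = +1 | x). Up to the constant class prior, the classification risk can be rewritten as E_+[ l(g(x)) + (1 - r(x)) / r(x) * l(-g(x)) ], an expectation over positive data alone, so a binary classifier is trainable without any negative samples. A minimal sketch using the logistic loss as l (my choice for illustration):

    import torch.nn.functional as F

    def pconf_risk(scores, conf):
        # scores: g(x_i) on positive samples; conf: r_i = P(y = +1 | x_i), in (0, 1].
        # Risk rewrite (dropping the constant prior pi_+):
        #   E_+[ l(g(x)) + (1 - r(x)) / r(x) * l(-g(x)) ]
        # with the logistic loss l(m) = log(1 + exp(-m)) = softplus(-m).
        pos = F.softplus(-scores)   # loss as a positive example
        neg = F.softplus(scores)    # penalty weighted by the odds of being negative
        return (pos + (1.0 - conf) / conf * neg).mean()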

Learning from Complementary Labels
Nov 12, 2017
Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama