
Ikko Yamane

Scalable and hyper-parameter-free non-parametric covariate shift adaptation with conditional sampling

Dec 15, 2023
François Portier, Lionel Truquet, Ikko Yamane

Is the Performance of My Deep Network Too Good to Be True? A Direct Approach to Estimating the Bayes Error in Binary Classification

Feb 01, 2022
Takashi Ishida, Ikko Yamane, Nontawat Charoenphakdee, Gang Niu, Masashi Sugiyama

Mediated Uncoupled Learning: Learning Functions without Direct Input-output Correspondences

Jul 16, 2021
Ikko Yamane, Junya Honda, Florian Yger, Masashi Sugiyama

A One-step Approach to Covariate Shift Adaptation

Jul 08, 2020
Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama

Do We Need Zero Training Loss After Achieving Zero Training Error?

Feb 20, 2020
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama

Uplift Modeling from Separate Labels

Oct 01, 2018
Ikko Yamane, Florian Yger, Jamal Atif, Masashi Sugiyama

Regularized Multi-Task Learning for Multi-Dimensional Log-Density Gradient Estimation

Aug 01, 2015
Ikko Yamane, Hiroaki Sasaki, Masashi Sugiyama
