Masashi Sugiyama

Rethinking Importance Weighting for Deep Learning under Distribution Shift

Jun 08, 2020
Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama

Calibrated Surrogate Losses for Adversarially Robust Classification

May 28, 2020
Han Bao, Clayton Scott, Masashi Sugiyama

Learning from Aggregate Observations

Apr 14, 2020
Yivan Zhang, Nontawat Charoenphakdee, Zhenguo Wu, Masashi Sugiyama

Do Public Datasets Assure Unbiased Comparisons for Registration Evaluation?

Mar 20, 2020
Jie Luo, Guangshen Ma, Sarah Frisken, Parikshit Juvekar, Nazim Haouchine, Zhe Xu, Yiming Xiao, Alexandra Golby, Patrick Codd, Masashi Sugiyama, William Wells III

Time-varying Gaussian Process Bandit Optimization with Non-constant Evaluation Time

Mar 11, 2020
Hideaki Imamura, Nontawat Charoenphakdee, Futoshi Futami, Issei Sato, Junya Honda, Masashi Sugiyama

A Diffusion Theory for Deep Learning Dynamics: Stochastic Gradient Descent Escapes From Sharp Minima Exponentially Fast

Mar 05, 2020
Zeke Xie, Issei Sato, Masashi Sugiyama

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

Feb 26, 2020
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Do We Need Zero Training Loss After Achieving Zero Training Error?

Feb 20, 2020
Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama
