
Masashi Sugiyama


Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

Jan 29, 2019
Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama

Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks using PAC-Bayesian Analysis

Jan 28, 2019
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama

An analytic formulation for positive-unlabeled learning via weighted integral probability metric

Jan 28, 2019
Yongchan Kwon, Wonyoung Kim, Masashi Sugiyama, Myunghee Cho Paik

On Symmetric Losses for Learning from Corrupted Labels

Jan 27, 2019
Nontawat Charoenphakdee, Jongyeong Lee, Masashi Sugiyama

How does Disagreement Help Generalization against Label Corruption?

Jan 26, 2019
Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

How Does Disagreement Benefit Co-teaching?

Jan 14, 2019
Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

Hierarchical Reinforcement Learning via Advantage-Weighted Information Maximization

Jan 05, 2019
Takayuki Osa, Voot Tangkaratt, Masashi Sugiyama

Active Deep Q-learning with Demonstration

Dec 06, 2018
Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama

Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks

Oct 31, 2018
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama