
Masashi Sugiyama

Tokyo Institute of Technology

Polynomial-time Algorithms for Combinatorial Pure Exploration with Full-bandit Feedback

Feb 27, 2019

An analytic formulation for positive-unlabeled learning via weighted integral probability metric

Feb 08, 2019

Online Multiclass Classification Based on Prediction Margin for Partial Feedback

Feb 04, 2019

Semi-Supervised Ordinal Regression Based on Empirical Risk Minimization

Jan 31, 2019

New Tricks for Estimating Gradients of Expectations

Jan 31, 2019

On Possibility and Impossibility of Multiclass Classification with Rejection

Jan 30, 2019

Domain Discrepancy Measure Using Complex Models in Unsupervised Domain Adaptation

Jan 30, 2019

Imitation Learning from Imperfect Demonstration

Jan 30, 2019

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

Jan 29, 2019

Normalized Flat Minima: Exploring Scale Invariant Definition of Flat Minima for Neural Networks using PAC-Bayesian Analysis

Jan 28, 2019