Yusuke Narita
Off-Policy Evaluation of Ranking Policies under Diverse User Behavior

Jun 26, 2023
Haruka Kiyohara, Masatoshi Uehara, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto, Yuta Saito


Counterfactual Learning with General Data-generating Policies

Dec 04, 2022
Yusuke Narita, Kyohei Okumura, Akihiro Shimizu, Kohei Yata


Policy-Adaptive Estimator Selection for Off-Policy Evaluation

Nov 25, 2022
Takuma Udagawa, Haruka Kiyohara, Yusuke Narita, Yuta Saito, Kei Tateno


Doubly Robust Off-Policy Evaluation for Ranking Policies under the Cascade Behavior Model

Feb 03, 2022
Haruka Kiyohara, Yuta Saito, Tatsuya Matsuhiro, Yusuke Narita, Nobuyuki Shimizu, Yasuo Yamamoto


Evaluating the Robustness of Off-Policy Evaluation

Aug 31, 2021
Yuta Saito, Takuma Udagawa, Haruka Kiyohara, Kazuki Mogi, Yusuke Narita, Kei Tateno


Algorithm is Experiment: Machine Learning, Market Design, and Policy Eligibility Rules

Apr 26, 2021
Yusuke Narita, Kohei Yata


A Large-scale Open Dataset for Bandit Algorithms

Aug 17, 2020
Yuta Saito, Shunsuke Aihara, Megumi Matsutani, Yusuke Narita


Safe Counterfactual Reinforcement Learning

Feb 20, 2020
Yusuke Narita, Shota Yasui, Kohei Yata


Adaptive Experimental Design for Efficient Treatment Effect Estimation: Randomized Allocation via Contextual Bandit Algorithm

Feb 13, 2020
Masahiro Kato, Takuya Ishihara, Junya Honda, Yusuke Narita


Efficient Counterfactual Learning from Bandit Feedback

Sep 10, 2018
Yusuke Narita, Shota Yasui, Kohei Yata
