Nathan Kallus

Localized Debiased Machine Learning: Efficient Estimation of Quantile Treatment Effects, Conditional Value at Risk, and Beyond

Dec 30, 2019
Nathan Kallus, Xiaojie Mao, Masatoshi Uehara

Kernel Optimal Orthogonality Weighting: A Balancing Approach to Estimating Effects of Continuous Treatments

Oct 26, 2019
Nathan Kallus, Michele Santacatterina

Efficiently Breaking the Curse of Horizon: Double Reinforcement Learning in Infinite-Horizon Processes

Sep 12, 2019
Nathan Kallus, Masatoshi Uehara

Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes

Sep 05, 2019
Yichun Hu, Nathan Kallus, Xiaojie Mao

Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes

Aug 22, 2019
Nathan Kallus, Masatoshi Uehara

Optimal Estimation of Generalized Average Treatment Effects using Kernel Optimal Matching

Aug 13, 2019
Nathan Kallus, Michele Santacatterina

Policy Evaluation with Latent Confounders via Optimal Balance

Aug 06, 2019
Andrew Bennett, Nathan Kallus

More Efficient Policy Learning via Optimal Retargeting

Jun 20, 2019
Nathan Kallus

Intrinsically Efficient, Stable, and Bounded Off-Policy Evaluation for Reinforcement Learning

Jun 09, 2019
Nathan Kallus, Masatoshi Uehara

Assessing Disparate Impacts of Personalized Interventions: Identifiability and Bounds

Jun 04, 2019
Nathan Kallus, Angela Zhou
