Yoshinobu Kawahara

Adaptive action supervision in reinforcement learning from real-world multi-agent demonstrations

May 22, 2023

Modeling Nonlinear Dynamics in Continuous Time with Inductive Biases on Decay Rates and/or Frequencies

Dec 26, 2022

Data-driven End-to-end Learning of Pole Placement Control for Nonlinear Dynamics via Koopman Invariant Subspaces

Aug 16, 2022

Stable Invariant Models via Koopman Spectra

Jul 15, 2022

Estimating counterfactual treatment outcomes over time in complex multi-agent scenarios

Jun 04, 2022

Koopman Q-learning: Offline Reinforcement Learning via Symmetries of Dynamics

Nov 02, 2021

Learning interaction rules from multi-animal trajectories via augmented behavioral models

Jul 14, 2021

Koopman Spectrum Nonlinear Regulator and Provably Efficient Online Learning

Jun 30, 2021

A Quadratic Actor Network for Model-Free Reinforcement Learning

Mar 11, 2021

Discriminant Dynamic Mode Decomposition for Labeled Spatio-Temporal Data Collections

Feb 19, 2021
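Several of the papers listed above (Koopman Q-learning, Koopman Spectrum Nonlinear Regulator, Stable Invariant Models via Koopman Spectra, Discriminant Dynamic Mode Decomposition) build on Koopman operator approximation via dynamic mode decomposition. As general background only, here is a minimal sketch of standard exact DMD on a toy linear system; it illustrates the common building block, not the specific method of any paper above, and the function name and toy data are illustrative assumptions:

```python
import numpy as np

def dmd(X, Y, r):
    """Exact DMD: fit a rank-r linear operator A with Y ~= A X.

    X, Y are snapshot matrices of shape (n, m), where column k of Y
    is the state one time step after column k of X.
    Returns the DMD eigenvalues and modes.
    """
    # Truncated SVD of the input snapshots
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r]
    # Operator projected onto the leading left-singular subspace:
    # A_tilde = U* Y V S^{-1} (dividing columns by s applies S^{-1})
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    # Exact DMD modes
    modes = (Y @ Vh.conj().T / s) @ W
    return eigvals, modes

# Toy data: a planar rotation, whose true eigenvalues are exp(+-i*theta)
theta = 0.1
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.array([1.0, 0.0])
traj = [x]
for _ in range(50):
    x = A @ x
    traj.append(x)
traj = np.array(traj).T  # shape (2, 51)

eigvals, modes = dmd(traj[:, :-1], traj[:, 1:], r=2)
# The recovered eigenvalues lie on the unit circle, |lambda| = 1
```

On this toy system the projected operator is similar to the true rotation matrix, so the DMD spectrum recovers its eigenvalues exactly; on real data the same computation gives a least-squares linear surrogate of the dynamics.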