Takahisa Imagawa

Unsupervised Discovery of Continuous Skills on a Sphere

May 25, 2023
Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

Dropout Q-Functions for Doubly Efficient Reinforcement Learning

Oct 05, 2021
Takuya Hiraoka, Takahisa Imagawa, Taisei Hashimoto, Takashi Onishi, Yoshimasa Tsuruoka

Off-Policy Meta-Reinforcement Learning Based on Feature Embedding Spaces

Jan 06, 2021
Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

Meta-Model-Based Meta-Policy Optimization

Jun 05, 2020
Takuya Hiraoka, Takahisa Imagawa, Voot Tangkaratt, Takayuki Osa, Takashi Onishi, Yoshimasa Tsuruoka

Optimistic Proximal Policy Optimization

Jun 25, 2019
Takahisa Imagawa, Takuya Hiraoka, Yoshimasa Tsuruoka

Learning Robust Options by Conditional Value at Risk Optimization

Jun 11, 2019
Takuya Hiraoka, Takahisa Imagawa, Tatsuya Mori, Takashi Onishi, Yoshimasa Tsuruoka

Refining Manually-Designed Symbol Grounding and High-Level Planning by Policy Gradients

Sep 29, 2018
Takuya Hiraoka, Takashi Onishi, Takahisa Imagawa, Yoshimasa Tsuruoka
