Kristian Hartikainen
Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning

Apr 26, 2023
Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Markus Wulfmeier, Jan Humplik, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess

Priors, Hierarchy, and Information Asymmetry for Skill Transfer in Reinforcement Learning

Jan 20, 2022
Sasha Salter, Kristian Hartikainen, Walter Goodwin, Ingmar Posner

Bayesian Bellman Operators

Jun 15, 2021
Matthew Fellows, Kristian Hartikainen, Shimon Whiteson

Exploration in Approximate Hyper-State Space for Meta Reinforcement Learning

Oct 02, 2020
Luisa Zintgraf, Leo Feng, Maximilian Igl, Kristian Hartikainen, Katja Hofmann, Shimon Whiteson

The Ingredients of Real-World Robotic Reinforcement Learning

Apr 27, 2020
Henry Zhu, Justin Yu, Abhishek Gupta, Dhruv Shah, Kristian Hartikainen, Avi Singh, Vikash Kumar, Sergey Levine

ROBEL: Robotics Benchmarks for Learning with Low-Cost Robots

Sep 25, 2019
Michael Ahn, Henry Zhu, Kristian Hartikainen, Hugo Ponte, Abhishek Gupta, Sergey Levine, Vikash Kumar

Dynamical Distance Learning for Unsupervised and Semi-Supervised Skill Discovery

Jul 18, 2019
Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, Sergey Levine

End-to-End Robotic Reinforcement Learning without Reward Engineering

May 16, 2019
Avi Singh, Larry Yang, Kristian Hartikainen, Chelsea Finn, Sergey Levine
