Emanuel Todorov

Computing the Newton-step faster than Hessian accumulation

Aug 02, 2021
Akshay Srinivasan, Emanuel Todorov

Lyceum: An efficient and scalable ecosystem for robot learning

Jan 21, 2020
Colin Summers, Kendall Lowrey, Aravind Rajeswaran, Siddhartha Srinivasa, Emanuel Todorov

Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control

Jan 28, 2019
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, Igor Mordatch

Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations

Jun 26, 2018
Aravind Rajeswaran, Vikash Kumar, Abhishek Gupta, Giulia Vezzani, John Schulman, Emanuel Todorov, Sergey Levine

Reinforcement learning for non-prehensile manipulation: Transfer from simulation to physical system

Mar 28, 2018
Kendall Lowrey, Svetoslav Kolev, Jeremy Dao, Aravind Rajeswaran, Emanuel Todorov

Towards Generalization and Simplicity in Continuous Control

Mar 20, 2018
Aravind Rajeswaran, Kendall Lowrey, Emanuel Todorov, Sham Kakade

Graphical Newton

Oct 08, 2017
Akshay Srinivasan, Emanuel Todorov

Learning Dexterous Manipulation Policies from Experience and Imitation

Nov 15, 2016
Vikash Kumar, Abhishek Gupta, Emanuel Todorov, Sergey Levine

Universal Convexification via Risk-Aversion

Jun 03, 2014
Krishnamurthy Dvijotham, Maryam Fazel, Emanuel Todorov
