Seyed Kamyar Seyed Ghasemipour

Bi-Manual Block Assembly via Sim-to-Real Reinforcement Learning

Mar 27, 2023
Satoshi Kataoka, Youngseog Chung, Seyed Kamyar Seyed Ghasemipour, Pannag Sanketi, Shixiang Shane Gu, Igor Mordatch


Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters

May 27, 2022
Seyed Kamyar Seyed Ghasemipour, Shixiang Shane Gu, Ofir Nachum


Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding

May 23, 2022
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi


Blocks Assemble! Learning to Assemble with Large-Scale Structured Reinforcement Learning

Apr 12, 2022
Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Byron David, Shixiang Shane Gu, Satoshi Kataoka, Igor Mordatch


Bi-Manual Manipulation and Attachment via Sim-to-Real Reinforcement Learning

Mar 15, 2022
Satoshi Kataoka, Seyed Kamyar Seyed Ghasemipour, Daniel Freeman, Igor Mordatch


Braxlines: Fast and Interactive Toolkit for RL-driven Behavior Engineering beyond Reward Maximization

Oct 10, 2021
Shixiang Shane Gu, Manfred Diaz, Daniel C. Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, Olivier Bachem


EMaQ: Expected-Max Q-Learning Operator for Simple Yet Effective Offline and Online RL

Jul 21, 2020
Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, Shixiang Shane Gu


A Divergence Minimization Perspective on Imitation Learning Methods

Nov 06, 2019
Seyed Kamyar Seyed Ghasemipour, Richard Zemel, Shixiang Gu
