Hyeonwoo Noh

Asymmetric self-play for automatic goal discovery in robotic manipulation

Jan 13, 2021
OpenAI, Matthias Plappert, Raul Sampedro, Tao Xu, Ilge Akkaya, Vineet Kosaraju, Peter Welinder, Ruben D'Sa, Arthur Petron, Henrique Ponde de Oliveira Pinto, Alex Paino, Hyeonwoo Noh, Lilian Weng, Qiming Yuan, Casey Chu, Wojciech Zaremba

Behavior Priors for Efficient Reinforcement Learning

Oct 27, 2020
Dhruva Tirumala, Alexandre Galashov, Hyeonwoo Noh, Leonard Hasenclever, Razvan Pascanu, Jonathan Schwarz, Guillaume Desjardins, Wojciech Marian Czarnecki, Arun Ahuja, Yee Whye Teh, Nicolas Heess

Real-Time Object Tracking via Meta-Learning: Efficient Model Adaptation and One-Shot Channel Pruning

Dec 04, 2019
Ilchae Jung, Kihyun You, Hyeonwoo Noh, Minsu Cho, Bohyung Han

Exploiting Hierarchy for Learning and Transfer in KL-regularized RL

Mar 18, 2019
Dhruva Tirumala, Hyeonwoo Noh, Alexandre Galashov, Leonard Hasenclever, Arun Ahuja, Greg Wayne, Razvan Pascanu, Yee Whye Teh, Nicolas Heess

Transfer Learning via Unsupervised Task Discovery for Visual Question Answering

Oct 03, 2018
Hyeonwoo Noh, Taehoon Kim, Jonghwan Mun, Bohyung Han

Large-Scale Image Retrieval with Attentive Deep Local Features

Feb 03, 2018
Hyeonwoo Noh, Andre Araujo, Jack Sim, Tobias Weyand, Bohyung Han

Regularizing Deep Neural Networks by Noise: Its Interpretation and Optimization

Nov 09, 2017
Hyeonwoo Noh, Tackgeun You, Jonghwan Mun, Bohyung Han

Training Recurrent Answering Units with Joint Loss Minimization for VQA

Sep 30, 2016
Hyeonwoo Noh, Bohyung Han
