Byoung-Tak Zhang

Neural Collage Transfer: Artistic Reconstruction via Material Manipulation

Nov 03, 2023
Ganghun Lee, Minji Kim, Yunsu Lee, Minsu Lee, Byoung-Tak Zhang

Collage is a creative art form that composes a single image from diverse material scraps used as base units. Although pixel-wise generation techniques can reproduce a target image in a collage style, they are not well suited to the solid, stroke-by-stroke nature of the form. While some previous work on stroke-based rendering has produced decent sketches and paintings, collages have received far less research attention despite their popularity as a style. In this paper, we propose a method for learning to make collages via reinforcement learning, without demonstrations or collage artwork data. We design a collage Markov Decision Process (MDP) that allows the agent to handle various materials, and we propose a model-based soft actor-critic to mitigate the training burden caused by the sophisticated dynamics of collage. Moreover, we devise additional techniques, such as active material selection and complexity-based multi-scale collage, to handle target images of any size and to enhance the aesthetics of the results by placing relatively more scraps in areas of high complexity. Experimental results show that the trained agent appropriately selects and pastes materials to regenerate the target image as a collage and obtains higher evaluation scores on content and style than pixel-wise generation methods. Code is available at https://github.com/northadventure/CollageRL.
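
To make the collage MDP idea concrete, here is a minimal sketch of such an environment with a gym-like interface: the state pairs the canvas with the target, an action pastes a scrap cut from a material, and the reward is the resulting improvement in similarity to the target. The class name, action format, and L2-based reward are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of a collage-style MDP, assuming a gym-like interface.
import numpy as np

class CollageEnv:
    """Canvas starts blank; each step pastes a square scrap cut from a material."""

    def __init__(self, target, materials, scrap_size=16):
        self.target = target.astype(np.float32)            # H x W x 3 target image in [0, 1]
        self.materials = [m.astype(np.float32) for m in materials]
        self.scrap_size = scrap_size
        self.reset()

    def reset(self):
        self.canvas = np.ones_like(self.target)            # start from a white canvas
        return self._obs()

    def _obs(self):
        # Observation: current canvas stacked with the target, channel-wise.
        return np.concatenate([self.canvas, self.target], axis=-1)

    def step(self, action):
        # action = (material_id, src_y, src_x, dst_y, dst_x); assumed in-bounds.
        m_id, sy, sx, dy, dx = action
        s = self.scrap_size
        scrap = self.materials[m_id][sy:sy + s, sx:sx + s]
        before = np.mean((self.canvas - self.target) ** 2)
        self.canvas[dy:dy + s, dx:dx + s] = scrap          # paste the scrap onto the canvas
        after = np.mean((self.canvas - self.target) ** 2)
        reward = before - after                            # improvement in similarity
        return self._obs(), reward, False, {}
```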

* ICCV 2023 

PGA: Personalizing Grasping Agents with Single Human-Robot Interaction

Oct 19, 2023
Junghyun Kim, Gi-Cheon Kang, Jaein Kim, Seoyun Yang, Minjoon Jung, Byoung-Tak Zhang

Language-Conditioned Robotic Grasping (LCRG) aims to develop robots that ground and grasp objects based on natural language instructions. While robots that recognize personal objects like "my wallet" can interact more naturally with non-expert users, current LCRG systems mostly limit robots to understanding generic expressions. To this end, we introduce GraspMine, a task scenario with a novel dataset, whose goal is to locate and grasp personal objects given personal indicators, learned from a single human-robot interaction. To address GraspMine, we propose the Personalized Grasping Agent (PGA), which learns personal objects by propagating user-given information through a Reminiscence, a collection of raw images from the user's environment. Specifically, PGA acquires personal object information when the user presents a personal object with its associated indicator, after which PGA inspects the object by rotating it. Based on the acquired information, PGA pseudo-labels objects in the Reminiscence with our proposed label propagation algorithm. Harnessing the information acquired from the interaction and the pseudo-labeled objects in the Reminiscence, PGA adapts the object grounding model to grasp personal objects. Experiments on GraspMine show that PGA significantly outperforms baseline methods in both offline and online settings, demonstrating its effectiveness and the applicability of personalization to real-world scenarios. Finally, qualitative analysis illustrates the effectiveness of PGA through a detailed investigation of the results in each phase.
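
As a rough illustration of how pseudo-labels might be propagated through a Reminiscence, the sketch below compares features of the inspected personal-object views against features of object crops from the Reminiscence and assigns the personal indicator to sufficiently similar crops. The cosine-similarity rule, threshold, and feature shapes are assumptions for illustration, not PGA's actual algorithm.

```python
# A hedged sketch of similarity-based label propagation over a Reminiscence.
import numpy as np

def propagate_labels(personal_feats, reminiscence_feats, indicator, threshold=0.8):
    """personal_feats: (K, D) features from K inspection views of the personal object.
    reminiscence_feats: (N, D) features of object crops found in the Reminiscence.
    Returns (crop_index, indicator) pseudo-labels for sufficiently similar crops."""
    p = personal_feats / np.linalg.norm(personal_feats, axis=1, keepdims=True)
    r = reminiscence_feats / np.linalg.norm(reminiscence_feats, axis=1, keepdims=True)
    sims = r @ p.T                        # (N, K) cosine similarity to every inspection view
    best = sims.max(axis=1)               # best match over the K views
    return [(i, indicator) for i in range(len(r)) if best[i] >= threshold]
```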

* 7 pages, under review 

PROGrasp: Pragmatic Human-Robot Communication for Object Grasping

Sep 14, 2023
Gi-Cheon Kang, Junghyun Kim, Jaein Kim, Byoung-Tak Zhang

Interactive Object Grasping (IOG) is the task of identifying and grasping the desired object via human-robot natural language interaction. Current IOG systems assume that a human user initially specifies the target object's category (e.g., bottle). Inspired by pragmatics, whereby humans often rely on context to convey their intentions and achieve their goals, we introduce a new IOG task, Pragmatic-IOG, and the corresponding dataset, Intention-oriented Multi-modal Dialogue (IM-Dial). In our proposed task scenario, an intention-oriented utterance (e.g., "I am thirsty") is initially given to the robot. The robot should then identify the target object by interacting with a human user. Based on this task setup, we propose Pragmatic Object Grasping (PROGrasp), a new robotic system that can interpret the user's intention and pick up the target object. PROGrasp performs Pragmatic-IOG by incorporating modules for visual grounding, question asking, object grasping, and, most importantly, answer interpretation for pragmatic inference. Experimental results show that PROGrasp is effective in offline (i.e., target object discovery) and online (i.e., IOG with a physical robot arm) settings.
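
A minimal skeleton of such a pragmatic interactive-grasping loop might look as follows; the callables (grounder, questioner, interpreter, grasper, ask_user) are hypothetical placeholders standing in for the modules named above, not PROGrasp's real APIs.

```python
# A minimal skeleton of a pragmatic interactive-grasping loop (assumed interfaces).
def pragmatic_iog(image, utterance, grounder, questioner, interpreter,
                  grasper, ask_user, max_turns=3):
    context = [utterance]                               # starts from e.g. "I am thirsty"
    candidates = grounder(image, context)               # visual grounding over the scene
    for _ in range(max_turns):
        if len(candidates) <= 1:
            break                                       # target is unambiguous
        question = questioner(image, candidates, context)
        answer = ask_user(question)                     # e.g., "yes" / "the left one"
        context.append(interpreter(answer, candidates, context))  # pragmatic inference
        candidates = grounder(image, context)           # re-ground with enriched context
    return grasper(image, candidates[0])                # pick up the selected object
```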

* 7 pages, 6 figures 

GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation

Jul 12, 2023
Junghyun Kim, Gi-Cheon Kang, Jaein Kim, Suyeon Shin, Byoung-Tak Zhang

Language-Guided Robotic Manipulation (LGRM) is a challenging task, as it requires a robot to understand human instructions in order to manipulate everyday objects. Recent approaches to LGRM rely on pre-trained Visual Grounding (VG) models to detect objects without adapting to manipulation environments, which results in a performance drop due to the substantial domain gap between the pre-training and real-world data. A straightforward solution is to collect additional training data, but the cost of human annotation is prohibitive. In this paper, we propose Grounding Vision to Ceaselessly Created Instructions (GVCCI), a lifelong learning framework for LGRM that continuously learns VG without human supervision. GVCCI iteratively generates synthetic instructions via object detection and trains the VG model with the generated data. We validate our framework in offline and online settings across diverse environments and VG models. Experimental results show that accumulating synthetic data from GVCCI steadily improves VG by up to 56.7% and improves the resulting LGRM by up to 29.4%. Furthermore, qualitative analysis shows that the unadapted VG model often fails to find the correct objects because of a strong bias learned from the pre-training data. Finally, we introduce a novel VG dataset for LGRM, consisting of nearly 252k image-object-instruction triplets from diverse manipulation environments.
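
The lifelong loop can be pictured roughly as below: detect objects in unlabeled images from the manipulation environment, turn the detections into synthetic instructions, and keep fine-tuning the VG model on the accumulated triplets. All interfaces here are illustrative assumptions rather than GVCCI's actual code.

```python
# A rough sketch of the self-training loop, under assumed interfaces.
def gvcci_loop(unlabeled_images, detector, make_instruction, vg_model, rounds=5):
    dataset = []                                   # accumulated (image, box, instruction)
    for _ in range(rounds):
        for image in unlabeled_images:
            for box, category, attributes in detector(image):
                # e.g., "pick up the red mug on the left"
                instruction = make_instruction(category, attributes, box)
                dataset.append((image, box, instruction))
        vg_model.finetune(dataset)                 # data keeps accumulating across rounds
    return vg_model
```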

* Accepted at IROS 2023 

EXOT: Exit-aware Object Tracker for Safe Robotic Manipulation of Moving Object

Jun 08, 2023
Hyunseo Kim, Hye Jung Yoon, Minji Kim, Dong-Sig Han, Byoung-Tak Zhang

Current robotic hand manipulation operates narrowly, with objects in predictable positions and in limited environments. Thus, when the location of the target object deviates severely from the expected location, a robot sometimes responds in an unexpected way, especially when it operates alongside a human. For safe robot operation, we propose the EXit-aware Object Tracker (EXOT), running on a robot hand camera, which recognizes an object's absence during manipulation. The robot decides whether to proceed by examining the tracker's bounding-box output containing the target object. We adopt an out-of-distribution classifier for more accurate object recognition, since trackers can mistake background for the target object. To the best of our knowledge, ours is the first approach to apply an out-of-distribution classification technique to a tracker's output. We evaluate our method on the first-person video benchmark TREK-150 and on RMOT-223, a custom dataset we collected with a UR5e robot. We then test our tracker on the UR5e robot in real time on a conveyor-belt sushi task, examining its ability to track target dishes and determine their exit status. Our tracker shows 38% higher exit-aware performance than a baseline method. The dataset and code will be released at https://github.com/hskAlena/EXOT.
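
The exit-aware decision can be sketched as a two-stage check per frame: track the target, then run an out-of-distribution classifier on the predicted crop to decide whether the target is still in view. The interfaces and threshold below are illustrative assumptions, not EXOT's actual implementation.

```python
# A sketch of an exit-aware tracking step, with assumed interfaces.
def exit_aware_step(frame, tracker, ood_classifier, ood_threshold=0.5):
    x, y, w, h = tracker.track(frame)               # bounding-box prediction (integers)
    crop = frame[y:y + h, x:x + w]                  # frame is an H x W x 3 array
    ood_score = ood_classifier(crop)                # high score = background / not the target
    target_present = ood_score < ood_threshold
    return (x, y, w, h), target_present             # the robot pauses when the target is absent
```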

* 2023 IEEE International Conference on Robotics and Automation (ICRA) 

Overcoming Weak Visual-Textual Alignment for Video Moment Retrieval

Jun 05, 2023
Minjoon Jung, Youwon Jang, Seongho Choi, Joochan Kim, Jin-Hwa Kim, Byoung-Tak Zhang

Video moment retrieval (VMR) aims to identify the specific moment in an untrimmed video that corresponds to a given natural language query. However, this task is prone to the weak visual-textual alignment problem arising from query ambiguity, potentially limiting further performance gains and generalization capability. Because of the complex multimodal interactions in videos, a query may not fully cover the relevant details of the corresponding moment, and the moment may contain misaligned and irrelevant frames. To tackle this problem, we propose a straightforward yet effective model, the Background-aware Moment DEtection TRansformer (BM-DETR). Given a target query and its moment, BM-DETR also takes negative queries corresponding to different moments. Specifically, our model learns to predict the target moment from the joint probability of the given query and the complement of the negative queries for each candidate frame. In this way, it leverages the surrounding background to weigh relative importance, improving moment sensitivity. Extensive experiments on Charades-STA and QVHighlights demonstrate the effectiveness of our model. Moreover, we show that BM-DETR performs robustly in three challenging VMR scenarios, including several out-of-distribution test cases, demonstrating superior generalization ability.
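
The per-frame joint probability can be illustrated numerically: combine the target query's frame probability with the probability that the frame matches none of the negative queries. The shapes and the exact combination rule below are simplifying assumptions, not BM-DETR's precise formulation.

```python
# A numerical sketch of the per-frame joint probability described above.
import numpy as np

def joint_frame_scores(pos_probs, neg_probs):
    """pos_probs: (T,) P(frame matches the target query).
    neg_probs: (M, T) P(frame matches each of the M negative queries).
    Returns (T,) scores used to localize the target moment."""
    complement = np.prod(1.0 - neg_probs, axis=0)   # frame matches none of the negatives
    return pos_probs * complement

scores = joint_frame_scores(np.array([0.2, 0.8, 0.9, 0.3]),
                            np.array([[0.7, 0.1, 0.2, 0.6]]))
# Frames that fit the target query but not the negatives receive the highest scores.
```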

* Under Review; Our code is available at https://github.com/minjoong507/BM-DETR 

L-SA: Learning Under-Explored Targets in Multi-Target Reinforcement Learning

May 23, 2023
Kibeom Kim, Hyundo Lee, Min Whoo Lee, Moonheon Lee, Minsu Lee, Byoung-Tak Zhang

Tasks that involve interaction with various targets are called multi-target tasks. When general reinforcement learning approaches are applied to such tasks, targets that are difficult to access or interact with may be neglected throughout training, a predicament we call the Under-explored Target Problem (UTP). To address this problem, we propose the L-SA (Learning by adaptive Sampling and Active querying) framework, which combines adaptive sampling and active querying. In the L-SA framework, adaptive sampling dynamically samples, at a high proportion, the targets whose success rates are increasing the fastest, resulting in curricular learning from easy to hard targets. Active querying prompts the agent to interact more frequently with under-explored targets that need more experience or exploration. Our experimental results on visual navigation tasks show that the L-SA framework improves both sample efficiency and success rates on various multi-target tasks with UTP. We also demonstrate experimentally that the cyclic relationship between adaptive sampling and active querying effectively improves the sample richness of under-explored targets and alleviates UTP.
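
Adaptive sampling can be sketched as drawing targets in proportion to how fast their success rates have recently improved; the epsilon floor and temperature below are illustrative assumptions rather than the paper's exact formulation.

```python
# A sketch of adaptive target sampling driven by recent success-rate gains.
import numpy as np

def adaptive_sampling_probs(prev_success, curr_success, temperature=1.0, eps=1e-3):
    """prev_success, curr_success: (N,) per-target success rates from the last
    two evaluation windows. Returns a sampling distribution over the N targets."""
    gain = np.maximum(curr_success - prev_success, 0.0) + eps   # recent improvement
    weights = gain ** (1.0 / temperature)
    return weights / weights.sum()

probs = adaptive_sampling_probs(np.array([0.1, 0.5, 0.0]), np.array([0.4, 0.55, 0.0]))
target = np.random.choice(len(probs), p=probs)   # fast-improving targets are drawn more often
```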

* 17 pages including appendices; under review 

Learning Geometry-aware Representations by Sketching

Apr 17, 2023
Hyundo Lee, Inwoo Hwang, Hyunsung Go, Won-Seok Choi, Kibeom Kim, Byoung-Tak Zhang

Understanding geometric concepts, such as distance and shape, is essential for understanding the real world and also for many vision tasks. To incorporate such information into a visual representation of a scene, we propose learning to represent the scene by sketching, inspired by human behavior. Our method, coined Learning by Sketching (LBS), learns to convert an image, in a single inference step and without requiring a sketch dataset, into a set of colored strokes that explicitly incorporate the geometric information of the scene. A sketch is then generated from the strokes, with a CLIP-based perceptual loss maintaining semantic similarity between the sketch and the image. We show theoretically that sketching is equivariant with respect to arbitrary affine transformations and thus provably preserves geometric information. Experimental results show that LBS substantially improves performance on object attribute classification on the unlabeled CLEVR dataset, on domain transfer between the CLEVR and STL-10 datasets, and on diverse downstream tasks, confirming that LBS provides rich geometric information.
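
The equivariance claim is easy to see in stroke space: applying an affine map to the stroke control points is equivalent to applying it to the rendered sketch. The stroke format below (a fixed number of control points per stroke) is an assumption for illustration.

```python
# A small illustration of affine equivariance in stroke space.
import numpy as np

def affine_transform_strokes(strokes, A, t):
    """strokes: (S, P, 2) control points of S strokes with P points each.
    A: (2, 2) linear part of the affine map, t: (2,) translation."""
    return strokes @ A.T + t                 # transform every control point

strokes = np.random.rand(8, 4, 2)            # 8 strokes, 4 control points each
A = np.array([[0.0, -1.0], [1.0, 0.0]])      # 90-degree rotation
rotated = affine_transform_strokes(strokes, A, np.zeros(2))
```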

* CVPR 2023 

SelecMix: Debiased Learning by Contradicting-pair Sampling

Nov 04, 2022
Inwoo Hwang, Sangjun Lee, Yunhyeok Kwak, Seong Joon Oh, Damien Teney, Jin-Hwa Kim, Byoung-Tak Zhang

Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.
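
A rough sketch of contradicting-pair selection plus mixup is given below: for an anchor example, pick the partner that most contradicts the bias (same label with dissimilar bias features, or a different label with similar bias features), then mix the pair. How the bias features are obtained (here, any auxiliary model's embeddings) and the exact selection rule are simplified assumptions, not SelecMix's full method.

```python
# A hedged sketch of contradicting-pair selection and mixup.
import numpy as np

def select_contradicting_pair(i, labels, bias_feats):
    """Pick a partner j for example i that contradicts the bias: same label but
    dissimilar bias features, or a different label but similar bias features."""
    f = bias_feats / np.linalg.norm(bias_feats, axis=1, keepdims=True)
    sims = f @ f[i]                                   # cosine similarity to example i
    same_label = labels == labels[i]
    score = np.where(same_label, -sims, sims)         # reward the contradicting direction
    score[i] = -np.inf                                # never pair an example with itself
    return int(np.argmax(score))

def mixup(x_i, x_j, alpha=1.0):
    lam = np.random.beta(alpha, alpha)                # convex combination coefficient
    return lam * x_i + (1.0 - lam) * x_j, lam
```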

* NeurIPS 2022 

DUEL: Adaptive Duplicate Elimination on Working Memory for Self-Supervised Learning

Oct 31, 2022
Won-Seok Choi, Dong-Sig Han, Hyundo Lee, Junseok Park, Byoung-Tak Zhang

In Self-Supervised Learning (SSL), frequent collisions, in which the target datum and its negative samples share the same class, are known to degrade performance. Especially in real-world data such as crawled data or robot-gathered observations, collisions may occur more often because of duplicates in the data. To deal with this problem, we argue that sampling negatives from an adaptively debiased distribution held in memory makes the model more stable than sampling directly from a biased dataset. In this paper, we introduce a novel SSL framework with adaptive Duplicate Elimination (DUEL), inspired by human working memory. The proposed framework successfully prevents downstream task performance from degrading under dramatic inter-class imbalance.
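
The working-memory idea can be sketched as a negative-sample queue that replaces near-duplicate entries instead of accumulating them, so that sampled negatives come from a debiased distribution. The capacity, cosine-similarity test, and FIFO eviction below are illustrative assumptions, not DUEL's actual mechanism.

```python
# A sketch of a working-memory negative queue with duplicate elimination.
import numpy as np

class DedupMemory:
    def __init__(self, capacity=512, dup_threshold=0.95):
        self.capacity = capacity
        self.dup_threshold = dup_threshold
        self.feats = []                                  # stored negative features

    def push(self, feat):
        f = feat / np.linalg.norm(feat)
        if self.feats:
            sims = np.stack(self.feats) @ f
            if sims.max() >= self.dup_threshold:         # near-duplicate found:
                self.feats[int(np.argmax(sims))] = f     # overwrite it rather than grow
                return
        self.feats.append(f)
        if len(self.feats) > self.capacity:
            self.feats.pop(0)                            # FIFO eviction

    def sample_negatives(self, k):
        # Assumes the memory is non-empty.
        idx = np.random.choice(len(self.feats), size=min(k, len(self.feats)), replace=False)
        return np.stack([self.feats[i] for i in idx])
```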

* 9 pages, 5 figures; accepted at the NeurIPS 2022 Workshop on Self-supervised Learning: Theory and Practice (https://sslneurips22.github.io/) 