Martin Jagersand

Bridging Low-level Geometry to High-level Concepts in Visual Servoing of Robot Manipulation Task Using Event Knowledge Graphs and Vision-Language Models

Oct 05, 2023
Chen Jiang, Martin Jagersand

In this paper, we propose a framework for building knowledgeable robot control in the scope of smart human-robot interaction, by empowering a basic uncalibrated visual servoing controller with contextual knowledge through the joint use of event knowledge graphs (EKGs) and large-scale pretrained vision-language models (VLMs). The framework is twofold: first, we interpret low-level image geometry as high-level concepts, allowing us to prompt VLMs and to select geometric features of points and lines for motor control skills; second, we create an event knowledge graph (EKG) to conceptualize a robot manipulation task of interest, where the main body of the EKG is characterized by an executable behavior tree, and the leaves by semantic concepts relevant to the manipulation context. We demonstrate, in an uncalibrated environment with real robot trials, that our method lowers the reliance on human annotation during task interfacing, allows the robot to perform activities of daily living more easily by treating low-level geometry-based motor control skills as high-level concepts, and is beneficial for building cognitive reasoning into smart robot applications.
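
As a rough illustration of the architecture described above, the sketch below (Python, with entirely hypothetical node and skill names; it is not code from the paper) shows how an executable behavior tree could bind semantic-concept leaves, resolved for example by VLM prompting, to low-level geometric servoing skills.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ConceptLeaf:
    concept: str                       # semantic concept, e.g. resolved via a VLM prompt
    skill: Callable[[str], bool]       # low-level geometric motor skill

    def tick(self) -> bool:
        return self.skill(self.concept)

@dataclass
class Sequence:
    children: List                     # behavior-tree sequence node

    def tick(self) -> bool:
        return all(child.tick() for child in self.children)

def point_to_point_skill(concept: str) -> bool:
    # Placeholder: a real skill would track the image feature named by `concept`
    # and run the visual servoing loop until the geometric error is small.
    print(f"servoing toward '{concept}'")
    return True

task = Sequence([ConceptLeaf("cup handle", point_to_point_skill),
                 ConceptLeaf("faucet lever", point_to_point_skill)])
task.tick()
```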

CLIPUNetr: Assisting Human-robot Interface for Uncalibrated Visual Servoing Control with CLIP-driven Referring Expression Segmentation

Sep 17, 2023
Chen Jiang, Yuchen Yang, Martin Jagersand

The classical human-robot interface in uncalibrated image-based visual servoing (UIBVS) relies on either human annotations or semantic segmentation with categorical labels. Both methods fail to match natural human communication and convey rich semantics in manipulation tasks as effectively as natural language expressions do. In this paper, we tackle this problem by using referring expression segmentation, a prompt-based approach, to provide richer information for robot perception. To generate high-quality segmentation predictions from referring expressions, we propose CLIPUNetr, a new CLIP-driven referring expression segmentation network. CLIPUNetr leverages CLIP's strong vision-language representations to segment regions from referring expressions, while utilizing its "U-shaped" encoder-decoder architecture to generate predictions with sharper boundaries and finer structures. Furthermore, we propose a new pipeline to integrate CLIPUNetr into UIBVS and apply it to control robots in real-world environments. In experiments, our method improves boundary and structure measurements by an average of 120% and can successfully assist real-world UIBVS control in an unstructured manipulation environment.
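
A minimal sketch of how such a referring-expression mask could replace manual annotation in a UIBVS pipeline, assuming the segmentation network is stubbed out and the mask centroid serves as the target image feature; the function names and error formulation are illustrative, not the paper's implementation.

```python
import numpy as np

def segment_referring_expression(image: np.ndarray, expression: str) -> np.ndarray:
    """Stand-in for CLIPUNetr: return a binary mask for the object the expression refers to."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    mask[100:140, 200:260] = True          # dummy region, for illustration only
    return mask

def mask_centroid(mask: np.ndarray) -> np.ndarray:
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

image = np.zeros((480, 640, 3), dtype=np.uint8)
target = mask_centroid(segment_referring_expression(image, "the red mug handle"))
gripper = np.array([320.0, 240.0])          # tracked end-effector point in the image
error = target - gripper                    # image-space error driving the UIBVS update
print(error)
```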

Deep Probabilistic Movement Primitives with a Bayesian Aggregator

Jul 11, 2023
Michael Przystupa, Faezeh Haghverd, Martin Jagersand, Samuele Tosatto

Movement primitives are trainable parametric models that reproduce robotic movements starting from a limited set of demonstrations. Previous works proposed simple linear models that exhibited high sample efficiency and generalization power by allowing temporal modulation of movements (reproducing movements faster or slower), blending (merging two movements into one), via-point conditioning (constraining a movement to meet particular via-points), and context conditioning (generating movements based on an observed variable, e.g., the position of an object). Other works have proposed neural-network-based motor primitive models and demonstrated their capacity to perform tasks with some forms of input conditioning or time-modulation representations. However, no single unified deep motor primitive model has been proposed that is capable of all the previous operations, limiting the potential applications of neural motor primitives. This paper proposes a deep movement primitive architecture that encodes all of the operations above and uses a Bayesian context aggregator that allows sounder context conditioning and blending. Our results demonstrate that our approach can scale to reproduce complex motions over a larger variety of input choices than the baselines while retaining the operations that linear movement primitives provide.
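
The Bayesian aggregation idea can be illustrated with precision-weighted fusion of per-observation Gaussian estimates of a latent context variable; the NumPy sketch below follows this generic formulation, and the paper's exact parameterization may differ.

```python
import numpy as np

def bayesian_aggregate(means, variances, prior_mean=0.0, prior_var=1.0):
    """Fuse N Gaussian estimates of a latent context variable into one posterior."""
    precision = 1.0 / prior_var + np.sum(1.0 / variances, axis=0)
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + np.sum(means / variances, axis=0))
    return post_mean, post_var

# Three noisy context encodings of a 2-D latent variable, each with its own uncertainty
means = np.array([[0.9, 1.1], [1.2, 0.8], [1.0, 1.0]])
variances = np.array([[0.5, 0.5], [0.2, 0.3], [1.0, 1.0]])
print(bayesian_aggregate(means, variances))
```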

A Simple Decentralized Cross-Entropy Method

Dec 16, 2022
Zichen Zhang, Jun Jin, Martin Jagersand, Jun Luo, Dale Schuurmans

The Cross-Entropy Method (CEM) is commonly used for planning in model-based reinforcement learning (MBRL), where a centralized approach is typically used to update the sampling distribution based only on the results of a top-$k$ operation over the samples. In this paper, we show that such a centralized approach makes CEM vulnerable to local optima, impairing its sample efficiency. To tackle this issue, we propose Decentralized CEM (DecentCEM), a simple but effective improvement over classical CEM that uses an ensemble of CEM instances running independently from one another, each performing a local improvement of its own sampling distribution. We provide both theoretical and empirical analysis to demonstrate the effectiveness of this simple decentralized approach. We show empirically that, compared to the classical centralized approach using either a single Gaussian distribution or a mixture of Gaussians, DecentCEM finds the global optimum much more consistently and thus improves sample efficiency. Furthermore, we plug DecentCEM into the planning component of MBRL and evaluate our approach in several continuous control environments, comparing against state-of-the-art CEM-based MBRL approaches (PETS and POPLIN). The results show a sample efficiency improvement from simply replacing the classical CEM module with our DecentCEM module, at the cost of only a reasonable amount of additional computation. Lastly, we conduct ablation studies for more in-depth analysis. Code is available at https://github.com/vincentzhang/decentCEM
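
The decentralized idea can be sketched on a one-dimensional toy objective: several CEM instances run independently, each refitting its own Gaussian to its own top-$k$ elites, and the best mean across instances is returned. This is a hedged illustration, not the released code.

```python
import numpy as np

def objective(x):
    # Multimodal toy reward with several local optima
    return -(x - 2.0) ** 2 + 1.5 * np.cos(5.0 * x)

def cem_instance(rng, iters=30, pop=64, k=8):
    mu, sigma = rng.uniform(-5.0, 5.0), 2.0
    for _ in range(iters):
        samples = rng.normal(mu, sigma, pop)
        elites = samples[np.argsort(objective(samples))[-k:]]   # top-k by reward
        mu, sigma = elites.mean(), elites.std() + 1e-3           # refit the local Gaussian
    return mu

rng = np.random.default_rng(0)
candidates = [cem_instance(rng) for _ in range(5)]               # independent CEM instances
best = max(candidates, key=objective)
print(best, objective(best))
```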

* NeurIPS 2022. The last two authors advised equally 
Variable-Decision Frequency Option Critic

Dec 11, 2022
Amirmohammad Karimi, Jun Jin, Jun Luo, A. Rupam Mahmood, Martin Jagersand, Samuele Tosatto

In classic reinforcement learning algorithms, agents make decisions at discrete, fixed time intervals. The physical duration between one decision and the next becomes a critical hyperparameter: when this duration is too short, the agent must make many decisions to achieve its goal, aggravating the problem's difficulty; when it is too long, the agent becomes incapable of controlling the system. Physical systems, however, do not need a constant control frequency: for learning agents, it is desirable to operate at low frequency when possible and at high frequency when necessary. We propose a framework called Continuous-Time Continuous-Options (CTCO), in which the agent chooses options as sub-policies of variable duration. Such options are time-continuous and can interact with the system at any desired frequency, providing smooth changes of action. Our empirical analysis shows that the algorithm is competitive with other time-abstraction techniques, such as classic option learning and action repetition, and practically overcomes the difficult choice of decision frequency.
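
A toy sketch of the decision structure (all names and the interpolation scheme are assumed for illustration, not taken from CTCO): the agent picks an option bundling a target action with its own duration, and a high-frequency low-level loop changes the action smoothly until the duration elapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_option(state):
    target = rng.normal(size=2)            # sub-policy parameters (here just a target action)
    duration = rng.uniform(0.1, 1.0)       # the option's own decision interval, in seconds
    return target, duration

dt, t = 0.01, 0.0
state, action = np.zeros(2), np.zeros(2)
while t < 3.0:
    target, duration = choose_option(state)          # high-level, low-frequency decision
    start_action, t_start = action.copy(), t
    while t < t_start + duration:                    # low-level, high-frequency control
        alpha = min((t - t_start) / duration, 1.0)
        action = (1.0 - alpha) * start_action + alpha * target   # smooth change of action
        state = state + action * dt
        t += dt
print(state)
```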

* Submitted to the 2023 International Conference on Robotics and Automation (ICRA). Source code at https://github.com/amir-karimi96/continuous-time-continuous-option-policy-gradient.git 
Generalizable task representation learning from human demonstration videos: a geometric approach

Feb 28, 2022
Jun Jin, Martin Jagersand

We study the problem of generalizable task learning from human demonstration videos without extra training on the robot or pre-recorded robot motions. Given a set of human demonstration videos showing a task performed with different objects/tools (categorical objects), we aim to learn a representation of visual observations that generalizes to categorical objects and enables efficient controller design. We propose introducing a geometric task structure to the representation learning problem that geometrically encodes the task specification from human demonstration videos and enables generalization by building task specification correspondences between categorical objects. Specifically, we propose CoVGS-IL, which uses a graph-structured task function to learn task representations under structural constraints. Our method enables task generalization by selecting geometric features from different objects whose interconnections define the same task through geometric constraints. The learned task representation is then transferred to a robot controller using uncalibrated visual servoing (UVS), removing the need for extra robot training or pre-recorded robot motions.
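
The kind of geometric task function involved can be illustrated with standard image-space errors such as point-to-point and point-to-line constraints; the snippet below shows generic geometry only and is not the CoVGS-IL implementation.

```python
import numpy as np

def point_to_point_error(p, q):
    """Image-space error pulling tracked point p onto target point q."""
    return q - p

def point_to_line_error(p, a, b):
    """Signed distance of point p from the image line through a and b (homogeneous form)."""
    line = np.cross(np.append(a, 1.0), np.append(b, 1.0))
    return np.array([line @ np.append(p, 1.0) / np.linalg.norm(line[:2])])

p = np.array([120.0, 200.0])                               # tracked gripper point
a, b = np.array([100.0, 50.0]), np.array([300.0, 60.0])    # object edge selected from the video
print(point_to_point_error(p, a), point_to_line_error(p, a, b))
```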

* Accepted in ICRA 2022 
Analyzing Neural Jacobian Methods in Applications of Visual Servoing and Kinematic Control

Jun 10, 2021
Michael Przystupa, Masood Dehghan, Martin Jagersand, A. Rupam Mahmood

Designing adaptable control laws that can transfer between different robots is challenging because of kinematic and dynamic differences, as well as in scenarios where external sensors are used. In this work, we empirically investigate a neural network's ability to approximate the Jacobian matrix for use in Cartesian control schemes. Specifically, we are interested in approximating the kinematic Jacobian, which arises from the kinematic equations mapping a manipulator's joint angles to the end-effector's location. We propose two different approaches to learning the kinematic Jacobian. The first, drawn from visual servoing, learns the kinematic Jacobian as an approximate linear system of equations from the k-nearest neighbors of a desired joint configuration. The second, motivated by forward models in machine learning, learns the kinematic behavior directly and computes the Jacobian by differentiating the learned neural kinematics model. Simulation results show that both methods achieve better performance than alternative data-driven methods for control, provide closer approximations to the true kinematic Jacobian matrix, and on average produce better-conditioned Jacobian matrices. Real-world experiments were conducted on a Kinova Gen-3 lightweight robotic manipulator, including an uncalibrated visual servoing experiment as a practical application of our methods, as well as a 7-DOF point-to-point task showing that our methods are applicable to real robotic manipulators.
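
The second approach can be sketched with automatic differentiation: learn a forward model from joint angles to end-effector position and differentiate it to obtain the kinematic Jacobian. Below is a minimal PyTorch illustration with an untrained stand-in network and a toy 2-DOF arm; it mirrors the idea, not the paper's code.

```python
import torch

# Untrained stand-in for a learned forward kinematics model: joint angles -> end-effector (x, y)
kinematics_net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))

def learned_fk(q: torch.Tensor) -> torch.Tensor:
    return kinematics_net(q)

q = torch.tensor([0.3, -0.7])
J = torch.autograd.functional.jacobian(learned_fk, q)      # 2x2 kinematic Jacobian dx/dq
dq = torch.linalg.pinv(J) @ torch.tensor([0.01, 0.0])      # resolved-rate step toward +x
print(J)
print(dq)
```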

* 8 pages, 6 Figures, https://www.youtube.com/watch?v=mOMIIBLCL20 
Video Class Agnostic Segmentation with Contrastive Learning for Autonomous Driving

May 11, 2021
Mennatullah Siam, Alex Kendall, Martin Jagersand

Semantic segmentation in autonomous driving predominantly focuses on learning from large-scale data with a closed set of known classes, without considering unknown objects. Motivated by safety, we address the video class agnostic segmentation task, which considers unknown objects outside the closed set of known classes in the training data. We propose a novel auxiliary contrastive loss to learn the segmentation of known classes and unknown objects. Unlike previous work in contrastive learning that samples the anchor, positive, and negative examples at the image level, our contrastive learning method leverages pixel-wise semantic and temporal guidance. We conduct experiments on Cityscapes-VPS by withholding four classes from training and show an improvement for the segmentation of both known and unknown objects with the auxiliary contrastive loss. We further release a large-scale synthetic dataset for different autonomous driving scenarios that includes distinct and rare unknown objects. We conduct experiments on the full synthetic dataset and a reduced small-scale version, and show that contrastive learning is more effective on small-scale datasets. Our models, dataset, and code will be released at https://github.com/MSiam/video_class_agnostic_segmentation.
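
The auxiliary loss can be illustrated with a generic pixel-wise InfoNCE-style contrastive objective, where an anchor pixel embedding is pulled toward sampled positives and pushed from negatives; the semantic and temporal sampling guidance is stubbed out here and the formulation is assumed for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """anchor: (D,), positives: (P, D), negatives: (N, D) pixel embeddings."""
    anchor = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1) @ anchor / tau       # (P,) similarities
    neg = F.normalize(negatives, dim=1) @ anchor / tau       # (N,) similarities
    logits = torch.cat([pos, neg])
    # each positive competes against every negative (and every other sample) in the denominator
    return -(pos - torch.logsumexp(logits, dim=0)).mean()

D = 16
loss = pixel_contrastive_loss(torch.randn(D), torch.randn(5, D), torch.randn(50, D))
print(loss)
```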

A Quantitative Analysis of Activities of Daily Living: Insights into Improving Functional Independence with Assistive Robotics

Apr 08, 2021
Laura Petrich, Jun Jin, Masood Dehghan, Martin Jagersand

Human assistive robotics has the potential to help the elderly and individuals living with disabilities with their Activities of Daily Living (ADL). Robotics researchers focus on assistive tasks from the perspective of various control schemes and motion types, while health research focuses on clinical assessment and rehabilitation, arguably leaving important differences between the two domains. In particular, little is known quantitatively about which ADLs are typically carried out in a person's everyday environment - at home, at work, etc. Understanding what activities are frequently carried out during the day can help guide the development and prioritization of robotic technology for in-home assistive deployment. This study targets several lifelogging databases, from which we compute (i) ADL task frequency from long-term, low-sampling-frequency video and Internet of Things (IoT) sensor data, and (ii) short-term arm and hand movement data from 30 fps video of domestic tasks. The robotics and health care communities use differing terms and taxonomies for representing tasks and motions. In this work, we derive and discuss a robotics-relevant taxonomy from quantitative ADL task and motion data in an attempt to ameliorate the taxonomic differences between the two communities. Our quantitative results provide direction for the development of better assistive robots that support the true demands of the healthcare community.

* Submitted to IROS 2021. arXiv admin note: substantial text overlap with arXiv:2101.02750 