Yifan Wang

Kinetostatic Path Planning for Continuum Robots By Sampling on Implicit Manifold

Sep 16, 2023
Yifan Wang, Yue Chen

Continuum robots (CRs) offer excellent dexterity and compliance in contrast to rigid-link robots, making them suitable for navigating through, and interacting with, confined environments. However, the study of path planning for CRs under external elastic contact remains limited. The challenge lies in the fact that a CR in contact can have multiple possible configurations, rendering the forward kinematics ill-defined and making the set of feasible robot configurations non-trivial to characterize. In this paper, we propose to solve this problem by performing quasi-static path planning on an implicit manifold. We model elastic obstacles as external potential fields and formulate the robot statics in the potential field as the extremal trajectory of an optimal control problem obtained by the first-order variational principle. We show that the set of stable robot configurations is a smooth manifold diffeomorphic to a submanifold embedded in the product space of the CR actuation and base internal wrench. We then perform path planning on this manifold using AtlasRRT*, a sampling-based planner dedicated to planning on implicit manifolds. Simulations in different operation scenarios show that the proposed planner outperforms Euclidean-space planners in terms of success rate and computational efficiency.

* 7 pages, 4 figures, submitted to IEEE ICRA 2024 
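
For intuition, here is a minimal sketch of the projection step that atlas-based manifold samplers such as AtlasRRT* rely on: random samples are pulled onto the constraint manifold by Newton iterations before being connected into a tree. The constraint F, its Jacobian J, and the toy circle manifold below are illustrative stand-ins, not the paper's actual kinetostatic constraints.

```python
import numpy as np

def project_to_manifold(x, F, J, tol=1e-8, max_iter=50):
    """Project a sample x onto the implicit manifold {x : F(x) = 0}
    via Newton iterations using the Jacobian pseudo-inverse -- the
    basic operation behind atlas-based samplers such as AtlasRRT*."""
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x              # converged onto the manifold
        x = x - np.linalg.pinv(J(x)) @ r
    return None                   # projection failed; discard the sample

# Toy stand-in constraint: the unit circle F(x) = x0^2 + x1^2 - 1 = 0.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
J = lambda x: np.array([[2 * x[0], 2 * x[1]]])
print(project_to_manifold(np.array([2.0, 0.5]), F, J))  # lands on the circle
```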

VEATIC: Video-based Emotion and Affect Tracking in Context Dataset

Sep 15, 2023
Zhihang Ren, Jefferson Ortega, Yifan Wang, Zhimin Chen, Yunhui Guo, Stella X. Yu, David Whitney

Human affect recognition has been a significant topic in psychophysics and computer vision. However, currently published datasets have many limitations. For example, most contain frames that show only facial expressions. Due to these limitations, it is very hard either to understand the mechanisms of human affect recognition or to train computer vision models that generalize well to common cases. In this work, we introduce a large new dataset, the Video-based Emotion and Affect Tracking in Context Dataset (VEATIC), that addresses the limitations of previous datasets. VEATIC has 124 video clips from Hollywood movies, documentaries, and home videos, with continuous valence and arousal ratings for each frame obtained via real-time annotation. Along with the dataset, we propose a new computer vision task: inferring the affect of a selected character from both context and character information in each video frame. Additionally, we propose a simple model to benchmark this new task, and we compare the performance of the model pretrained on our dataset with models trained on other similar datasets. Experiments show the competitive results of our pretrained model via VEATIC, indicating the generalizability of VEATIC. Our dataset is available at https://veatic.github.io.
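
As a rough illustration of the proposed task (not the paper's benchmark model), a two-stream baseline might encode the full frame for context and the character crop separately, then regress per-frame valence and arousal. All layer sizes and names below are assumptions.

```python
import torch
import torch.nn as nn

class TwoStreamAffectNet(nn.Module):
    """Hypothetical baseline for the VEATIC task: a context branch sees
    the full frame, a character branch sees the cropped character, and a
    fused head regresses per-frame (valence, arousal) in [-1, 1]."""
    def __init__(self, feat_dim=128):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
        self.context_enc = encoder()
        self.character_enc = encoder()
        self.head = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, 2), nn.Tanh(),  # (valence, arousal)
        )

    def forward(self, frame, character_crop):
        z = torch.cat([self.context_enc(frame),
                       self.character_enc(character_crop)], dim=-1)
        return self.head(z)
```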


Neural Gradient Regularizer

Sep 13, 2023
Shuang Xu, Yifan Wang, Zixiang Zhao, Jiangjun Peng, Xiangyong Cao, Deyu Meng, Yulun Zhang, Radu Timofte, Luc Van Gool

Owing to their significant success, priors imposed on gradient maps have consistently been a subject of great interest in image processing. Total variation (TV), one of the most representative regularizers, is known for its ability to capture the intrinsic sparsity prior underlying gradient maps. Nonetheless, TV and its variants often underestimate the gradient maps, weakening edges and details whose gradients should not be zero in the original image (i.e., image structures that are not describable by sparse gradient priors). Recently, total deep variation (TDV) has been introduced, which assumes sparsity of feature maps and provides a flexible regularization learned from large-scale datasets for a specific task. However, TDV requires retraining the network when the image or task changes, limiting its versatility. To alleviate this issue, in this paper we propose a neural gradient regularizer (NGR) that expresses the gradient map as the output of a neural network. Unlike existing methods, NGR does not rely on subjective sparsity or other prior assumptions on image gradient maps, thereby avoiding their underestimation. NGR is applicable to various image types and different image processing tasks, functioning in a zero-shot learning fashion, which makes it a versatile and plug-and-play regularizer. Extensive experimental results demonstrate the superior performance of NGR over state-of-the-art counterparts on a range of different tasks, further validating its effectiveness and versatility.
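
To make the contrast concrete, the sketch below sets a classical TV penalty next to an NGR-style term that matches image gradients to network-generated gradient maps instead of shrinking them toward zero. The layer sizes and the exact coupling are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

def grads(x):
    """Forward-difference gradients of an image tensor (B, C, H, W)."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def tv_loss(x):
    """Classical TV: an L1 sparsity prior that shrinks gradients to zero."""
    dx, dy = grads(x)
    return dx.abs().mean() + dy.abs().mean()

class NGRStyleRegularizer(nn.Module):
    """Sketch of the NGR idea: a small network generates the gradient
    maps, and the image's gradients are matched to them rather than
    penalized for being nonzero. Layer sizes are illustrative."""
    def __init__(self, channels=3):
        super().__init__()
        self.gen = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2 * channels, 3, padding=1),  # predicts (dx, dy)
        )

    def forward(self, x):
        dx, dy = grads(x)
        c = x.shape[1]
        pred = self.gen(x)
        pdx, pdy = pred[:, :c], pred[:, c:]
        # Crop predictions to the finite-difference shapes before matching.
        return ((dx - pdx[..., :, 1:]) ** 2).mean() + \
               ((dy - pdy[..., 1:, :]) ** 2).mean()
```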


Towards Deeply Unified Depth-aware Panoptic Segmentation with Bi-directional Guidance Learning

Aug 14, 2023
Junwen He, Yifan Wang, Lijun Wang, Huchuan Lu, Jun-Yan He, Jin-Peng Lan, Bin Luo, Yifeng Geng, Xuansong Xie

Depth-aware panoptic segmentation is an emerging topic in computer vision which combines semantic and geometric understanding for more robust scene interpretation. Recent works pursue unified frameworks to tackle this challenge but mostly still treat it as two individual learning tasks, which limits their potential for exploiting cross-domain information. We propose a deeply unified framework for depth-aware panoptic segmentation, which performs both segmentation and depth estimation in a per-segment manner with identical object queries. To narrow the gap between the two tasks, we further design a geometric query enhancement method, which integrates scene geometry into object queries using latent representations. In addition, we propose a bi-directional guidance learning approach that facilitates cross-task feature learning by taking advantage of the tasks' mutual relations. Our method sets a new state of the art for depth-aware panoptic segmentation on both the Cityscapes-DVPS and SemKITTI-DVPS datasets. Moreover, our guidance learning approach is shown to deliver performance improvements even with incomplete supervision labels.

* to be published in ICCV 2023 
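
A minimal sketch of the "identical object queries" idea: the same queries that produce class scores and mask logits also produce per-segment depth maps, so the two tasks share one set of per-segment representations. Dimensions and head designs below are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class JointSegDepthHead(nn.Module):
    """Per-segment joint prediction with shared object queries: each
    query yields a class, a mask, and a per-segment depth map."""
    def __init__(self, d_model=256, num_classes=19):
        super().__init__()
        self.cls_head = nn.Linear(d_model, num_classes + 1)  # +1: no-object
        self.mask_embed = nn.Linear(d_model, d_model)
        self.depth_embed = nn.Linear(d_model, d_model)

    def forward(self, queries, pixel_feats):
        # queries: (B, Q, D); pixel_feats: (B, D, H, W)
        logits = self.cls_head(queries)                       # (B, Q, K+1)
        masks = torch.einsum('bqd,bdhw->bqhw',
                             self.mask_embed(queries), pixel_feats)
        depths = torch.einsum('bqd,bdhw->bqhw',
                              self.depth_embed(queries), pixel_feats).relu()
        return logits, masks, depths
```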

Isomer: Isomerous Transformer for Zero-shot Video Object Segmentation

Aug 13, 2023
Yichen Yuan, Yifan Wang, Lijun Wang, Xiaoqi Zhao, Huchuan Lu, Yu Wang, Weibo Su, Lei Zhang

Recent leading zero-shot video object segmentation (ZVOS) works are devoted to integrating appearance and motion information by elaborately designing feature fusion modules and applying them identically across multiple feature stages. Our preliminary experiments show that, with the strong long-range dependency modeling capacity of the Transformer, simply concatenating the two modality features and feeding them to vanilla Transformers for feature fusion can distinctly benefit performance, but at the cost of heavy computation. Through further empirical analysis, we find that attention dependencies learned by the Transformer at different stages exhibit completely different properties: global, query-independent dependency in the low-level stages and semantic-specific dependency in the high-level stages. Motivated by these observations, we propose two Transformer variants: i) a Context-Sharing Transformer (CST) that learns global-shared contextual information within image frames at lightweight computational cost, and ii) a Semantic Gathering-Scattering Transformer (SGST) that models semantic correlation separately for the foreground and background and reduces computation with a soft token merging mechanism. We apply CST and SGST to low-level and high-level feature fusion, respectively, formulating a level-isomerous Transformer framework for the ZVOS task. Compared with a baseline that uses vanilla Transformers for multi-stage fusion, ours increases speed by 13 times and achieves new state-of-the-art ZVOS performance. Code is available at https://github.com/DLUT-yyc/Isomer.

* ICCV 2023 
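
For intuition about the soft token merging used in SGST, the sketch below pools N tokens into M « N merged tokens with soft assignment weights, cutting attention cost. The random-projection scoring is a stand-in for whatever learned scoring the paper uses.

```python
import torch

def soft_token_merge(tokens, num_merged):
    """Soft token merging sketch: pool tokens into a smaller set via a
    softmax assignment instead of hard selection, reducing attention
    cost from O(N^2) to roughly O(N*M)."""
    B, N, D = tokens.shape
    proj = torch.randn(D, num_merged, device=tokens.device)  # stand-in scorer
    assign = torch.softmax(tokens @ proj, dim=1)             # (B, N, M)
    merged = torch.einsum('bnm,bnd->bmd', assign, tokens)
    merged = merged / assign.sum(dim=1).unsqueeze(-1).clamp_min(1e-6)
    return merged                                            # (B, M, D)

x = torch.randn(2, 1024, 64)
print(soft_token_merge(x, 16).shape)  # torch.Size([2, 16, 64])
```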

New Interaction Paradigm for Complex EDA Software Leveraging GPT

Jul 27, 2023
Boyu Han, Xinyu Wang, Yifan Wang, Junyu Yan, Yidong Tian

In the rapidly growing field of electronic design automation (EDA), professional software such as KiCad, Cadence, and Altium Designer provides increasingly extensive design functionality. However, the intricate command structure and steep learning curve create a barrier, particularly for novice printed circuit board (PCB) designers, who struggle to select appropriate functions or plugins for varying design purposes and lack intuitive learning methods beyond traditional documentation, videos, and online forums. To address this challenge, we develop SmartonAI, an artificial intelligence (AI) interaction assistant plugin for EDA software, taking KiCad as the first example. SmartonAI is inspired by the HuggingGPT framework and employs large language models, such as GPT and BERT, to facilitate task planning and execution. On receiving a designer's request, SmartonAI breaks it down into subtasks and efficiently executes them, for example by analyzing relevant help-documentation paragraphs and invoking different plugins, while leveraging the built-in schematic and PCB manipulation functions of both SmartonAI itself and the host software. Our preliminary results demonstrate that SmartonAI can significantly streamline the PCB design process by simplifying complex commands into intuitive language-based interactions. By harnessing the powerful language capabilities of ChatGPT and the rich design functions of KiCad, the plugin effectively bridges the gap between complex EDA software and user-friendly interaction. Meanwhile, the paradigm behind SmartonAI can also extend to other complex software systems, illustrating the immense potential of AI-assisted user interfaces in advancing digital interactions across various domains.
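
A minimal sketch of the HuggingGPT-style loop the abstract describes: a language model decomposes the request into subtasks, and each subtask is dispatched to a registered plugin. The plugin names, the prompt, and the `llm` callable are hypothetical; SmartonAI's actual plugin set and prompts are not shown here.

```python
import json

# Hypothetical plugin registry; the real SmartonAI plugin set differs.
PLUGINS = {
    "search_docs": lambda q: f"(relevant help-doc paragraphs for: {q})",
    "run_drc":     lambda q: "(design rule check report)",
    "place_part":  lambda q: f"(schematic/PCB edit: {q})",
}

PLANNER_PROMPT = ("You are an EDA assistant. Break the user's request into "
                  "subtasks. Reply as a JSON list of objects "
                  '{"plugin": <name>, "input": <text>} using only these '
                  "plugins: " + ", ".join(PLUGINS))

def handle_request(user_request, llm):
    """Plan with the LLM, then dispatch each subtask to its plugin.
    `llm` is any callable mapping a prompt string to a completion
    string (e.g. a wrapper around a GPT chat API)."""
    plan = json.loads(llm(PLANNER_PROMPT + "\n\nRequest: " + user_request))
    return [PLUGINS[step["plugin"]](step["input"]) for step in plan]
```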
