Yonggen Ling

Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation

Jul 09, 2023
Boxiang Zhang, Zunran Wang, Yonggen Ling, Yuanyuan Guan, Shenghao Zhang, Wenhui Li

Existing methods of cross-modal domain adaptation for 3D semantic segmentation predict results only via the 2D-3D complementarity obtained by cross-modal feature matching. However, because supervision is lacking in the target domain, this complementarity is not always reliable, and the results are not ideal when the domain gap is large. To address the lack of supervision, we introduce masked modeling into this task and propose Mx2M, a method that uses masked cross-modality modeling to reduce the large domain gap. Mx2M contains two components. One is the core solution, cross-modal removal and prediction (xMRP), which makes Mx2M adapt to various scenarios and provides cross-modal self-supervision. The other is a new way of cross-modal feature matching, the dynamic cross-modal filter (DxMF), which ensures that the whole method dynamically uses more suitable 2D-3D complementarity. Evaluating Mx2M on three domain adaptation scenarios, Day/Night, USA/Singapore, and A2D2/SemanticKITTI, brings large improvements over previous methods on many metrics.
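
As a rough illustration of the masked cross-modal self-supervision idea, here is a minimal PyTorch sketch; the module name, feature dimensions, and masking scheme are our assumptions, not the paper's xMRP/DxMF implementation. Features of one modality are removed at random positions and predicted from the other modality, giving a supervisory signal without target-domain labels.

```python
# Hypothetical sketch of masked cross-modal self-supervision (not the authors' code).
import torch
import torch.nn as nn

class MaskedCrossModalHead(nn.Module):
    """Predict features of one modality from the other at masked positions."""
    def __init__(self, dim_src, dim_tgt, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim_src, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim_tgt))

    def forward(self, feat_src, feat_tgt, mask_ratio=0.5):
        # feat_src, feat_tgt: (N, C_src), (N, C_tgt) features at N 2D-3D correspondences.
        mask = torch.rand(feat_tgt.shape[0], device=feat_tgt.device) < mask_ratio
        pred = self.mlp(feat_src)                    # cross-modal prediction
        # Self-supervised loss: reconstruct only the removed target features.
        return nn.functional.mse_loss(pred[mask], feat_tgt[mask].detach())

# Toy usage with random "2D" and "3D" features at 1024 correspondences.
f2d, f3d = torch.randn(1024, 64), torch.randn(1024, 16)
loss = MaskedCrossModalHead(64, 16)(f2d, f3d) + MaskedCrossModalHead(16, 64)(f3d, f2d)
```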

A Miniaturised Camera-based Multi-Modal Tactile Sensor

Mar 06, 2023
Kaspar Althoefer, Yonggen Ling, Wanlin Li, Xinyuan Qian, Wang Wei Lee, Peng Qi

Alongside the huge recent progress in camera and computer vision technology, camera-based sensors have shown considerable promise for tactile sensing. In comparison with competing technologies (whether resistive, capacitive, or magnetic), they offer very high resolution while suffering from fewer wiring problems. The human tactile system is composed of various types of mechanoreceptors, each able to perceive and process distinct information such as force, pressure, and texture. Camera-based tactile sensors such as GelSight mainly focus on high-resolution geometric sensing on a flat surface, and their force measurement capabilities are limited by the hysteresis and non-linearity of the silicone material. In this paper, we present a miniaturised dome-shaped camera-based tactile sensor that allows accurate force and tactile sensing in a single coherent system. The key novelties of the sensor design are as follows. First, we show how to build a smooth silicone hemispheric sensing medium with uniform markers on its curved surface. Second, we enhance the illumination of the rounded silicone with diffused LEDs. Third, we construct a force-sensitive mechanical structure in a compact form factor, using springs to perceive forces accurately. Our multi-modal sensor acquires multi-axis forces, local force distribution, and contact geometry, all in real time, and we apply an end-to-end deep learning method to process this information.
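
The abstract states that an end-to-end network maps the internal camera image to multi-axis forces, local force distribution, and contact geometry. A hypothetical multi-head regressor of that shape might look as follows; the architecture, sizes, and output heads are illustrative assumptions, not the authors' model.

```python
# Hypothetical multi-output network for a camera-based tactile sensor (illustrative only).
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(              # shared image encoder
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.force_head = nn.Linear(64, 6)          # multi-axis force/torque
        self.geometry_head = nn.Linear(64, 32 * 32) # coarse contact-depth map

    def forward(self, img):
        z = self.backbone(img)
        return self.force_head(z), self.geometry_head(z).view(-1, 32, 32)

force, depth = TactileNet()(torch.randn(1, 3, 128, 128))  # dummy internal camera frame
```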

HVC-Net: Unifying Homography, Visibility, and Confidence Learning for Planar Object Tracking

Sep 19, 2022
Haoxian Zhang, Yonggen Ling

Robust and accurate planar tracking over a whole video sequence is vitally important for many vision applications. The key to planar object tracking is to find object correspondences, modeled by a homography, between the reference image and the tracked image. Existing methods tend to obtain wrong correspondences under appearance variations, camera-object relative motion, and occlusion. To alleviate this problem, we present a unified convolutional neural network (CNN) model that jointly considers homography, visibility, and confidence. First, we introduce correlation blocks that explicitly account for local appearance changes and camera-object relative motion as the base of our model. Second, we jointly learn the homography and the visibility, which links camera-object relative motion with occlusion. Third, we propose a confidence module that actively monitors the estimation quality from the pixel correlation distributions obtained in the correlation blocks. All these modules are plugged into a Lucas-Kanade (LK) tracking pipeline to obtain both accurate and robust planar object tracking. Our approach outperforms state-of-the-art methods on the public POT and TMT datasets. Its superior performance is also verified on a real-world application: synthesizing high-quality in-video advertisements.

* Accepted to ECCV 2022 
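
A minimal sketch of what a correlation block and a correlation-based confidence could look like, in PyTorch; the window size, normalization, and entropy-based confidence below are our choices, not the paper's exact design.

```python
# Illustrative sketch of a local correlation block and a correlation-based confidence
# (assumed form; not the authors' implementation).
import torch
import torch.nn.functional as F

def local_correlation(feat_ref, feat_trk, radius=3):
    """Correlate each reference feature with a (2r+1)^2 neighbourhood in the tracked image.
    feat_ref, feat_trk: (B, C, H, W) feature maps.  Returns (B, (2r+1)^2, H, W)."""
    b, c, h, w = feat_ref.shape
    pad = F.pad(feat_trk, [radius] * 4)
    patches = F.unfold(pad, kernel_size=2 * radius + 1)          # (B, C*K, H*W)
    patches = patches.view(b, c, (2 * radius + 1) ** 2, h, w)
    return (feat_ref.unsqueeze(2) * patches).sum(1) / c ** 0.5   # dot-product correlation

def correlation_confidence(corr):
    """Low entropy of the correlation distribution -> confident, peaky match."""
    p = F.softmax(corr.flatten(2), dim=1)                        # over the search window
    entropy = -(p * (p + 1e-8).log()).sum(1)
    return torch.exp(-entropy).view(corr.shape[0], *corr.shape[2:])

corr = local_correlation(torch.randn(1, 16, 32, 32), torch.randn(1, 16, 32, 32))
conf = correlation_confidence(corr)   # (1, 32, 32) per-pixel confidence in (0, 1]
```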

Domain Adaptation Gaze Estimation by Embedding with Prediction Consistency

Nov 15, 2020
Zidong Guo, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang

Gaze is the essential manifestation of human attention. In recent years, a series of works has achieved high accuracy in gaze estimation. However, inter-personal differences limit the reduction of the subject-independent gaze estimation error. This paper proposes an unsupervised domain adaptation method for gaze estimation that eliminates the impact of inter-personal diversity. In domain adaptation, we design an embedding representation with prediction consistency to ensure that the linear relationships between gaze directions in different domains remain consistent in gaze space and embedding space. Specifically, we employ source gazes to form a locally linear representation in the gaze space for each target-domain prediction. The same linear combination is then applied in the embedding space to generate a hypothesis embedding for the target-domain sample, maintaining prediction consistency. The deviation between the target and source domains is reduced by bringing the predicted embedding of the target-domain sample close to this hypothesis embedding. Guided by the proposed strategy, we design the Domain Adaptation Gaze Estimation Network (DAGEN), which learns embeddings with prediction consistency and achieves state-of-the-art results on both the MPIIGaze and EYEDIAP datasets.

* 16 pages, 6 figures, ACCV 2020 (oral) 
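
A small PyTorch sketch of the prediction-consistency idea as we read it; the nearest-neighbour selection, least-squares weights, and normalization are assumptions. A target prediction is expressed as a linear combination of nearby source gazes, and the same combination of source embeddings yields the hypothesis embedding that the predicted target embedding is pulled towards.

```python
# Sketch of prediction-consistent hypothesis embeddings (assumed details, not DAGEN's code).
import torch

def hypothesis_embedding(tgt_gaze, src_gazes, src_embeds, k=8):
    """Reconstruct the target gaze prediction as a linear combination of its k nearest
    source gazes, then apply the same weights in embedding space."""
    d = ((src_gazes - tgt_gaze) ** 2).sum(-1)                  # distances in gaze space
    idx = d.topk(k, largest=False).indices                     # k nearest source samples
    G = src_gazes[idx]                                         # (k, 2) neighbour gazes
    w = torch.linalg.pinv(G.T) @ tgt_gaze                      # weights with G^T w ~ gaze
    w = w / (w.sum() + 1e-6)                                   # normalised combination
    return (w.unsqueeze(-1) * src_embeds[idx]).sum(0)          # same weights on embeddings

src_gazes, src_embeds = torch.randn(256, 2), torch.randn(256, 64)
tgt_gaze, tgt_embed = torch.randn(2), torch.randn(64)          # network outputs in practice
loss = ((hypothesis_embedding(tgt_gaze, src_gazes, src_embeds) - tgt_embed) ** 2).mean()
```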

Learning End-to-End Action Interaction by Paired-Embedding Data Augmentation

Jul 16, 2020
Ziyang Song, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang

In recognition-based action interaction, robots' responses to human actions are often pre-designed according to recognized categories and are thus stiff. In this paper, we specify a new Interactive Action Translation (IAT) task that aims to learn end-to-end action interaction from unlabeled interactive pairs, removing explicit action recognition. To enable learning on small-scale data, we propose a Paired-Embedding (PE) method for effective and reliable data augmentation. Specifically, our method first utilizes the paired relationships to cluster individual actions in an embedding space. Two actions originally paired can then be replaced with other actions from their respective neighborhoods, assembling new pairs. An Act2Act network based on a conditional GAN then learns from the augmented data. In addition, IAT-test and IAT-train scores are proposed specifically for evaluating methods on our task. Experimental results on two datasets show the strong effect and broad application prospects of our method.

* 16 pages, 7 figures 
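
An illustrative NumPy sketch of neighbourhood-based pair augmentation; clustering is simplified to k-nearest neighbours and all names are hypothetical, not the paper's PE implementation.

```python
# Illustrative sketch of neighbourhood-based pair augmentation (assumed form).
import numpy as np

def augment_pairs(emb_a, emb_b, pairs, k=3, n_new=100, rng=None):
    """Replace each action in a pair with one of its k nearest neighbours in embedding
    space, assembling new (human action, response) pairs from a small paired set."""
    rng = rng or np.random.default_rng(0)
    def knn(emb):
        d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
        return np.argsort(d, axis=1)[:, 1:k + 1]              # exclude the sample itself
    nn_a, nn_b = knn(emb_a), knn(emb_b)
    new_pairs = []
    for _ in range(n_new):
        i, j = pairs[rng.integers(len(pairs))]                # an original interactive pair
        new_pairs.append((rng.choice(nn_a[i]), rng.choice(nn_b[j])))
    return new_pairs

emb_a, emb_b = np.random.randn(50, 16), np.random.randn(50, 16)
pairs = [(i, i) for i in range(50)]                            # toy one-to-one pairing
augmented = augment_pairs(emb_a, emb_b, pairs)
```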

Attention-Oriented Action Recognition for Real-Time Human-Robot Interaction

Jul 02, 2020
Ziyang Song, Ziyi Yin, Zejian Yuan, Chong Zhang, Wanchao Chi, Yonggen Ling, Shenghao Zhang

Despite the notable progress made in action recognition tasks, little work has been done on action recognition specifically for human-robot interaction. In this paper, we explore in depth the characteristics of the action recognition task in interaction scenarios and propose an attention-oriented multi-level network framework to meet the needs of real-time interaction. Specifically, a Pre-Attention network is first employed to roughly focus on the interactor in the scene at low resolution, and fine-grained pose estimation is then performed at high resolution. A second, compact CNN receives the extracted skeleton sequence as input for action recognition, using attention-like mechanisms to effectively capture local spatial-temporal patterns and global semantic information. To evaluate our approach, we construct a new action dataset specifically for the recognition task in interaction scenarios. Experimental results on our dataset, together with the high efficiency (112 fps at 640 x 480 RGBD) achieved on a mobile computing platform (Nvidia Jetson AGX Xavier), demonstrate the applicability of our method to action recognition in real-time human-robot interaction.

* 8 pages, 8 figures 
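
A structural PyTorch sketch of the described two-stage pipeline; all modules are dummy stand-ins and the cropping step is omitted, so this only shows the data flow, not the authors' networks.

```python
# Structural sketch of the two-stage pipeline described above (all modules are dummies).
import torch
import torch.nn as nn

class Pipeline(nn.Module):
    def __init__(self, n_joints=17, n_actions=10):
        super().__init__()
        self.pre_attention = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU(),
                                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                           nn.Linear(8, 4), nn.Sigmoid())   # box in [0,1]^4
        self.pose_net = nn.Sequential(nn.Conv2d(3, 16, 3, 2, 1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(16, n_joints * 2))          # 2D joints
        self.action_net = nn.Sequential(nn.Conv1d(n_joints * 2, 32, 3, padding=1), nn.ReLU(),
                                        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                        nn.Linear(32, n_actions))

    def forward(self, low_res, high_res):
        # 1) coarse attention on the low-resolution frames -> normalised interactor box
        box = self.pre_attention(low_res)              # (T, 4); cropping omitted in sketch
        # 2) fine-grained pose estimation on the (cropped) high-resolution frames
        joints = self.pose_net(high_res)               # (T, n_joints*2)
        # 3) compact CNN over the skeleton sequence -> action logits
        return self.action_net(joints.T.unsqueeze(0))  # (1, n_actions)

logits = Pipeline()(torch.randn(30, 3, 80, 60), torch.randn(30, 3, 320, 240))
```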

Self-supervised Learning of Detailed 3D Face Reconstruction

Oct 25, 2019
Yajing Chen, Fanzi Wu, Zeyu Wang, Yibing Song, Yonggen Ling, Linchao Bao

In this paper, we present an end-to-end learning framework for detailed 3D face reconstruction from a single image. Our approach uses a 3DMM-based coarse model and a displacement map in UV-space to represent a 3D face. Unlike previous work addressing this problem, our learning framework does not require supervision from surrogate ground-truth 3D models computed with traditional approaches. Instead, we utilize the input image itself as supervision during learning. In the first stage, we combine a photometric loss and a facial perceptual loss between the input face and the rendered face to regress a 3DMM-based coarse model. In the second stage, both the input image and the regressed texture of the coarse model are unwrapped into UV-space and then sent through an image-to-image translation network to predict a displacement map in UV-space. The displacement map and the coarse model are used to render a final detailed face, which again can be compared with the original input image to serve as a photometric loss for the second stage. The advantage of learning the displacement map in UV-space is that face alignment can be done explicitly during the unwrapping, so facial details are easier to learn from a large amount of data. Extensive experiments demonstrate the superiority of the proposed method over previous work.
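
A minimal sketch of the photometric supervision reused in both stages: the input image is compared with the rendering of the coarse model (stage one) or of the displacement-refined model (stage two). The renderer and face mask are assumed to be given; the L1 form and normalization below are our assumptions.

```python
# Minimal sketch of the photometric supervision idea (renderer and mask assumed given).
import torch

def photometric_loss(input_img, rendered_img, face_mask):
    """L1 photometric loss restricted to the rendered face region.
    input_img, rendered_img: (B, 3, H, W); face_mask: (B, 1, H, W) in {0, 1}."""
    diff = (input_img - rendered_img).abs() * face_mask
    return diff.sum() / (3.0 * face_mask.sum().clamp(min=1.0))

img = torch.rand(2, 3, 224, 224)
rendered = torch.rand(2, 3, 224, 224)          # e.g. coarse 3DMM or detailed re-rendering
mask = (torch.rand(2, 1, 224, 224) > 0.5).float()
loss = photometric_loss(img, rendered, mask)
```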

MVF-Net: Multi-View 3D Face Morphable Model Regression

Apr 09, 2019
Fanzi Wu, Linchao Bao, Yajing Chen, Yonggen Ling, Yibing Song, Songnan Li, King Ngi Ngan, Wei Liu

We address the problem of recovering the 3D geometry of a human face from a set of facial images in multiple views. While recent studies have shown impressive progress in 3D Morphable Model (3DMM) based facial reconstruction, the settings are mostly restricted to a single view. There is an inherent drawback in the single-view setting: the lack of reliable 3D constraints can cause unresolvable ambiguities. In this paper, we explore 3DMM-based shape recovery in a different setting, where a set of multi-view facial images is given as input. A novel approach is proposed to regress 3DMM parameters from multi-view inputs with an end-to-end trainable convolutional neural network (CNN). Multi-view geometric constraints are incorporated into the network by establishing dense correspondences between different views, leveraging a novel self-supervised view alignment loss. The main ingredient of the view alignment loss is a differentiable dense optical flow estimator that can backpropagate the alignment errors between an input view and a synthetic rendering from another input view, which is projected to the target view through the 3D shape to be inferred. By minimizing the view alignment loss, better 3D shapes can be recovered such that the synthetic projections from one view to another better align with the observed images. Extensive experiments demonstrate the superiority of the proposed method over other 3DMM-based methods.

* CVPR 2019 
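
A sketch of a view-alignment objective in the spirit described above; the cross-view rendering and the differentiable flow estimator are assumed to be given, and penalizing the flow magnitude is our simplification of the alignment error.

```python
# Sketch of a view-alignment objective (flow estimator and cross-view rendering assumed given).
import torch

def view_alignment_loss(rendered_from_other_view, target_view, flow_net):
    """Penalise the dense flow needed to align a synthetic cross-projection with the
    observed target view; zero flow means the inferred 3D shape projects consistently."""
    flow = flow_net(rendered_from_other_view, target_view)   # (B, 2, H, W) dense flow
    return flow.norm(dim=1).mean()

# Toy stand-in for a differentiable flow estimator (a CNN such as a FlowNet in practice).
flow_net = lambda a, b: torch.zeros(a.shape[0], 2, *a.shape[2:], requires_grad=True)
loss = view_alignment_loss(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64), flow_net)
```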

High-Precision Online Markerless Stereo Extrinsic Calibration

Mar 26, 2019
Yonggen Ling, Shaojie Shen

Stereo cameras and dense stereo matching algorithms are core components of many robotic applications, owing to their ability to directly obtain dense depth measurements and their robustness against changes in lighting conditions. However, the performance of dense depth estimation relies heavily on accurate stereo extrinsic calibration. In this work, we present a real-time markerless approach for obtaining high-precision stereo extrinsic calibration using a novel 5-DOF (degrees-of-freedom) parameterization and nonlinear optimization on a manifold, which captures the observability properties of vision-only stereo calibration. Our method minimizes epipolar errors between spatially matched per-frame sparse natural features. It does not require temporal feature correspondences, making it not only invariant to dynamic scenes and illumination changes but also able to run significantly faster than standard bundle-adjustment-based approaches. We introduce a principled method to determine whether the calibration has converged to the required level of accuracy, and we show through online experiments that our approach achieves accuracy comparable to offline marker-based calibration methods. Our method refines the stereo extrinsics to an accuracy sufficient for block-matching-based dense disparity computation, providing a cost-effective way to improve the reliability of stereo vision systems for long-term autonomy.
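
A worked NumPy sketch of the 5-DOF epipolar residual that such a calibration can minimize: the extrinsics are a rotation plus a unit-norm translation direction, since the baseline scale is unobservable from epipolar errors alone. The exact residual form and coordinate conventions here are assumptions, not the paper's code.

```python
# Worked sketch of a 5-DOF epipolar residual (notation assumed; not the paper's code).
import numpy as np

def skew(t):
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0.0]])

def epipolar_residuals(R, t_dir, x_left, x_right):
    """R: 3x3 left-to-right rotation; t_dir: translation direction (only its direction
    matters, hence 5 DOF); x_left, x_right: Nx3 normalised homogeneous rays of per-frame
    left-right feature matches.  Returns the N epipolar errors x_r^T [t]_x R x_l."""
    E = skew(t_dir / np.linalg.norm(t_dir)) @ R      # essential matrix
    return np.einsum('ni,ij,nj->n', x_right, E, x_left)

# Toy check: residuals vanish for points consistent with the true extrinsics.
R, t = np.eye(3), np.array([1.0, 0.0, 0.0])          # rectified-like stereo pair
pts = np.random.randn(20, 3) + np.array([0, 0, 5.0]) # 3D points in the left frame
x_l = pts / pts[:, 2:3]
x_r = (pts - t) @ R.T
x_r /= x_r[:, 2:3]
print(np.abs(epipolar_residuals(R, t, x_l, x_r)).max())   # essentially zero
```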

Probabilistic Dense Reconstruction from a Moving Camera

Mar 26, 2019
Yonggen Ling, Kaixuan Wang, Shaojie Shen

This paper presents a probabilistic approach for online dense reconstruction using a single monocular camera moving through the environment. Compared to spatial stereo, depth estimation from motion stereo is challenging due to insufficient parallax, visual scale changes, pose errors, and similar factors. We utilize both the spatial and temporal correlations of consecutive depth estimates to increase the robustness and accuracy of monocular depth estimation. An online, recursive, probabilistic scheme to compute depth estimates, with corresponding covariances and inlier probability expectations, is proposed in this work. We integrate the obtained depth hypotheses into dense 3D models in an uncertainty-aware way. We show the effectiveness and efficiency of our proposed approach by comparing it with state-of-the-art methods on the TUM RGB-D SLAM and ICL-NUIM datasets. Online indoor and outdoor experiments are also presented to demonstrate performance.
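
A simplified recursive depth filter in the spirit of the description above, tracking a mean, a variance, and an inlier probability per pixel; the soft inlier weighting and Beta-like counts are our assumptions, not the paper's exact probabilistic model.

```python
# Simplified recursive depth filter (illustrative; not the paper's exact scheme).
from dataclasses import dataclass

@dataclass
class DepthHypothesis:
    mu: float      # inverse-depth mean
    var: float     # inverse-depth variance
    a: float = 1.0 # pseudo-counts of inlier measurements
    b: float = 1.0 # pseudo-counts of outlier measurements

    def update(self, z, z_var, inlier_ratio=4.0):
        """Fuse a new inverse-depth measurement z with variance z_var; measurements far
        from the current estimate mostly grow the outlier count instead of the Gaussian."""
        m2 = (z - self.mu) ** 2 / (self.var + z_var)       # Mahalanobis gating distance
        w = inlier_ratio / (inlier_ratio + m2)             # soft inlier weight in (0, 1]
        self.a += w
        self.b += 1.0 - w
        k = w * self.var / (self.var + z_var)              # softened Gaussian fusion gain
        self.mu += k * (z - self.mu)
        self.var *= (1.0 - k)

    def inlier_probability(self):
        return self.a / (self.a + self.b)

h = DepthHypothesis(mu=0.5, var=0.25)
for z in [0.52, 0.49, 2.0, 0.51]:        # one gross outlier among the measurements
    h.update(z, z_var=0.01)
print(h.mu, h.var, h.inlier_probability())
```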
