
Yiming Xie


Imperial College London

OmniControl: Control Any Joint at Any Time for Human Motion Generation

Oct 12, 2023
Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, Huaizu Jiang

We present a novel approach named OmniControl for incorporating flexible spatial control signals into a text-conditioned human motion generation model based on the diffusion process. Unlike previous methods that can only control the pelvis trajectory, OmniControl can incorporate flexible spatial control signals over different joints at different times with a single model. Specifically, we propose analytic spatial guidance that ensures the generated motion tightly conforms to the input control signals. At the same time, realism guidance is introduced to refine all the joints and generate more coherent motion. Both forms of guidance are essential, and they are highly complementary in balancing control accuracy and motion realism. By combining them, OmniControl generates motions that are realistic, coherent, and consistent with the spatial constraints. Experiments on the HumanML3D and KIT-ML datasets show that OmniControl not only achieves significant improvement over state-of-the-art methods on pelvis control but also shows promising results when incorporating constraints over other joints.
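
As a rough illustration of the analytic spatial guidance described above, the sketch below perturbs the noisy sample toward sparse joint constraints using the gradient of a masked position error on the denoiser's clean-motion prediction. All names, shapes, and the step size are assumptions for illustration, not the paper's implementation, and the complementary realism guidance is omitted.

```python
import torch

def spatial_guidance_step(x_t, t, denoiser, control, mask, step_size=0.1):
    """One guidance step: pull the predicted motion toward sparse joint controls.

    control: (T, J, 3) target global joint positions.
    mask:    (T, J) with 1 where a joint is constrained at that frame, 0 elsewhere.
    """
    x_t = x_t.detach().requires_grad_(True)
    x0_pred = denoiser(x_t, t)                       # predicted clean motion, (T, J, 3)
    err = ((x0_pred - control) ** 2).sum(dim=-1)     # per-joint squared distance
    loss = (err * mask).sum() / mask.sum().clamp(min=1)
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_t - step_size * grad).detach()         # gradient step toward the constraints
```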

* Project page: https://neu-vi.github.io/omnicontrol/ 

Pixel-Aligned Recurrent Queries for Multi-View 3D Object Detection

Oct 02, 2023
Yiming Xie, Huaizu Jiang, Georgia Gkioxari, Julian Straub

We present PARQ - a multi-view 3D object detector with transformer and pixel-aligned recurrent queries. Unlike previous works that use learnable features or only encode 3D point positions as queries in the decoder, PARQ leverages appearance-enhanced queries initialized from reference points in 3D space and updates their 3D locations with recurrent cross-attention operations. Incorporating pixel-aligned features and cross-attention enables the model to encode the necessary 3D-to-2D correspondences and capture global contextual information of the input images. PARQ outperforms prior best methods on the ScanNet and ARKitScenes datasets, learns and detects faster, is more robust to distribution shifts in reference points, can leverage additional input views without retraining, and can adapt inference compute by changing the number of recurrent iterations.
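
The sketch below is a toy rendition of the recurrent query refinement idea, assuming queries are initialized from 3D reference points and updated by cross-attention over image features. Module names and shapes are hypothetical, and the pixel-aligned feature sampling that PARQ performs by projecting the points into each view is only noted in a comment.

```python
import torch
import torch.nn as nn

class RecurrentQueryRefiner(nn.Module):
    """Toy recurrent refinement: queries start from 3D reference points and are
    repeatedly updated with cross-attention over multi-view image features."""

    def __init__(self, dim=256, num_iters=3):
        super().__init__()
        self.num_iters = num_iters
        self.point_embed = nn.Linear(3, dim)            # encode 3D reference points as queries
        self.cross_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.delta_head = nn.Linear(dim, 3)             # predict a 3D position update

    def forward(self, ref_points, image_feats):
        """ref_points: (B, Q, 3); image_feats: (B, N, dim) flattened multi-view features."""
        points = ref_points
        for _ in range(self.num_iters):
            queries = self.point_embed(points)
            # PARQ additionally gathers pixel-aligned features by projecting
            # `points` into each view; this sketch only cross-attends.
            queries, _ = self.cross_attn(queries, image_feats, image_feats)
            points = points + self.delta_head(queries)  # recurrent 3D location update
        return points
```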

* ICCV 2023. Project page: https://ymingxie.github.io/parq 

Diagnosing Human-object Interaction Detectors

Aug 16, 2023
Fangrui Zhu, Yiming Xie, Weidi Xie, Huaizu Jiang

Although we have witnessed significant progress in human-object interaction (HOI) detection, with mAP (mean Average Precision) rising steadily, a single mAP score is too coarse to give an informative summary of a model's performance or to explain why one approach is better than another. In this paper, we introduce a diagnosis toolbox for analyzing the error sources of existing HOI detection models. We first conduct a holistic investigation of the HOI detection pipeline, which consists of human-object pair detection followed by interaction classification. We define a set of errors and an oracle that fixes each of them; by measuring the mAP improvement obtained from fixing an error with its oracle, we obtain a detailed analysis of how much each error matters. We then examine human-object pair detection and interaction classification separately and analyze the models' behavior. For the detection task, we investigate both recall and precision, measuring the coverage of ground-truth human-object pairs as well as the level of noise in the detections. For the classification task, we compute mAP for interaction classification alone, without considering the detection scores, and we also measure how well the models differentiate human-object pairs with and without actual interactions using the AP (Average Precision) score. Our toolbox is applicable to different methods across different datasets and is available at https://github.com/neu-vi/Diag-HOI.
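
Schematically, the oracle-based analysis can be pictured as the loop below: apply each error type's oracle fix and record the resulting mAP gain. The callables here are placeholders for illustration; the released toolbox at the link above is the reference implementation.

```python
def diagnose(detections, ground_truth, oracles, compute_map):
    """Report the mAP gain from fixing each error type with its oracle.

    oracles: mapping from error name to a callable that returns detections
             with that error corrected (e.g. a mislocalized human box fixed).
    """
    base_map = compute_map(detections, ground_truth)
    gains = {}
    for error_name, oracle_fix in oracles.items():
        fixed = oracle_fix(detections, ground_truth)
        gains[error_name] = compute_map(fixed, ground_truth) - base_map
    return base_map, gains
```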


Direct Superpoints Matching for Fast and Robust Point Cloud Registration

Jul 03, 2023
Aniket Gupta, Yiming Xie, Hanumant Singh, Huaizu Jiang

Although deep neural networks endow downsampled superpoints with discriminative feature representations, state-of-the-art methods rarely rely on matching them directly, mainly for two reasons. First, the correspondences are inevitably noisy, so RANSAC-like refinement is usually adopted; such ad hoc postprocessing, however, is slow and not differentiable, and therefore cannot be jointly optimized with feature learning. Second, superpoints are sparse, so more RANSAC iterations are needed. Existing approaches use a coarse-to-fine strategy to propagate the superpoint correspondences to the point level, but the resulting point correspondences are not discriminative enough and still require postprocessing refinement. In this paper, we present a simple yet effective approach that extracts correspondences by directly matching superpoints with a global softmax layer in an end-to-end manner and uses them to determine the rigid transformation between the source and target point clouds. Compared with methods that directly predict corresponding points, leveraging the rich information in the superpoint matchings yields a more accurate estimate of the transformation and effectively filters out outliers without any postprocessing refinement. As a result, our approach is not only fast but also achieves state-of-the-art results on the challenging ModelNet and 3DMatch benchmarks. Our code and model weights will be publicly released.
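
A simplified, self-contained sketch of the core idea, matching superpoints with a global softmax and recovering the rigid transform with a weighted Procrustes step, is shown below. The confidence weighting and temperature are illustrative assumptions; in the paper the matching is trained end-to-end together with the features.

```python
import torch

def match_and_register(src_feats, tgt_feats, src_pts, tgt_pts, temperature=0.1):
    """src_feats: (N, D), tgt_feats: (M, D); src_pts: (N, 3), tgt_pts: (M, 3)."""
    sim = src_feats @ tgt_feats.T / temperature
    match = sim.softmax(dim=1)                     # soft correspondence per source superpoint
    corr = match @ tgt_pts                         # expected matching location in the target
    conf = match.max(dim=1).values                 # matching confidence down-weights outliers
    w = conf / conf.sum()
    src_c = (w[:, None] * src_pts).sum(0)          # weighted centroids
    tgt_c = (w[:, None] * corr).sum(0)
    H = (src_pts - src_c).T @ (w[:, None] * (corr - tgt_c))
    U, _, Vt = torch.linalg.svd(H)
    d = torch.sign(torch.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ torch.diag(torch.stack([torch.ones_like(d), torch.ones_like(d), d])) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```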


PlanarRecon: Real-time 3D Plane Detection and Reconstruction from Posed Monocular Videos

Jun 15, 2022
Yiming Xie, Matheus Gadelha, Fengting Yang, Xiaowei Zhou, Huaizu Jiang

We present PlanarRecon -- a novel framework for globally coherent detection and reconstruction of 3D planes from a posed monocular video. Unlike previous works that detect planes in 2D from a single image, PlanarRecon incrementally detects planes in 3D for each video fragment, which consists of a set of keyframes, from a volumetric representation of the scene using neural networks. A learning-based tracking and fusion module is designed to merge planes from previous fragments to form a coherent global plane reconstruction. This design allows PlanarRecon to integrate observations from multiple views within each fragment and temporal information across fragments, resulting in an accurate and coherent reconstruction of the scene abstraction with low-polygon geometry. Experiments show that the proposed approach achieves state-of-the-art performance on the ScanNet dataset while running in real time.
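
As a rough analogue of the tracking-and-fusion step, the sketch below matches planes from a new fragment to the global plane set by normal and offset similarity and merges them with a weighted average. PlanarRecon learns this association with a network, so the plane representation and thresholds here are purely illustrative.

```python
import numpy as np

def fuse_planes(global_planes, fragment_planes, normal_thresh=0.9, offset_thresh=0.1):
    """Each plane is a dict with unit 'normal' (3,), scalar 'offset', and scalar 'weight'."""
    for new in fragment_planes:
        best, best_sim = None, normal_thresh
        for plane in global_planes:
            sim = float(np.dot(plane["normal"], new["normal"]))
            if sim > best_sim and abs(plane["offset"] - new["offset"]) < offset_thresh:
                best, best_sim = plane, sim
        if best is None:
            global_planes.append(dict(new))        # start a new plane track
        else:                                      # merge via weighted running average
            w = best["weight"] + new["weight"]
            best["normal"] = (best["weight"] * best["normal"] + new["weight"] * new["normal"]) / w
            best["normal"] /= np.linalg.norm(best["normal"])
            best["offset"] = (best["weight"] * best["offset"] + new["weight"] * new["offset"]) / w
            best["weight"] = w
    return global_planes
```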

* CVPR 2022. Project page: https://neu-vi.github.io/planarrecon/ 

NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video

Apr 01, 2021
Jiaming Sun, Yiming Xie, Linghao Chen, Xiaowei Zhou, Hujun Bao

We present a novel framework named NeuralRecon for real-time 3D scene reconstruction from a monocular video. Unlike previous methods that estimate single-view depth maps separately on each keyframe and fuse them later, we propose to directly and sequentially reconstruct local surfaces, represented as sparse TSDF volumes, for each video fragment with a neural network. A learning-based TSDF fusion module based on gated recurrent units guides the network to fuse features from previous fragments. This design allows the network to capture the local smoothness and global shape priors of 3D surfaces when sequentially reconstructing them, resulting in accurate, coherent, and real-time surface reconstruction. Experiments on the ScanNet and 7-Scenes datasets show that our system outperforms state-of-the-art methods in terms of both accuracy and speed. To the best of our knowledge, this is the first learning-based system able to reconstruct dense, coherent 3D geometry in real time.
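
A minimal sketch of the GRU-based fragment fusion idea follows, assuming per-voxel features for the current fragment and a persistent hidden state of the same shape. The module names, the dense GRUCell, and the TSDF head are illustrative stand-ins for the paper's fusion module, which operates on sparse 3D feature volumes.

```python
import torch
import torch.nn as nn

class FragmentFusion(nn.Module):
    """Toy GRU fusion: the hidden state carries history across fragments."""

    def __init__(self, dim=64):
        super().__init__()
        self.gru = nn.GRUCell(dim, dim)        # stand-in for a sparse-conv GRU
        self.tsdf_head = nn.Linear(dim, 1)     # regress a TSDF value per voxel

    def forward(self, frag_feats, hidden):
        """frag_feats, hidden: (num_voxels, dim) features for the fragment's voxels."""
        hidden = self.gru(frag_feats, hidden)  # fuse current observation with history
        tsdf = torch.tanh(self.tsdf_head(hidden))
        return tsdf, hidden
```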

* Accepted to CVPR 2021 as Oral Presentation. Project page: https://zju3dv.github.io/neuralrecon/ 

The Case for Retraining of ML Models for IoT Device Identification at the Edge

Nov 17, 2020
Roman Kolcun, Diana Andreea Popescu, Vadim Safronov, Poonam Yadav, Anna Maria Mandalari, Yiming Xie, Richard Mortier, Hamed Haddadi

Internet-of-Things (IoT) devices are known to be the source of many security problems, and as such they would greatly benefit from automated management. This requires robustly identifying devices so that appropriate network security policies can be applied. We address this challenge by exploring how to accurately identify IoT devices based on their network behavior, using resources available at the edge of the network. In this paper, we compare the accuracy of five different machine learning models (tree-based and neural network-based) for identifying IoT devices, using packet trace data from a large IoT testbed, and show that all models need to be updated over time to avoid significant degradation in accuracy. To update the models effectively, we find it necessary to use data gathered from the deployment environment, e.g., the household. We therefore evaluate our approach using hardware resources and data sources representative of those available at the edge of the network, such as in an IoT deployment. We show that updating neural network-based models at the edge is feasible, as they require low computational and memory resources and their structure is amenable to being updated. Our results show that it is possible to achieve device identification and categorization with over 80% and 90% accuracy, respectively, at the edge.
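
The loop below sketches the kind of edge-side retraining the paper argues for: track accuracy on successive batches of locally collected traffic features and incrementally update a small neural model in place. Feature extraction, batch granularity, and model size are assumptions for illustration, not the paper's exact setup.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def retrain_at_edge(model, batches):
    """batches: iterable of (X, y) traffic-feature/label arrays gathered in the deployment."""
    history = []
    for X, y in batches:
        history.append(accuracy_score(y, model.predict(X)))  # accuracy before updating
        model.partial_fit(X, y)                              # cheap incremental update on-device
    return history

# Initial model; the first partial_fit call must enumerate every device class.
# model = MLPClassifier(hidden_layer_sizes=(64,))
# model.partial_fit(X0, y0, classes=all_device_classes)
```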

* 13 pages, 8 figures, 4 tables 

Disp R-CNN: Stereo 3D Object Detection via Shape Prior Guided Instance Disparity Estimation

Apr 07, 2020
Jiaming Sun, Linghao Chen, Yiming Xie, Siyu Zhang, Qinhong Jiang, Xiaowei Zhou, Hujun Bao

In this paper, we propose a novel system named Disp R-CNN for 3D object detection from stereo images. Many recent works solve this problem by first recovering a point cloud with disparity estimation and then applying a 3D detector. The disparity map is computed for the entire image, which is costly and fails to leverage category-specific priors. In contrast, we design an instance disparity estimation network (iDispNet) that predicts disparity only for pixels on objects of interest and learns a category-specific shape prior for more accurate disparity estimation. To address the scarcity of disparity annotations for training, we propose to use a statistical shape model to generate dense disparity pseudo-ground-truth without the need for LiDAR point clouds, which makes our system more widely applicable. Experiments on the KITTI dataset show that, even when LiDAR ground truth is not available at training time, Disp R-CNN achieves competitive performance and outperforms previous state-of-the-art methods by 20% in terms of average precision.
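
For intuition, the sketch below shows how instance-level disparity can be lifted to an object point cloud: only the pixels inside the object mask are back-projected using the stereo calibration. Variable names are illustrative; the released code at the link below is the reference.

```python
import numpy as np

def instance_disparity_to_points(disparity, mask, fx, fy, cx, cy, baseline):
    """disparity, mask: (H, W) arrays; returns (N, 3) camera-frame points for masked pixels."""
    v, u = np.nonzero(mask & (disparity > 0))  # only pixels on the object of interest
    d = disparity[v, u]
    z = fx * baseline / d                      # depth from stereo disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)         # point cloud fed to the 3D detector
```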

* Accepted to CVPR 2020. Code is available at https://github.com/zju3dv/disprcnn 