Min Li

MV-ROPE: Multi-view Constraints for Robust Category-level Object Pose and Size Estimation

Aug 17, 2023
Jiaqi Yang, Yucong Chen, Xiangting Meng, Chenxin Yan, Min Li, Ran Chen, Lige Liu, Tao Sun, Laurent Kneip

We propose a novel framework for RGB-based category-level 6D object pose and size estimation. Our approach relies on the prediction of normalized object coordinate space (NOCS), which serves as an efficient and effective canonical object representation that can be extracted from RGB images. Unlike previous approaches that rely heavily on additional depth readings as input, our novelty lies in leveraging multi-view information, which is commonly available in practical scenarios where a moving camera continuously observes the environment. By introducing multi-view constraints, we obtain accurate camera pose and depth estimates from a monocular dense SLAM framework. Additionally, by incorporating constraints on the relative camera poses, we can apply trimming strategies and robust pose averaging to the multi-view object poses, resulting in more accurate and robust category-level object pose estimates even in the absence of direct depth readings. Furthermore, we introduce a novel NOCS prediction network that significantly improves performance. Our experimental results demonstrate the strong performance of our method, comparable even to state-of-the-art RGB-D methods, across public dataset sequences. We also showcase the generalization ability of our method by evaluating it on self-collected datasets.
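To make the robust pose averaging step concrete, here is a minimal sketch in the spirit of the abstract: per-view rotation estimates are averaged with Markley's quaternion eigenvector method, the most deviant views are trimmed, and the survivors are re-averaged. The trim ratio, the angular distance measure, and the interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def average_quaternions(quats):
    # Markley's method: the mean rotation is the principal eigenvector
    # of the sum of quaternion outer products.
    A = sum(np.outer(q, q) for q in quats)
    return np.linalg.eigh(A)[1][:, -1]  # unit quaternion

def trimmed_pose_average(quats, trans, trim_ratio=0.25):
    """quats: (N, 4) unit quaternions, trans: (N, 3) translations,
    one object-pose estimate per view."""
    q_mean = average_quaternions(quats)
    # Angular distance of each per-view rotation to the current mean.
    ang = 2.0 * np.arccos(np.abs(quats @ q_mean).clip(0.0, 1.0))
    n_keep = max(1, int(len(quats) * (1.0 - trim_ratio)))
    keep = np.argsort(ang)[:n_keep]     # drop the most deviant views
    return average_quaternions(quats[keep]), trans[keep].mean(axis=0)

# Usage (hypothetical): q, t = trimmed_pose_average(view_quats, view_trans)
```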


Fast algorithms for k-submodular maximization subject to a matroid constraint

Jul 26, 2023
Shuxian Niu, Qian Liu, Yang Zhou, Min Li

In this paper, we apply a threshold-decreasing algorithm to maximize $k$-submodular functions under a matroid constraint, which reduces the query complexity compared to the greedy algorithm at only a small loss in approximation ratio. We give a $(\frac{1}{2} - \epsilon)$-approximation algorithm for monotone $k$-submodular function maximization and a $(\frac{1}{3} - \epsilon)$-approximation algorithm for the non-monotone case, with complexity $O(\frac{n(k\cdot EO + IO)}{\epsilon} \log \frac{r}{\epsilon})$, where $r$ denotes the rank of the matroid, and $IO$ and $EO$ denote the number of oracle calls needed to evaluate whether a subset is an independent set and to compute the function value of $f$, respectively. Since a total size constraint can be viewed as a special matroid, the uniform matroid, we also obtain fast algorithms for maximizing $k$-submodular functions subject to a total size constraint as corollaries.
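The following sketch conveys the threshold-decreasing mechanism for the monotone case: instead of scanning for the single best marginal gain in every round, a geometrically decreasing threshold is maintained and any independent (element, position) assignment whose gain clears it is accepted. The oracles, the stopping rule, and the interface are illustrative assumptions, not the paper's exact algorithm.

```python
def threshold_greedy_k_submodular(V, k, f, is_independent, r, eps=0.1):
    """V: ground set, k: number of positions, f: evaluates a partial
    assignment {element: position}, is_independent: matroid oracle on
    the support set, r: matroid rank."""
    S = {}
    # Largest singleton marginal over all (element, position) pairs.
    d = max(f({e: i}) for e in V for i in range(k))
    tau = d
    while tau >= eps * d / r:
        for e in [v for v in V if v not in S]:
            if not is_independent(set(S) | {e}):
                continue
            gains = [f({**S, e: i}) - f(S) for i in range(k)]
            best = max(range(k), key=gains.__getitem__)
            if gains[best] >= tau:     # accept any gain above the threshold
                S[e] = best
        tau *= 1.0 - eps               # geometrically decrease the threshold
    return S
```

Accepting every above-threshold candidate in one pass is what saves function evaluations relative to the classical greedy, which re-scans all candidates for the single best gain at every step.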


Quantized generalized minimum error entropy for kernel recursive least squares adaptive filtering

Jul 04, 2023
Jiacheng He, Gang Wang, Kun Zhang, Shan Zhong, Bei Peng, Min Li

The robustness of the kernel recursive least squares (KRLS) algorithm has recently been improved by combining it with more robust information-theoretic learning criteria, such as minimum error entropy (MEE) and generalized MEE (GMEE), although this also increases the computational complexity of KRLS-type algorithms to a certain extent. To reduce this computational load, in this paper the quantized GMEE (QGMEE) criterion is combined with the KRLS algorithm, yielding two KRLS-type algorithms: quantized kernel recursive MEE (QKRMEE) and quantized kernel recursive GMEE (QKRGMEE). We also investigate the mean error behavior, mean square error behavior, and computational complexity of the proposed algorithms. Finally, simulations and real experimental data are used to verify the feasibility of the proposed algorithms.
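To illustrate the quantization mechanism that bounds the computational load, here is a simplified sketch in the style of a quantized kernel least-mean-square filter: a new input is merged into its nearest dictionary center when it falls within the quantization size, so the dictionary (and hence the per-step cost) stays bounded. This deliberately uses a simple gradient-style update rather than the KRLS recursion or the GMEE criterion; all names and parameters are assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

class QuantizedKernelFilter:
    """Toy quantized kernel adaptive filter: online vector quantization
    keeps the dictionary compact, the mechanism the QGMEE criterion
    exploits in the paper (whose update rule is different)."""
    def __init__(self, step=0.5, q_size=0.3, sigma=1.0):
        self.centers, self.alphas = [], []
        self.step, self.q_size, self.sigma = step, q_size, sigma

    def predict(self, x):
        return sum(a * gaussian_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.alphas))

    def update(self, x, y):
        e = y - self.predict(x)
        if self.centers:
            dists = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(dists))
            if dists[j] <= self.q_size:   # quantize: merge into nearest center
                self.alphas[j] += self.step * e
                return e
        self.centers.append(np.asarray(x, float))  # otherwise grow the dictionary
        self.alphas.append(self.step * e)
        return e
```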


INGB: Informed Nonlinear Granular Ball Oversampling Framework for Noisy Imbalanced Classification

Jul 03, 2023
Min Li, Hao Zhou, Qun Liu, Yabin Shao, Guoying Wang

In classification problems, datasets are often imbalanced, noisy, or complex. Most sampling algorithms only make incremental improvements to the linear sampling mechanism of the synthetic minority oversampling technique (SMOTE). Linear oversampling, however, has several unavoidable drawbacks: it is susceptible to overfitting, and the synthetic samples lack diversity and rarely reflect the original distribution characteristics. In this paper, we propose an informed nonlinear oversampling framework based on granular balls (INGB) as a new direction for oversampling. It uses granular balls to simulate the spatial distribution characteristics of datasets, and informed entropy is utilized to further optimize the granular-ball space. Nonlinear oversampling is then performed following high-dimensional sparsity and an isotropic Gaussian distribution, as sketched below. Furthermore, INGB has good compatibility: not only can it be combined with most SMOTE-based sampling algorithms to improve their performance, but it can also be easily extended to noisy, imbalanced multi-classification problems. The mathematical model and theoretical proofs of INGB are given in this work. Extensive experiments demonstrate that INGB outperforms traditional linear sampling frameworks and algorithms when oversampling complex datasets.
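As a toy illustration of the sampling step, the sketch below fits a single "granular ball" (center plus mean radius) to the minority class and draws synthetic points from an isotropic Gaussian restricted to the ball. The real framework builds many balls and refines them with informed entropy; everything here is an illustrative assumption.

```python
import numpy as np

def granular_ball_oversample(X_min, n_new, seed=None):
    """X_min: (N, d) minority-class samples; returns (n_new, d)
    synthetic samples drawn inside one fitted ball."""
    rng = np.random.default_rng(seed)
    center = X_min.mean(axis=0)
    radius = np.linalg.norm(X_min - center, axis=1).mean()
    d = X_min.shape[1]
    out = []
    while len(out) < n_new:
        # Isotropic Gaussian around the ball center, scaled so most
        # mass falls inside the ball even in high dimensions.
        s = rng.normal(center, radius / np.sqrt(d), size=d)
        if np.linalg.norm(s - center) <= radius:  # keep samples in the ball
            out.append(s)
    return np.vstack(out)
```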

* 15 pages, 6 figures 

Multimodal Zero-Shot Learning for Tactile Texture Recognition

Jun 22, 2023
Guanqun Cao, Jiaqi Jiang, Danushka Bollegala, Min Li, Shan Luo

Tactile sensing plays an irreplaceable role in robotic material recognition. It enables robots to distinguish material properties such as local geometry and texture, especially for materials like textiles. However, most tactile recognition methods can only classify known materials that have been touched and trained on with tactile data, and cannot classify unknown materials for which no tactile training data exist. To solve this problem, we propose a tactile zero-shot learning framework to recognise unknown materials when they are touched for the first time, without requiring training tactile samples. The visual modality, providing tactile cues from sight, and semantic attributes, giving high-level characteristics, are combined to bridge the gap between touched and untouched classes. A generative model is learnt to synthesise tactile features from the corresponding visual images and semantic embeddings, and a classifier can then be trained on the synthesised tactile features of untouched materials for zero-shot recognition. Extensive experiments demonstrate that our proposed multimodal generative model achieves a high recognition accuracy of 83.06% in classifying materials that were not touched before. The robotic experiment demo and the dataset are available at https://sites.google.com/view/multimodalzsl.
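A minimal sketch of the generative zero-shot pipeline described above: a conditional generator maps visual and semantic embeddings (plus noise) to synthetic tactile features, which can then train an ordinary classifier for untouched classes. All dimensions, layer choices, and names are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class TactileGenerator(nn.Module):
    """Toy conditional generator: visual embedding + semantic embedding
    + noise -> synthetic tactile feature."""
    def __init__(self, v_dim=512, s_dim=300, z_dim=64, t_dim=256):
        super().__init__()
        self.z_dim = z_dim
        self.net = nn.Sequential(
            nn.Linear(v_dim + s_dim + z_dim, 512), nn.ReLU(),
            nn.Linear(512, t_dim))

    def forward(self, v, s):
        # Noise gives diverse tactile samples per (visual, semantic) pair.
        z = torch.randn(v.size(0), self.z_dim, device=v.device)
        return self.net(torch.cat([v, s, z], dim=1))

# Zero-shot step (sketch): synthesise tactile features for classes the
# robot has never touched, then fit an ordinary classifier on them, e.g.
# feats = TactileGenerator()(visual_embs, semantic_embs)
```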

* Under review at Robotics and Autonomous Systems 

Fast Segment Anything

Jun 21, 2023
Xu Zhao, Wenchao Ding, Yongqi An, Yinglong Du, Tao Yu, Min Li, Ming Tang, Jinqiao Wang

The recently proposed segment anything model (SAM) has had a significant impact on many computer vision tasks. It is becoming a foundational step for many high-level tasks, like image segmentation, image captioning, and image editing. However, its huge computational cost prevents wider application in industry scenarios. The computation mainly comes from the Transformer architecture operating on high-resolution inputs. In this paper, we propose a faster alternative method for this fundamental task with comparable performance. By reformulating the task as segment generation and prompting, we find that a regular CNN detector with an instance segmentation branch can also accomplish this task well. Specifically, we convert the task to the well-studied instance segmentation task and directly train an existing instance segmentation method using only 1/50 of the SA-1B dataset published by the SAM authors. With our method, we achieve performance comparable to SAM at 50 times higher run-time speed. We give extensive experimental results to demonstrate its effectiveness. The codes and demos will be released at https://github.com/CASIA-IVA-Lab/FastSAM.
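As a rough sketch of the segments-generation-then-prompting idea, the helper below resolves a point prompt: the instance segmentation model has already produced all masks for the image, so prompting reduces to picking the best-scoring mask that contains the point. The function name and interface are hypothetical, not the FastSAM API.

```python
import numpy as np

def point_prompt(masks, scores, point):
    """masks: (N, H, W) boolean array of all generated masks,
    scores: (N,) confidence per mask, point: (x, y) pixel prompt.
    Returns the highest-scoring mask containing the point, or None."""
    x, y = point
    hits = [i for i in range(len(masks)) if masks[i][y, x]]
    if not hits:
        return None
    return masks[max(hits, key=lambda i: scores[i])]
```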

* Technical Report. The code is released at https://github.com/CASIA-IVA-Lab/FastSAM 

DeepGate2: Functionality-Aware Circuit Representation Learning

May 25, 2023
Zhengyuan Shi, Hongyang Pan, Sadaf Khan, Min Li, Yi Liu, Junhua Huang, Hui-Ling Zhen, Mingxuan Yuan, Zhufei Chu, Qiang Xu

Circuit representation learning aims to obtain neural representations of circuit elements and has emerged as a promising research direction that can be applied to various EDA and logic reasoning tasks. Existing solutions, such as DeepGate, have the potential to embed both circuit structural information and functional behavior. However, their capabilities are limited due to weak supervision or flawed model design, resulting in unsatisfactory performance in downstream tasks. In this paper, we introduce DeepGate2, a novel functionality-aware learning framework that significantly improves upon the original DeepGate solution in terms of both learning effectiveness and efficiency. Our approach uses pairwise truth table differences between sampled logic gates as training supervision, along with a well-designed and scalable loss function that explicitly considers circuit functionality. Additionally, we consider inherent circuit characteristics and design an efficient one-round graph neural network (GNN), resulting in an order of magnitude faster learning speed than the original DeepGate solution. Experimental results demonstrate significant improvements in two practical downstream tasks: logic synthesis and Boolean satisfiability solving. The code is available at https://github.com/cure-lab/DeepGate2.
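A minimal sketch of what pairwise truth-table supervision could look like: the distance between two gate embeddings is regressed towards the normalized difference of their simulated truth tables. The cosine distance, the L1 objective, and the tensor shapes are assumptions; the paper's actual loss is more elaborate.

```python
import torch
import torch.nn.functional as F

def pairwise_function_loss(emb_a, emb_b, tt_a, tt_b):
    """emb_*: (B, D) embeddings of two batches of gates;
    tt_*: (B, P) {0,1} responses over P random simulation patterns."""
    # Functional dissimilarity: fraction of patterns where outputs differ.
    target = (tt_a != tt_b).float().mean(dim=1)
    # Embedding distance, rescaled from [0, 2] into [0, 1].
    pred = (1 - F.cosine_similarity(emb_a, emb_b, dim=1)) / 2
    return F.l1_loss(pred, target)
```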


Addressing Variable Dependency in GNN-based SAT Solving

Apr 18, 2023
Zhiyuan Yan, Min Li, Zhengyuan Shi, Wenjie Zhang, Yingcong Chen, Hongce Zhang

The Boolean satisfiability problem (SAT) is fundamental to many applications. Existing works have used graph neural networks (GNNs) for (approximate) SAT solving. Typical GNN-based end-to-end SAT solvers predict SAT solutions concurrently. We show that for a class of symmetric SAT problems, concurrent prediction is guaranteed to produce a wrong answer because it neglects the dependencies among the Boolean variables of a SAT instance. We propose AsymSAT, a GNN-based architecture that integrates recurrent neural networks to generate dependent predictions for variable assignments. Experimental results show that dependent variable prediction extends the solving capability of the GNN-based method, increasing the number of solved SAT instances on large test sets.
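The sketch below illustrates the dependent-prediction idea: rather than emitting all assignments concurrently, a GRU walks over GNN-produced variable embeddings so that each assignment conditions on the previous ones. Layer sizes, the variable ordering, and the module name are illustrative assumptions, not the AsymSAT architecture.

```python
import torch
import torch.nn as nn

class DependentAssigner(nn.Module):
    """Sequential assignment head on top of per-variable GNN embeddings."""
    def __init__(self, var_dim=128, hidden=128):
        super().__init__()
        self.rnn = nn.GRUCell(var_dim + 1, hidden)
        self.head = nn.Linear(hidden, 1)

    def forward(self, var_embs):                 # (num_vars, var_dim)
        h = torch.zeros(1, self.rnn.hidden_size)
        prev = torch.zeros(1, 1)                 # previous assignment prob.
        out = []
        for v in var_embs:                       # sequential, not concurrent
            h = self.rnn(torch.cat([v.unsqueeze(0), prev], dim=1), h)
            prev = torch.sigmoid(self.head(h))
            out.append(prev)
        return torch.cat(out).squeeze(1)         # per-variable probabilities
```

Feeding each prediction back as input is what lets the model break symmetric ties that a purely concurrent predictor cannot resolve.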


DeepSeq: Deep Sequential Circuit Learning

Feb 27, 2023
Sadaf Khan, Zhengyuan Shi, Min Li, Qiang Xu

Circuit representation learning is a promising research direction in the electronic design automation (EDA) field. With sufficient data for pre-training, the learned general yet effective representation can help to solve multiple downstream EDA tasks by fine-tuning it on a small set of task-related data. However, existing solutions only target combinational circuits, significantly limiting their applications. In this work, we propose DeepSeq, a novel representation learning framework for sequential netlists. Specifically, we introduce a dedicated graph neural network (GNN) with a customized propagation scheme to exploit the temporal correlations between gates in sequential circuits. To ensure effective learning, we propose to use a multi-task training objective with two sets of strongly related supervision: logic probability and transition probability at each node. A novel dual attention aggregation mechanism is introduced to facilitate learning both tasks efficiently. Experimental results on various benchmark circuits show that DeepSeq outperforms other GNN models for sequential circuit learning. We evaluate the generalization capability of DeepSeq on a downstream power estimation task. After fine-tuning, DeepSeq can accurately estimate power across various circuits under different workloads.
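A minimal sketch of the two-headed supervision described above, assuming per-node predictions and simulation-derived targets for both signals; the MSE form and the weight are assumptions, not the paper's exact loss.

```python
import torch.nn.functional as F

def deepseq_multitask_loss(pred_logic, pred_trans, tgt_logic, tgt_trans, w=0.5):
    """pred_*/tgt_*: (num_nodes,) tensors; logic-1 probability and
    transition probability per node, targets from logic simulation."""
    return w * F.mse_loss(pred_logic, tgt_logic) + \
           (1 - w) * F.mse_loss(pred_trans, tgt_trans)
```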


TA-MoE: Topology-Aware Large Scale Mixture-of-Expert Training

Feb 20, 2023
Chang Chen, Min Li, Zhihua Wu, Dianhai Yu, Chao Yang

Sparsely gated Mixture-of-Expert (MoE) has demonstrated its effectiveness in scaling up deep neural networks to an extreme scale. Although numerous efforts have been made to improve the performance of MoE from the model design or system optimization perspective, existing MoE dispatch patterns are still unable to fully exploit the underlying heterogeneous network environments. In this paper, we propose TA-MoE, a topology-aware routing strategy for large-scale MoE training designed from a model-system co-design perspective, which can dynamically adjust the MoE dispatch pattern according to the network topology. Based on communication modeling, we abstract the dispatch problem into an optimization objective and obtain the approximate dispatch pattern under different topologies. On top of that, we design a topology-aware auxiliary loss, which can adaptively route the data to fit the underlying topology without sacrificing model accuracy. Experiments show that TA-MoE substantially outperforms its counterparts on various hardware and model configurations, with roughly 1.01x-1.61x, 1.01x-4.77x, and 1.25x-1.54x improvements over the popular DeepSpeed-MoE, FastMoE, and FasterMoE, respectively.
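To illustrate what a topology-aware auxiliary loss could look like, the sketch below nudges the average token-to-expert dispatch towards a target distribution derived offline from the network topology (for example, favouring experts reachable over fast intra-node links). The KL form and the target construction are illustrative assumptions, not TA-MoE's actual loss.

```python
import torch
import torch.nn.functional as F

def topology_aux_loss(gate_probs, target_pattern):
    """gate_probs: (tokens, experts) softmax outputs of the router;
    target_pattern: (experts,) distribution (sums to 1) derived from
    the communication model of the underlying topology."""
    mean_dispatch = gate_probs.mean(dim=0)       # average routing per expert
    # KL divergence pulls the observed dispatch towards the target.
    return F.kl_div(mean_dispatch.log(), target_pattern, reduction="sum")
```

Added to the task loss with a small weight, such a term lets gradient descent reshape the dispatch pattern without hard-coding routing decisions.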
