Jingjing Zheng

A Novel Tensor Factorization-Based Method with Robustness to Inaccurate Rank Estimation

May 19, 2023
Jingjing Zheng, Wenzhe Wang, Xiaoqin Zhang, Xianta Jiang

This study addresses two problems: the over-reliance of standard tensor factorization-based tensor recovery on its rank estimation strategy, and the high computational cost of standard t-SVD-based tensor recovery. To this end, we propose a new tensor norm with a dual low-rank constraint, which exploits the low-rank prior and the rank information simultaneously. Within the proposed tensor norm, a family of surrogate functions of the tensor tubal rank can be used to better harness the low-rankness of tensor data. We prove theoretically that the resulting tensor completion model avoids the performance degradation caused by inaccurate rank estimation. Meanwhile, thanks to the dual low-rank constraint, a sampling trick allows the t-SVD to be computed on a much smaller tensor instead of the original large one. As a result, the total cost of each iteration of the optimization algorithm is reduced from the $\mathcal{O}(n^4)$ of standard methods to $\mathcal{O}(n^3\log n + kn^3)$, where $k$ is the estimated tensor rank and is far smaller than $n$. Evaluated on synthetic and real-world data, our method demonstrates superior performance and efficiency over several existing state-of-the-art tensor completion methods.

* 14 pages, 8 figures 
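To make the complexity claim concrete, the sketch below (a generic NumPy illustration of the standard t-SVD, not the paper's algorithm) computes an FFT along the third mode followed by one SVD per frontal slice. For an $n \times n \times n$ tensor this means $n$ SVDs of $n \times n$ matrices, hence the $\mathcal{O}(n^4)$ baseline; running the same slice loop on a small $k \times n \times n$ factor tensor, as the dual low-rank constraint permits, is roughly where the reduced $\mathcal{O}(n^3\log n + kn^3)$ per-iteration cost comes from.

```python
import numpy as np

def tsvd(T):
    """Standard t-SVD via the FFT along mode 3 (illustrative sketch only).

    Cost: O(n1*n2*n3*log n3) for the FFTs plus n3 SVDs of n1 x n2
    slices -- O(n^4) for an n x n x n tensor.  Applying the same loop
    to a k x n x n factor tensor (k << n) makes each slice SVD far
    cheaper, which is the source of the paper's reduced per-iteration cost.
    """
    n1, n2, n3 = T.shape
    Tf = np.fft.fft(T, axis=2)
    U = np.zeros((n1, n1, n3), dtype=complex)
    S = np.zeros((n1, n2, n3), dtype=complex)
    V = np.zeros((n2, n2, n3), dtype=complex)
    for i in range(n3):  # one ordinary SVD per frontal slice
        u, s, vh = np.linalg.svd(Tf[:, :, i])
        U[:, :, i], V[:, :, i] = u, vh.conj().T
        np.fill_diagonal(S[:, :, i], s)
    # inverse FFT returns the factors to the original domain
    return (np.fft.ifft(U, axis=2).real,
            np.fft.ifft(S, axis=2).real,
            np.fft.ifft(V, axis=2).real)
```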

Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation

Jul 30, 2022
Shao-Yuan Lo, Wei Wang, Jim Thomas, Jingjing Zheng, Vishal M. Patel, Cheng-Hao Kuo

Monocular depth estimation (MDE) has attracted intense study due to its low cost and its critical role in robotic tasks such as localization, mapping, and obstacle detection. Supervised approaches have achieved great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations that are expensive to acquire. Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, relaxing the constraint of supervised learning. However, existing UDA approaches may not fully close the gap across datasets because of the domain shift problem. We believe better domain alignment can be achieved via well-designed feature decomposition. In this paper, we propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components. LFDA only attempts to align the content component, since it has a smaller domain gap. Meanwhile, it excludes the style component, which is specific to the source domain, from training of the primary task. Furthermore, LFDA uses separate feature distribution estimations to further bridge the domain gap. Extensive experiments on three domain-adaptive MDE scenarios show that the proposed method achieves superior accuracy and lower computational cost compared to state-of-the-art approaches.

* Accepted at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022 
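The abstract does not spell out LFDA's architecture, but a common way to realize such a content/style split is via instance-normalization statistics: the per-channel mean and standard deviation act as the style component, and the normalized map as the content component. The PyTorch sketch below illustrates that generic idea only; it is not LFDA's actual design.

```python
import torch

def decompose(feat, eps=1e-5):
    """Split an N x C x H x W feature map into content and style
    (AdaIN-style instance-norm statistics; an illustration only)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)          # per-channel mean
    sigma = feat.std(dim=(2, 3), keepdim=True) + eps  # per-channel std
    content = (feat - mu) / sigma       # style-free, normalized map
    style = torch.cat([mu, sigma], 1)   # domain-specific statistics
    return content, style

feat = torch.randn(4, 64, 32, 32)
content, style = decompose(feat)
# In an LFDA-like scheme, only `content` would be aligned between source
# and target; `style` would be held out of the primary-task training.
```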

DcnnGrasp: Towards Accurate Grasp Pattern Recognition with Adaptive Regularizer Learning

May 11, 2022
Xiaoqin Zhang, Ziwei Huang, Jingjing Zheng, Shuo Wang, Xianta Jiang

The task of grasp pattern recognition aims to derive the applicable grasp types of an object from visual information. Current state-of-the-art methods ignore object category information, which is crucial for grasp pattern recognition. This paper presents a novel dual-branch convolutional neural network (DcnnGrasp) for joint learning of object category classification and grasp pattern recognition. DcnnGrasp takes object category classification as an auxiliary task to improve the effectiveness of grasp pattern recognition. Meanwhile, a new loss function, joint cross-entropy with an adaptive regularizer, is derived via maximum a posteriori estimation and significantly improves model performance. Moreover, based on the new loss function, a training strategy is proposed to maximize the collaborative learning of the two tasks. Experiments were performed on five household-object datasets: the RGB-D Object dataset, the Hit-GPRec dataset, the Amsterdam Library of Object Images (ALOI), the Columbia University Image Library (COIL-100), and MeganePro dataset 1. The results demonstrate that the proposed method achieves performance competitive with several state-of-the-art methods on grasp pattern recognition; notably, it outperformed the second-best method by nearly 15% in global accuracy when testing novel objects on the RGB-D Object dataset.
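The paper derives its adaptive regularizer from a MAP formulation; the sketch below shows only the generic shape of such a two-branch joint objective, with a fixed coefficient `lam` standing in as a hypothetical placeholder for the adaptive weighting.

```python
import torch
import torch.nn.functional as F

def joint_cross_entropy(grasp_logits, cat_logits, grasp_y, cat_y, lam=0.5):
    """Joint objective for a dual-branch network: grasp-pattern CE plus
    category CE weighted by `lam`.  In DcnnGrasp the weight is an adaptive
    regularizer derived via MAP; `lam` here is a fixed stand-in."""
    grasp_loss = F.cross_entropy(grasp_logits, grasp_y)  # primary task
    cat_loss = F.cross_entropy(cat_logits, cat_y)        # auxiliary task
    return grasp_loss + lam * cat_loss
```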

Exploring Deep Reinforcement Learning-Assisted Federated Learning for Online Resource Allocation in EdgeIoT

Feb 15, 2022
Jingjing Zheng, Kai Li, Naram Mhaisen, Wei Ni, Eduardo Tovar, Mohsen Guizani

Federated learning (FL) has been increasingly considered for preserving the privacy of training data against eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT). On the one hand, the learning accuracy of FL can be improved by selecting IoT devices with large datasets for training, at the cost of higher energy consumption. On the other hand, energy consumption can be reduced by selecting IoT devices with small datasets, at the cost of lower learning accuracy. In this paper, we formulate a new resource allocation problem for EdgeIoT that balances the learning accuracy of FL against the energy consumption of the IoT devices. We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to find the optimal accuracy-energy balance in a continuous domain. Furthermore, long short-term memory (LSTM) is leveraged in FL-DLT3 to predict the time-varying network state, while FL-DLT3 is trained to select the IoT devices and allocate the transmit power. Numerical results demonstrate that the proposed FL-DLT3 converges quickly (in fewer than 100 iterations) while improving the FL accuracy-to-energy-consumption ratio by 51.8% over existing state-of-the-art benchmarks.
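FL-DLT3's network details are not given in this abstract; the PyTorch sketch below only illustrates the stated idea of an LSTM that summarizes the time-varying network state before an actor head scores device selection and transmit power. All layer sizes, names, and the sigmoid output ranges are assumptions.

```python
import torch
import torch.nn as nn

class ActorSketch(nn.Module):
    """Illustrative FL-DLT3-style actor (sizes/activations are assumptions):
    an LSTM tracks the time-varying network state, then two heads emit
    per-device selection scores and normalized transmit powers."""
    def __init__(self, state_dim, n_devices, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.select_head = nn.Linear(hidden, n_devices)
        self.power_head = nn.Linear(hidden, n_devices)

    def forward(self, state_seq):  # state_seq: B x T x state_dim
        h, _ = self.lstm(state_seq)
        h = h[:, -1]  # last step summarizes the observed history
        select = torch.sigmoid(self.select_head(h))  # device-selection scores
        power = torch.sigmoid(self.power_head(h))    # transmit power in (0, 1)
        return select, power

actor = ActorSketch(state_dim=16, n_devices=10)
select, power = actor(torch.randn(4, 8, 16))  # batch of 8-step state histories
```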

Cross-Domain Visual Recognition via Domain Adaptive Dictionary Learning

Apr 16, 2018
Hongyu Xu, Jingjing Zheng, Azadeh Alavi, Rama Chellappa

In real-world visual recognition problems, the assumption that the training data (source domain) and test data (target domain) are sampled from the same distribution is often violated. This is known as the domain adaptation problem. In this work, we propose a novel domain-adaptive dictionary learning framework for cross-domain visual recognition. Our method generates a set of intermediate domains that form a smooth path bridging the gap between the source and target domains. Specifically, we learn not only a common dictionary to encode the domain-shared features, but also a set of domain-specific dictionaries to model the domain shift. Separating the common and domain-specific dictionaries enables us to learn more compact and reconstructive dictionaries for domain adaptation. These dictionaries are learned by alternating between domain-adaptive sparse coding and dictionary updating steps. Meanwhile, our approach gradually recovers the feature representations of both source and target data along the domain path. By aligning all the recovered domain data, we derive the final domain-adaptive features for cross-domain visual recognition. Extensive experiments on three public datasets demonstrate that our approach outperforms most state-of-the-art methods.

* Submitted to IEEE TIP Journal 
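As a rough illustration of the common/domain-specific split (not the paper's alternating optimization over a path of intermediate domains), the sketch below stacks a shared dictionary with a domain-specific one and sparse-codes the data against the union; the function name, dictionary sizes, and choice of coder are all assumptions.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def encode_with_split(X, D_common, D_specific, alpha=0.1):
    """Sparse-code samples X against a shared dictionary stacked with a
    domain-specific one (illustrative sketch; the paper alternates sparse
    coding and dictionary updates along the domain path)."""
    D = np.vstack([D_common, D_specific])            # atoms as rows
    D = D / np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm atoms
    coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                        transform_alpha=alpha)
    codes = coder.transform(X)        # (n_samples, n_atoms) sparse codes
    return codes, codes @ D           # codes and their reconstruction

rng = np.random.default_rng(0)
D_common = rng.standard_normal((8, 32))    # shared atoms
D_specific = rng.standard_normal((4, 32))  # domain-specific atoms
codes, X_hat = encode_with_split(rng.standard_normal((10, 32)),
                                 D_common, D_specific)
```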