Nan Zhang

EdgeMA: Model Adaptation System for Real-Time Video Analytics on Edge Devices

Aug 17, 2023
Liang Wang, Nan Zhang, Xiaoyang Qu, Jianzong Wang, Jiguang Wan, Guokuan Li, Kaiyu Hu, Guilin Jiang, Jing Xiao

Real-time video analytics on edge devices for changing scenes remains a difficult task. Because edge devices are usually resource-constrained, edge deep neural networks (DNNs) have fewer weights and shallower architectures than general DNNs. As a result, they perform well only in limited scenarios and are sensitive to data drift. In this paper, we introduce EdgeMA, a practical and efficient video analytics system designed to adapt models to shifts in real-world video streams over time, addressing the data drift problem. EdgeMA extracts statistical texture features based on the gray-level co-occurrence matrix and uses a Random Forest classifier to detect domain shift. Moreover, it incorporates a model adaptation method based on importance weighting, specifically designed to update models to cope with label distribution shift. Through rigorous evaluation of EdgeMA on a real-world dataset, our results show that EdgeMA significantly improves inference accuracy.
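
The drift detector described above pairs GLCM texture statistics with a Random Forest; a minimal sketch of that combination using scikit-image and scikit-learn might look as follows. The feature set, frame sizes, and training data are placeholders for illustration, not EdgeMA's actual pipeline.

```python
# Hypothetical sketch of GLCM texture features + Random Forest drift detection.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(gray_frame: np.ndarray) -> np.ndarray:
    """Extract GLCM-based statistical texture features from a grayscale frame."""
    glcm = graycomatrix(gray_frame, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# Placeholder training data: frames labelled with the domain they came from
# (e.g. day/night/rain); in practice these would come from profiled video streams.
rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)
domains = rng.integers(0, 3, size=60)

detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(np.stack([glcm_features(f) for f in frames]), domains)

# At runtime: if the predicted domain differs from the current one,
# trigger model adaptation for the new scene.
current_domain = 0
new_frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
pred = detector.predict(glcm_features(new_frame).reshape(1, -1))[0]
if pred != current_domain:
    print(f"domain shift detected: {current_domain} -> {pred}")
```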

* Accepted by 30th International Conference on Neural Information Processing (ICONIP 2023) 

Shoggoth: Towards Efficient Edge-Cloud Collaborative Real-Time Video Inference via Adaptive Online Learning

Jun 27, 2023
Liang Wang, Kai Lu, Nan Zhang, Xiaoyang Qu, Jianzong Wang, Jiguang Wan, Guokuan Li, Jing Xiao

This paper proposes Shoggoth, an efficient edge-cloud collaborative architecture for boosting inference performance on real-time video of changing scenes. Shoggoth uses online knowledge distillation to improve the accuracy of models suffering from data drift and offloads the labeling process to the cloud, alleviating the constrained resources of edge devices. At the edge, we design adaptive training with small batches to adapt models under limited computing power, and adaptive sampling of training frames for robustness and reduced bandwidth. Evaluations on a realistic dataset show a 15%-20% model accuracy improvement over the edge-only strategy and lower network costs than the cloud-only strategy.
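
As a rough illustration of the edge-side adaptation loop described above, the sketch below fine-tunes a small "student" model on a small batch of frames using soft labels produced by a cloud-hosted teacher. The models, batch size, and temperature are placeholder assumptions, not Shoggoth's actual configuration.

```python
# Hypothetical sketch of edge-side online knowledge distillation:
# the cloud labels sampled frames with a large teacher model and the
# edge student is fine-tuned on small batches of those frames.
import torch
import torch.nn.functional as F

def adapt_on_small_batch(student, frames, teacher_logits, optimizer, T=2.0):
    """One online-distillation step on a small batch of sampled frames."""
    student.train()
    optimizer.zero_grad()
    student_logits = student(frames)
    # Soft-label KD loss: match the teacher's softened distribution.
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy student and fake "cloud" labels, just to show the call pattern.
student = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)
frames = torch.randn(4, 3, 32, 32)        # small batch (limited edge compute)
teacher_logits = torch.randn(4, 10)       # returned by the cloud teacher
print(adapt_on_small_batch(student, frames, teacher_logits, optimizer))
```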

* Accepted by 60th ACM/IEEE Design Automation Conference (DAC2023) 

Boosting COVID-19 Severity Detection with Infection-aware Contrastive Mixup Classification

Dec 01, 2022
Junlin Hou, Jilan Xu, Nan Zhang, Yuejie Zhang, Xiaobo Zhang, Rui Feng

This paper presents our solution for the 2nd COVID-19 Severity Detection Competition. The task is to distinguish the Mild, Moderate, Severe, and Critical grades in COVID-19 chest CT images. In our approach, we devise a novel infection-aware 3D Contrastive Mixup Classification network for severity grading. Specifically, we train two segmentation networks to first extract the lung region and then the inner lesion region. The lesion segmentation mask serves as complementary information for the original CT slices. To relieve the issue of imbalanced data distribution, we further improve the advanced Contrastive Mixup Classification network with a weighted cross-entropy loss. On the COVID-19 severity detection leaderboard, our approach won first place with a Macro F1 Score of 51.76%, significantly outperforming the baseline method by over 11.46%.
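
A minimal PyTorch sketch of the class-weighted cross-entropy mentioned above could look like this; the class counts are hypothetical, not the competition's actual label distribution.

```python
# Sketch of class-weighted cross-entropy to counter the imbalanced
# Mild/Moderate/Severe/Critical label distribution.
import torch
import torch.nn as nn

# Example class counts (placeholders only).
class_counts = torch.tensor([85., 62., 32., 12.])
# Inverse-frequency weights, normalised so they average to 1.
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)             # model outputs for 8 CT volumes
labels = torch.randint(0, 4, (8,))     # severity grades 0..3
loss = criterion(logits, labels)       # rare grades contribute more to the loss
print(loss.item())
```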

* ECCV AIMIA Workshop 2022 

Coordinating Cross-modal Distillation for Molecular Property Prediction

Nov 30, 2022
Hao Zhang, Nan Zhang, Ruixin Zhang, Lei Shen, Yingyi Zhang, Meng Liu

In recent years, molecular graph representation learning (GRL) has drawn increasing attention in molecular property prediction (MPP) problems. Existing graph methods have demonstrated that 3D geometric information is significant for better performance in MPP. However, accurate 3D structures are often costly and time-consuming to obtain, limiting the large-scale application of GRL. An intuitive solution is to train with 3D-to-2D knowledge distillation and predict with only 2D inputs, but some challenging problems remain open for 3D-to-2D distillation. One is that the 3D view is quite distinct from the 2D view; the other is that the gradient magnitudes of atoms in distillation are discrepant and unstable due to variable molecular size. To address these challenges, we propose a distillation framework that contains global molecular distillation and local atom distillation. We also provide a theoretical insight to justify how to coordinate atom and molecular information, which tackles the drawback of variable molecular size for atom information distillation. Experimental results on two popular molecular datasets demonstrate that our proposed model achieves superior performance over other methods. Specifically, on PCQM4Mv2, the largest MPP dataset, which serves as an "ImageNet Large Scale Visual Recognition Challenge" for the field of graph ML, the proposed method achieves a 6.9% improvement over the best prior work. We also obtained fourth place with an MAE of 0.0734 on the test-challenge set of the OGB-LSC 2022 Graph Regression Task. We will release the code soon.
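
The split into global molecular and local atom distillation could be sketched roughly as below. The per-molecule averaging used here to tame size-dependent gradients is only one plausible reading of the coordination idea, not the paper's exact formulation; all tensors and the weighting coefficient are illustrative placeholders.

```python
# Speculative sketch: combine molecule-level (global) and atom-level (local)
# distillation from a 3D teacher to a 2D student. Averaging the atom loss
# within each molecule is one simple way to keep gradient magnitudes
# comparable across molecules of different sizes.
import torch
import torch.nn.functional as F

def distill_loss(stu_atom, tea_atom, stu_mol, tea_mol, atom_to_mol, alpha=0.5):
    """stu_atom/tea_atom: [num_atoms, d]; stu_mol/tea_mol: [num_mols, d];
    atom_to_mol: [num_atoms] index of the molecule each atom belongs to."""
    # Global: match molecule-level representations.
    global_loss = F.mse_loss(stu_mol, tea_mol)
    # Local: per-atom error, averaged within each molecule first so that
    # large molecules do not dominate the gradient.
    per_atom = F.mse_loss(stu_atom, tea_atom, reduction="none").mean(dim=1)
    num_mols = stu_mol.size(0)
    per_mol = torch.zeros(num_mols).scatter_add_(0, atom_to_mol, per_atom)
    counts = torch.zeros(num_mols).scatter_add_(0, atom_to_mol,
                                                torch.ones_like(per_atom))
    local_loss = (per_mol / counts.clamp(min=1)).mean()
    return alpha * global_loss + (1 - alpha) * local_loss

# Toy batch: 2 molecules with 3 and 5 atoms respectively.
atom_to_mol = torch.tensor([0, 0, 0, 1, 1, 1, 1, 1])
loss = distill_loss(torch.randn(8, 16), torch.randn(8, 16),
                    torch.randn(2, 16), torch.randn(2, 16), atom_to_mol)
print(loss.item())
```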

CMC v2: Towards More Accurate COVID-19 Detection with Discriminative Video Priors

Nov 26, 2022
Junlin Hou, Jilan Xu, Nan Zhang, Yi Wang, Yuejie Zhang, Xiaobo Zhang, Rui Feng

This paper presents our solution for the 2nd COVID-19 Competition, held within the framework of the AIMIA Workshop at the European Conference on Computer Vision (ECCV 2022). In our approach, we employ last year's winning solution, a strong 3D Contrastive Mixup Classification network (CMC v1) composed of contrastive representation learning and mixup classification, as the baseline method. In this paper, we propose CMC v2 by introducing natural video priors to COVID-19 diagnosis. Specifically, we adapt a video transformer backbone pre-trained on a video dataset to COVID-19 detection. Moreover, advanced training strategies, including hybrid mixup and cutmix, slice-level augmentation, and small-resolution training, are also utilized to boost the robustness and generalization ability of the model. Among 14 participating teams, CMC v2 ranked 1st in the 2nd COVID-19 Competition with an average Macro F1 Score of 89.11%.
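
As an illustration of the hybrid mixup and cutmix strategy, a simplified PyTorch sketch might randomly pick one of the two augmentations per batch, as below. The input shapes, probabilities, and Beta parameter are placeholders, not the paper's training settings.

```python
# Sketch of a hybrid mixup/cutmix augmentation: apply one of the two per batch
# and mix the labels with the same coefficient lam.
import numpy as np
import torch

def hybrid_mixup_cutmix(x, y, alpha=1.0, cutmix_prob=0.5):
    """Return mixed inputs, both label sets, and the mixing coefficient."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(x.size(0))
    if np.random.rand() < cutmix_prob:
        # CutMix: paste a random rectangle from the permuted batch.
        H, W = x.shape[-2:]
        rh, rw = int(H * np.sqrt(1 - lam)), int(W * np.sqrt(1 - lam))
        cy, cx = np.random.randint(H), np.random.randint(W)
        top, bot = max(cy - rh // 2, 0), min(cy + rh // 2, H)
        left, right = max(cx - rw // 2, 0), min(cx + rw // 2, W)
        x[..., top:bot, left:right] = x[perm][..., top:bot, left:right]
        lam = 1 - (bot - top) * (right - left) / (H * W)   # area actually kept
    else:
        # Mixup: convex combination of the batch with its permutation.
        x = lam * x + (1 - lam) * x[perm]
    return x, y, y[perm], lam   # loss: lam * CE(y) + (1 - lam) * CE(y[perm])

x, y = torch.randn(4, 3, 64, 64), torch.randint(0, 4, (4,))
mixed_x, y_a, y_b, lam = hybrid_mixup_cutmix(x, y)
print(mixed_x.shape, lam)
```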

* ECCV AIMIA Workshop 2022 

Explainable Human-in-the-loop Dynamic Data-Driven Digital Twins

Jul 19, 2022
Nan Zhang, Rami Bahsoon, Nikos Tziritas, Georgios Theodoropoulos

Digital Twins (DT) are essentially dynamic data-driven models that serve as real-time symbiotic "virtual replicas" of real-world systems. A DT can leverage the bidirectional symbiotic sensing feedback loops fundamental to Dynamic Data-Driven Applications Systems (DDDAS) for its continuous updates. Sensing loops can consequently steer measurement, analysis and reconfiguration aimed at more accurate modelling and analysis in the DT. The reconfiguration decisions can be autonomous or interactive, keeping the human in the loop. The trustworthiness of these decisions can be hindered by inadequate explainability of their rationale and of the utility gained by implementing the chosen decision, among alternatives, for the given situation. Additionally, different decision-making algorithms and models have varying complexity and quality, and can result in different utility gained for the model. Inadequate explainability can limit the extent to which humans can evaluate the decisions, often leading to updates that are unfit for the given situation or erroneous, compromising the overall accuracy of the model. The novel contribution of this paper is an approach to harnessing explainability in human-in-the-loop DDDAS and DT systems, leveraging bidirectional symbiotic sensing feedback. The approach utilises interpretable machine learning and goal modelling for explainability, and considers trade-off analysis of the utility gained. We use examples from smart warehousing to demonstrate the approach.

* 10 pages, 1 figure, submitted to the 4th International Conference on InfoSymbiotics/Dynamic Data Driven Applications Systems (DDDAS2022) 

DT-SV: A Transformer-based Time-domain Approach for Speaker Verification

May 26, 2022
Nan Zhang, Jianzong Wang, Zhenhou Hong, Chendong Zhao, Xiaoyang Qu, Jing Xiao

Speaker verification (SV) aims to determine whether the identity of the speaker in a test utterance matches that of the reference speech. In the past few years, extracting speaker embeddings with deep neural networks for SV systems has gone mainstream. Recently, different attention mechanisms and Transformer networks have been explored widely in the SV field. However, using the original Transformer in SV directly may waste frame-level information in the output features, which could restrict the capacity and discrimination of speaker embeddings. Therefore, we propose an approach to derive utterance-level speaker embeddings via a Transformer architecture that uses a novel loss function, named diffluence loss, to integrate the feature information of different Transformer layers. The diffluence loss aims to aggregate frame-level features into an utterance-level representation, and it can be integrated into the Transformer conveniently. Besides, we also introduce a learnable mel-fbank energy feature extractor, named the time-domain feature extractor, that computes mel-fbank features more precisely and efficiently than the standard mel-fbank extractor. Combining the diffluence loss and the time-domain feature extractor, we propose a novel Transformer-based time-domain SV model (DT-SV) with faster training speed and higher accuracy. Experiments indicate that our proposed model achieves better performance than other models.
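
The diffluence loss is not defined in the abstract, so the sketch below only illustrates the general shape of such an objective: frame-level Transformer features of an utterance are pulled toward a single pooled utterance embedding. This is a speculative reading for illustration, not the paper's definition, and the pooling and layer weighting are placeholders.

```python
# Speculative sketch of an aggregation objective that ties frame-level
# Transformer features to a pooled utterance-level speaker embedding.
import torch
import torch.nn.functional as F

def aggregate_with_frame_loss(layer_feats):
    """layer_feats: list of [batch, frames, dim] tensors, one per layer."""
    # Utterance embedding: mean over frames of the last layer.
    utt = layer_feats[-1].mean(dim=1)
    # Auxiliary loss: every layer's frame features should stay close to
    # the utterance embedding (one simple aggregation objective).
    loss = sum(F.mse_loss(f, utt.unsqueeze(1).expand_as(f)) for f in layer_feats)
    return utt, loss / len(layer_feats)

feats = [torch.randn(2, 50, 192) for _ in range(4)]   # 4 Transformer layers
embedding, aux_loss = aggregate_with_frame_loss(feats)
print(embedding.shape, aux_loss.item())
```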

* Accepted by IJCNN2022 (The 2022 International Joint Conference on Neural Networks) 

Knowledge Equivalence in Digital Twins of Intelligent Systems

Apr 15, 2022
Nan Zhang, Rami Bahsoon, Nikos Tziritas, Georgios Theodoropoulos

A digital twin contains up-to-date data-driven models of the physical world being studied and can use simulation to optimise the physical world. However, the analysis made by the digital twin is valid and reliable only when the model is equivalent to the physical world. Maintaining such an equivalent model is challenging, especially when the physical systems being modelled are intelligent and autonomous. This paper focuses in particular on digital twin models of intelligent systems where the systems are knowledge-aware but with limited capability. The digital twin improves the acting of the physical system at a meta-level by accumulating more knowledge in the simulated environment. Modelling such an intelligent physical system requires replicating the knowledge-awareness capability in the virtual space. Novel equivalence-maintaining techniques are needed, especially for synchronising the knowledge between the model and the physical system. This paper proposes the notion of knowledge equivalence and an equivalence maintenance approach based on knowledge comparison and updates. A quantitative analysis of the proposed approach confirms that, compared to state equivalence, knowledge equivalence maintenance can tolerate deviations, thus reducing unnecessary updates and achieving more Pareto-efficient solutions for the trade-off between update overhead and simulation reliability.
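
One way to picture knowledge-equivalence maintenance is the toy loop below: the twin's knowledge is compared with the physical system's and synchronised only when the mismatch exceeds a tolerance, trading a little fidelity for fewer updates. The knowledge encoding (a key-to-value dictionary) and the threshold are illustrative assumptions, not the paper's formalism.

```python
# Toy sketch of knowledge comparison and tolerance-based updates.

def knowledge_deviation(twin_knowledge: dict, system_knowledge: dict) -> float:
    """Fraction of knowledge items that differ between twin and system."""
    keys = set(twin_knowledge) | set(system_knowledge)
    mismatched = sum(twin_knowledge.get(k) != system_knowledge.get(k) for k in keys)
    return mismatched / max(len(keys), 1)

def maybe_update(twin_knowledge: dict, system_knowledge: dict, tolerance=0.2) -> bool:
    """Update the twin only when deviation exceeds the tolerance."""
    if knowledge_deviation(twin_knowledge, system_knowledge) > tolerance:
        twin_knowledge.update(system_knowledge)
        return True
    return False

twin = {"shelf_A": "occupied", "route_3": "blocked", "dock_1": "free"}
system = {"shelf_A": "occupied", "route_3": "clear", "dock_1": "free"}
print(maybe_update(twin, system))   # deviation 1/3 > 0.2, so the twin is updated
```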

* 27 pages, 16 figures. Under review 

Unsupervised Machine Learning for the Discovery of Latent Disease Clusters and Patient Subgroups Using Electronic Health Records

May 17, 2019
Yanshan Wang, Yiqing Zhao, Terry M. Therneau, Elizabeth J. Atkinson, Ahmad P. Tafti, Nan Zhang, Shreyasee Amin, Andrew H. Limper, Hongfang Liu

Machine learning has become ubiquitous and a key technology for mining electronic health records (EHRs) to facilitate clinical research and practice. Unsupervised machine learning, as opposed to supervised learning, has shown promise in identifying novel patterns and relations from EHRs without using human-created labels. In this paper, we investigate the application of unsupervised machine learning models to discovering latent disease clusters and patient subgroups based on EHRs. We utilized Latent Dirichlet Allocation (LDA), a generative probabilistic model, and proposed a novel model named Poisson Dirichlet Model (PDM), which extends the LDA approach using a Poisson distribution to model patients' disease diagnoses and to alleviate age and sex factors by considering both observed and expected observations. In the empirical experiments, we evaluated LDA and PDM on three patient cohorts with EHR data retrieved from the Rochester Epidemiology Project (REP) for the discovery of latent disease clusters and patient subgroups. We compared the effectiveness of LDA and PDM in identifying latent disease clusters through visualization of the disease representations learned by the two approaches. We also tested the performance of LDA and PDM in differentiating patient subgroups through survival analysis as well as statistical analysis. The experimental results show that the proposed PDM can effectively identify distinct disease clusters by alleviating the impact of age and sex, and that LDA can stratify patients into more differentiable subgroups than PDM in terms of p-values. However, the subgroups discovered by PDM might reflect underlying disease patterns of greater interest in epidemiology research, owing to the alleviation of age and sex. Both unsupervised machine learning approaches can be leveraged to discover patient subgroups using EHRs, but with different foci.
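
The LDA baseline can be sketched directly with scikit-learn by treating each patient as a "document" whose "words" are diagnosis codes; topics then correspond to latent disease clusters. The toy count matrix below is a placeholder, and the paper's PDM has no off-the-shelf implementation, so it is omitted here.

```python
# Sketch of the LDA baseline on a patient-by-diagnosis-code count matrix.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
# 100 patients x 50 diagnosis codes: counts of each code per patient record.
patient_code_counts = rng.poisson(0.3, size=(100, 50))

lda = LatentDirichletAllocation(n_components=5, random_state=0)
patient_topic_mix = lda.fit_transform(patient_code_counts)  # basis for patient subgroups
disease_clusters = lda.components_                           # code weights per latent cluster

# Top diagnosis codes for each latent disease cluster.
for k, cluster in enumerate(disease_clusters):
    top_codes = np.argsort(cluster)[::-1][:5]
    print(f"cluster {k}: codes {top_codes.tolist()}")
```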

Real Time 3D Indoor Human Image Capturing Based on FMCW Radar

Dec 08, 2018
Hanqing Guo, Nan Zhang, Wenjun Shi, Saeed AlQarni, Shaoen Wu

Most smart systems, such as smart home and smart health, respond to humans' locations and activities. However, traditional solutions either require wearable sensors or risk leaking privacy. This work proposes an ambient radar solution: a real-time, privacy-preserving system that is robust to dark surroundings. In this solution, we use a low-power Frequency-Modulated Continuous Wave (FMCW) radar array to capture the reflected signals and then construct 3D image frames. The solution comprises 1) a data preprocessing mechanism to remove static background reflections, 2) a signal processing mechanism to transform the received complex radar signals into a matrix containing spatial information, and 3) a deep learning scheme to filter out broken frames caused by the rough surface of the human body. The solution has been extensively evaluated in a research area and captures real-time human images that are recognizable for specific activities. Our results show that the captured indoor frames are clearly recognizable frame by frame, compared to camera-recorded video.
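
Two of the listed steps, static-background removal and conversion of raw samples into spatial (range) information, could be sketched as follows. The array shapes and the mean-subtraction background model are assumptions for illustration, not the paper's exact processing chain.

```python
# Hypothetical sketch of FMCW preprocessing: suppress static reflections,
# then take a range FFT to obtain a distance profile per chirp.
import numpy as np

def remove_static_background(frames: np.ndarray) -> np.ndarray:
    """frames: [num_frames, num_chirps, num_samples] complex radar data.
    Static reflectors contribute the same return in every frame, so
    subtracting the per-bin mean across frames suppresses them."""
    return frames - frames.mean(axis=0, keepdims=True)

def range_profile(frame: np.ndarray) -> np.ndarray:
    """FFT along fast time gives energy per range bin for each chirp."""
    return np.abs(np.fft.fft(frame, axis=-1))

rng = np.random.default_rng(0)
frames = rng.standard_normal((10, 64, 256)) + 1j * rng.standard_normal((10, 64, 256))
moving_only = remove_static_background(frames)
profile = range_profile(moving_only[0])   # [num_chirps, num_range_bins]
print(profile.shape)
```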

* conference 