Ming Zhao

Learn to Augment Network Simulators Towards Digital Network Twins

Nov 21, 2023
Yuru Zhang, Ming Zhao, Qiang Liu

Digital network twin (DNT) is a promising paradigm for replicating real-world cellular networks to enable continual assessment, proactive management, and what-if analysis. Existing discussions have focused on building DNTs with deep learning techniques alone, which raises widespread concerns about their generalization, explainability, and transparency. In this paper, we explore an alternative approach that augments network simulators with context-aware neural agents. The main challenge lies in the non-trivial simulation-to-reality (sim-to-real) discrepancy between offline simulators and real-world networks. To address this challenge, we propose a new learn-to-bridge algorithm that cost-efficiently bridges the sim-to-real discrepancy in two alternating stages. In the first stage, we select which states to query for performance in the real-world network, using a newly designed cost-aware Bayesian optimization. In the second stage, we train the neural agent to learn the state context and bridge the probabilistic discrepancy using Bayesian neural networks (BNNs). In addition, we build a small-scale end-to-end network testbed based on OpenAirInterface RAN and Core with a USRP B210 and a smartphone, and replicate the network in NS-3. The evaluation results show that our proposed solution substantially outperforms existing methods, reducing the sim-to-real discrepancy by more than 92%.
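As a rough illustration of the first stage, here is a minimal sketch of cost-aware Bayesian optimization for choosing which state to query on the real network. The Gaussian-process surrogate, the toy cost model, and all names below are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X_queried = rng.uniform(0, 1, size=(8, 2))       # states already measured on the testbed
y_gap = rng.uniform(0, 1, size=8)                # observed sim-to-real discrepancies
X_candidates = rng.uniform(0, 1, size=(100, 2))  # states we could query next

def cost_aware_ei(gp, X, best_y, cost):
    """Expected improvement per unit query cost (minimization form)."""
    mu, sigma = gp.predict(X, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    ei = (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return ei / cost(X)                          # trade off value against cost

gp = GaussianProcessRegressor().fit(X_queried, y_gap)
toy_cost = lambda X: 1.0 + X[:, 0]               # assumption: some states cost more to set up
next_state = X_candidates[np.argmax(cost_aware_ei(gp, X_candidates, y_gap.min(), toy_cost))]
```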

Poster: Self-Supervised Quantization-Aware Knowledge Distillation

Sep 22, 2023
Kaiqi Zhao, Ming Zhao

Quantization-aware training (QAT) starts with a pre-trained full-precision model and performs quantization during retraining. However, existing QAT works require supervision from labels, and they suffer accuracy loss due to reduced precision. To address these limitations, this paper proposes a novel Self-Supervised Quantization-Aware Knowledge Distillation framework (SQAKD). SQAKD first unifies the forward and backward dynamics of various quantization functions and then reframes QAT as a co-optimization problem that simultaneously minimizes the KL loss and the discretization error, in a self-supervised manner. The evaluation shows that SQAKD significantly improves the performance of various state-of-the-art QAT works. SQAKD establishes stronger baselines and does not require extensive labeled training data, potentially making state-of-the-art QAT research more accessible.
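To make the co-optimization concrete, here is a hedged sketch of an SQAKD-style objective in PyTorch: a temperature-scaled KL term between teacher and quantized-student logits (no labels) plus a discretization penalty. The toy uniform quantizer and the weighting `lam` are illustrative choices, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def uniform_quantize(w, bits=4):
    """Toy symmetric uniform quantizer (illustrative stand-in)."""
    scale = (w.abs().max() / (2 ** (bits - 1) - 1)).clamp_min(1e-8)
    q = torch.round(w / scale).clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

def sqakd_loss(student_logits, teacher_logits, weights, T=4.0, lam=0.1):
    """KL term (self-supervised, no labels) plus a discretization penalty."""
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * T * T
    disc = sum((w - uniform_quantize(w)).pow(2).mean() for w in weights)
    return kl + lam * disc
```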

Bias of AI-Generated Content: An Examination of News Produced by Large Language Models

Sep 19, 2023
Xiao Fang, Shangkun Che, Minjia Mao, Hongzhe Zhang, Ming Zhao, Xiaohang Zhao

Large language models (LLMs) have the potential to transform our lives and work through the content they generate, known as AI-Generated Content (AIGC). To harness this transformation, we need to understand the limitations of LLMs. Here, we investigate the bias of AIGC produced by seven representative LLMs, including ChatGPT and LLaMA. We collect news articles from The New York Times and Reuters, both known for their dedication to providing unbiased news. We then apply each examined LLM to generate news content using the headlines of these articles as prompts, and evaluate the gender and racial biases of the AIGC by comparing it with the original news articles. We further analyze the gender bias of each LLM under biased prompts by adding gender-biased messages to prompts constructed from these news headlines. Our study reveals that the AIGC produced by each examined LLM demonstrates substantial gender and racial biases. Moreover, the AIGC generated by each LLM exhibits notable discrimination against females and individuals of the Black race. Among the LLMs, the AIGC generated by ChatGPT demonstrates the lowest level of bias, and ChatGPT is the sole model capable of declining content generation when provided with biased prompts.
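The comparison step can be illustrated with a toy lexicon-count sketch; the word lists and the bias score below are simplified assumptions, not the paper's actual metrics.

```python
# Toy gendered-term lexicons (assumptions for illustration only).
FEMALE = {"she", "her", "hers", "woman", "women", "female"}
MALE = {"he", "him", "his", "man", "men", "male"}

def gender_ratio(text):
    """Share of female-gendered terms among all gendered terms found."""
    tokens = [t.strip(".,;:") for t in text.lower().split()]
    f = sum(t in FEMALE for t in tokens)
    m = sum(t in MALE for t in tokens)
    return f / max(f + m, 1)

def bias_shift(original, generated):
    """Positive values mean the generated article skews more male than its source."""
    return gender_ratio(original) - gender_ratio(generated)
```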

UniSA: Unified Generative Framework for Sentiment Analysis

Sep 04, 2023
Zaijing Li, Ting-En Lin, Yuchuan Wu, Meng Liu, Fengxiao Tang, Ming Zhao, Yongbin Li

Sentiment analysis is a crucial task that aims to understand people's emotional states and predict emotion categories from multimodal information. It comprises several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA). However, unifying all subtasks of sentiment analysis presents numerous challenges, including modality alignment, unified input/output forms, and dataset bias. To address these challenges, we propose a Task-Specific Prompt method to jointly model the subtasks and introduce a multimodal generative framework called UniSA. Additionally, we organize the benchmark datasets of the main subtasks into a new Sentiment Analysis Evaluation benchmark, SAEval. We design novel pre-training tasks and training methods to enable the model to learn generic sentiment knowledge across subtasks and improve its multimodal sentiment perception. Our experimental results show that UniSA performs comparably to the state of the art on all subtasks and generalizes well to various subtasks in sentiment analysis.
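A minimal sketch of the task-specific prompt idea, mapping each subtask to a unified text-to-text input; the prompt strings and field layout are assumptions for illustration, not UniSA's actual format.

```python
def build_input(task, utterance, context=None, aspect=None):
    """Map a sentiment-analysis subtask to one unified text-to-text input."""
    if task == "ERC":    # emotion recognition in conversation
        return f"[ERC] context: {' | '.join(context)} utterance: {utterance}"
    if task == "ABSA":   # aspect-based sentiment analysis
        return f"[ABSA] aspect: {aspect} sentence: {utterance}"
    if task == "MSA":    # multimodal sentiment analysis (text stream shown)
        return f"[MSA] utterance: {utterance}"
    raise ValueError(f"unknown task: {task}")

print(build_input("ABSA", "The battery life is great.", aspect="battery"))
```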

* Accepted to ACM MM 2023 
Confidence-based federated distillation for vision-based lane-centering

Jun 05, 2023
Yitao Chen, Dawei Chen, Haoxin Wang, Kyungtae Han, Ming Zhao

A fundamental challenge of autonomous driving is keeping the vehicle in the center of the lane by adjusting the steering angle. Recent advances leverage deep neural networks to predict steering decisions directly from images captured by the car's cameras. Machine learning-based steering angle prediction must also contend with vehicles' limited capacity to upload large amounts of potentially private data for model training. Federated learning can address these constraints by enabling multiple vehicles to collaboratively train a global model without sharing their private data, but it is difficult to achieve good accuracy because the data distribution is often non-i.i.d. across the vehicles. This paper presents a new confidence-based federated distillation method to improve the performance of federated learning for steering angle prediction. Specifically, it proposes the novel use of entropy to determine the predictive confidence of each local model, and then selects the most confident local model as the teacher to guide the learning of the global model. A comprehensive evaluation on vision-based lane centering shows that the proposed approach outperforms FedAvg and FedDF by 11.3% and 9%, respectively.
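The selection step can be sketched in a few lines: compute each local model's mean prediction entropy on a shared batch and pick the lowest-entropy (most confident) one as the teacher. The classification-style logits are a simplifying assumption, not necessarily the paper's steering-angle head.

```python
import torch
import torch.nn.functional as F

def mean_entropy(logits):
    """Average prediction entropy over a batch; lower means more confident."""
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-9).log()).sum(dim=1).mean()

def select_teacher(local_logits):
    """local_logits: one (batch, classes) tensor per participating vehicle."""
    entropies = torch.stack([mean_entropy(l) for l in local_logits])
    return int(entropies.argmin())  # index of the most confident local model
```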

* 5 pages, 5 figures 
Automatic Attention Pruning: Improving and Automating Model Pruning using Attentions

Mar 14, 2023
Kaiqi Zhao, Animesh Jain, Ming Zhao

Pruning is a promising approach to compressing deep learning models so that they can be deployed on resource-constrained edge devices. However, many existing pruning solutions are based on unstructured pruning, which yields models that cannot run efficiently on commodity hardware, and they often require users to manually explore and tune the pruning process, which is time-consuming and often leads to sub-optimal results. To address these limitations, this paper presents Automatic Attention Pruning (AAP), an adaptive, attention-based, structured pruning approach that automatically generates small, accurate, and hardware-efficient models meeting user objectives. First, it proposes iterative structured pruning using activation-based attention maps to effectively identify and prune unimportant filters. Then, it proposes adaptive pruning policies for automatically meeting the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks. A comprehensive evaluation shows that AAP substantially outperforms state-of-the-art structured pruning works across a variety of model architectures. Our code is at: https://github.com/kaiqi123/Automatic-Attention-Pruning.git.
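A hedged sketch of the filter-scoring idea behind attention-based structured pruning: rank conv filters by the magnitude of their activation-based attention maps and mark the weakest fraction for pruning. The exact attention definition and the adaptive policies are omitted; all names here are illustrative.

```python
import torch

def filter_scores(activations):
    """activations: (batch, channels, H, W) from a conv layer; one score per filter."""
    attn = activations.pow(2)          # activation-based attention map (an assumption)
    return attn.mean(dim=(0, 2, 3))

def filters_to_prune(activations, ratio=0.3):
    scores = filter_scores(activations)
    k = int(ratio * scores.numel())
    return torch.argsort(scores)[:k]   # indices of the least important filters
```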

* arXiv admin note: substantial text overlap with arXiv:2201.10520 
A Contrastive Knowledge Transfer Framework for Model Compression and Transfer Learning

Mar 14, 2023
Kaiqi Zhao, Yitao Chen, Ming Zhao

Knowledge Transfer (KT) achieves competitive performance and is widely used for image classification tasks in model compression and transfer learning. Existing KT works transfer the information from a large model ("teacher") to train a small model ("student") by minimizing the difference between their conditionally independent output distributions. However, these works overlook the high-dimensional structural knowledge in the teacher's intermediate representations, which limits their effectiveness, and they are motivated by varied heuristic intuitions, which makes them difficult to generalize. This paper proposes a novel Contrastive Knowledge Transfer Framework (CKTF), which enables the transfer of sufficient structural knowledge from the teacher to the student by optimizing multiple contrastive objectives across their intermediate representations. CKTF also generalizes existing KT techniques, which can be derived as specific cases of it, and significantly increases their performance. An extensive evaluation shows that CKTF consistently outperforms existing KT works by 0.04% to 11.59% in model compression and by 0.4% to 4.75% in transfer learning across various models and datasets.
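One contrastive objective in the CKTF spirit might look like the following InfoNCE-style sketch, treating matching teacher/student samples in a batch as positives; the projection heads and temperature are illustrative assumptions, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_kt_loss(student_feat, teacher_feat, proj_s, proj_t, tau=0.1):
    """student_feat/teacher_feat: (batch, dim) pooled intermediate features;
    proj_s/proj_t: small projection heads into a shared embedding space."""
    zs = F.normalize(proj_s(student_feat), dim=1)
    zt = F.normalize(proj_t(teacher_feat), dim=1)
    logits = zs @ zt.t() / tau              # (batch, batch) similarity matrix
    labels = torch.arange(zs.size(0))       # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```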

Differentiated Federated Reinforcement Learning for Dynamic and Heterogeneous Network

Dec 05, 2022
Fengxiao Tang, Yilin Yang, Xin Yao, Ming Zhao, Nei Kato

Modern dynamic and heterogeneous networks expose agents to differing environments with distinct state transition probabilities, which leads to the local strategy trap problem in traditional federated reinforcement learning (FRL) based network optimization algorithms. To solve this problem, we propose Differentiated Federated Reinforcement Learning (DFRL), which replaces the global policy model integration and local inference of traditional FRL with a collaborative learning process that combines parallel global-trend learning and differentiated local policy learning. In DFRL, each local policy model is adaptively updated with both the global trends model and the local environment, achieving better differentiated adaptation. We evaluate the proposal against state-of-the-art FRL on the classical CartPole game with heterogeneous environments. We further apply the proposal to the classical traffic offloading problem in the heterogeneous Space-Air-Ground Integrated Network (SAGIN). Simulation results show that the proposal achieves better global performance and fairness than the baselines in terms of throughput, delay, and packet drop rate.
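The differentiated update can be caricatured as blending, rather than overwriting, each local policy with the shared global-trend parameters; the blending coefficient `alpha` below is an illustrative assumption, not the paper's update rule.

```python
import numpy as np

def differentiated_update(local_params, global_trend, alpha=0.5):
    """Blend the global trends model into the local policy instead of replacing it."""
    return {k: alpha * global_trend[k] + (1 - alpha) * v
            for k, v in local_params.items()}

def aggregate_trend(all_local_params):
    """Server side: average local parameters into the global trends model."""
    return {k: np.mean([p[k] for p in all_local_params], axis=0)
            for k in all_local_params[0]}
```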

RGB-X Classification for Electronics Sorting

Sep 08, 2022
FNU Abhimanyu, Tejas Zodage, Umesh Thillaivasan, Xinyue Lai, Rahul Chakwate, Javier Santillan, Emma Oti, Ming Zhao, Ralph Boirum, Howie Choset, Matthew Travers

Effectively disassembling and recovering materials from waste electrical and electronic equipment (WEEE) is a critical step in moving global supply chains from carbon-intensive, mined materials to recycled and renewable ones. Conventional recycling processes rely on shredding and sorting waste streams, but for WEEE, which comprises numerous dissimilar materials, we explore targeted disassembly of individual objects for improved material recovery. Many WEEE objects share key features and can therefore look quite similar, yet their material composition and internal component layout vary, so an accurate classifier is critical for the subsequent disassembly steps that enable accurate material separation and recovery. This work introduces RGB-X, a multi-modal image classification approach that combines key features from external RGB images with those generated from X-ray images to accurately classify electronic objects. More specifically, it develops Iterative Class Activation Mapping (iCAM), a novel network architecture that explicitly focuses on the finer details in the multi-modal feature maps that are needed for accurate electronic object classification. Training such a classifier is hindered by the lack of large, well-annotated X-ray datasets of electronic objects, which are expensive to create and require expert guidance. To overcome this issue, we present a novel way of creating a synthetic dataset using domain randomization applied to the X-ray domain. The combined RGB-X approach achieves an accuracy of 98.6% across 10 generations of modern smartphones, exceeding the individual accuracies of 89.1% (RGB) and 97.9% (X-ray). We provide experimental results to corroborate our approach.
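A simplified sketch of the multi-modal fusion at the heart of RGB-X: two backbone branches whose features are concatenated into a shared classifier head. The iCAM refinement stage is omitted, and the architecture and its names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class RGBXClassifier(nn.Module):
    def __init__(self, backbone_rgb, backbone_xray, feat_dim, n_classes):
        super().__init__()
        self.rgb, self.xray = backbone_rgb, backbone_xray   # per-modality feature extractors
        self.head = nn.Linear(2 * feat_dim, n_classes)      # classifier over fused features

    def forward(self, rgb_img, xray_img):
        fused = torch.cat([self.rgb(rgb_img), self.xray(xray_img)], dim=1)
        return self.head(fused)  # logits over electronic-object classes
```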

OpenCalib: A Multi-sensor Calibration Toolbox for Autonomous Driving

May 30, 2022
Guohang Yan, Liu Zhuochun, Chengjie Wang, Chunlei Shi, Pengjin Wei, Xinyu Cai, Tao Ma, Zhizheng Liu, Zebin Zhong, Yuqian Liu, Ming Zhao, Zheng Ma, Yikang Li

Accurate sensor calibration is a prerequisite for the multi-sensor perception and localization systems of autonomous vehicles. Intrinsic calibration recovers the mapping relationship inside a sensor, while extrinsic calibration transforms two or more sensors into a unified spatial coordinate system. Most sensors need to be calibrated after installation to ensure the accuracy of their measurements. To this end, we present OpenCalib, a calibration toolbox that contains a rich set of sensor calibration methods. OpenCalib covers manual, automatic, factory, and online calibration tools for different application scenarios. To evaluate calibration accuracy and subsequently improve calibration algorithms, we also release a corresponding benchmark dataset. This paper introduces the various features and calibration methods of the toolbox. To our knowledge, this is the first open-sourced calibration codebase containing the full set of autonomous-driving-related calibration approaches. We hope the toolbox will be helpful to autonomous driving researchers. The code is available at https://github.com/PJLab-ADG/SensorsCalibration.
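The extrinsic calibration described above ultimately produces a rigid transform between sensor frames. A minimal sketch, independent of OpenCalib's own API, of applying a 4x4 extrinsic matrix to move lidar points into a camera frame:

```python
import numpy as np

def apply_extrinsic(points, T_cam_from_lidar):
    """points: (N, 3) in the lidar frame; T_cam_from_lidar: 4x4 homogeneous transform."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T_cam_from_lidar @ homog.T).T[:, :3]  # (N, 3) in the camera frame
```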

* 16 pages, 31 figures 