Shuai Feng

Quantized Consensus under Data-Rate Constraints and DoS Attacks: A Zooming-In and Holding Approach

Jul 18, 2022
Maopeng Ran, Shuai Feng, Juncheng Li, Lihua Xie

This paper is concerned with the quantized consensus problem for uncertain nonlinear multi-agent systems under data-rate constraints and Denial-of-Service (DoS) attacks. The agents are modeled in strict-feedback form with unknown nonlinear dynamics and external disturbances. Extended state observers (ESOs) are leveraged to estimate the agents' total uncertainties along with their states. To mitigate the effects of DoS attacks, a novel dynamic quantization scheme with zooming-in and holding capabilities is proposed. The idea is to zoom in on the variable to be quantized when DoS attacks are absent, and to hold it when attacks are present. The control protocol is given in terms of the outputs of the ESOs and the dynamic-quantization-based encoders and decoders. We show that, for a connected undirected network, the developed control protocol can handle any DoS attack inducing bounded consecutive packet losses with merely 3-level quantization. The application of the zooming-in and holding approach to known linear multi-agent systems is also discussed.
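As a rough illustration of the zooming-in-and-holding idea described in the abstract, the toy Python snippet below shrinks a 3-level quantizer's sensitivity at attack-free steps and freezes it while an attack blocks transmissions. The function names, the ternary thresholds, and the zoom-in factor gamma_in are illustrative assumptions, not the paper's actual encoder/decoder design or parameter choices.

```python
def quantize_3level(x, delta):
    """Ternary quantizer with sensitivity delta: outputs -1, 0, or +1."""
    if x > delta / 2:
        return 1
    if x < -delta / 2:
        return -1
    return 0

def encode_with_zoom_and_hold(signal, dos_active, delta0=1.0, gamma_in=0.8):
    """Toy encoder: zoom in (shrink the quantization range) at attack-free
    steps and hold the range unchanged while a DoS attack blocks the channel."""
    delta = delta0
    symbols = []
    for x_k, attacked in zip(signal, dos_active):
        if attacked:
            symbols.append(None)   # packet lost; range is held (delta unchanged)
        else:
            symbols.append(quantize_3level(x_k, delta))
            delta *= gamma_in      # zoom in: finer resolution at the next step
    return symbols

# Example: a decaying trajectory with a DoS attack covering steps 3-5.
trajectory = [0.9 * 0.7 ** k for k in range(10)]
attacks = [k in (3, 4, 5) for k in range(10)]
print(encode_with_zoom_and_hold(trajectory, attacks))
```

Holding the range during attacks keeps the quantizer from losing track of the (possibly growing) quantization error while no packets get through, which is why bounded consecutive packet losses can be tolerated with only three quantization levels.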

* 16 pages, 8 figures 

READ: Aggregating Reconstruction Error into Out-of-distribution Detection

Jun 15, 2022
Wenyu Jiang, Hao Cheng, Mingcai Chen, Shuai Feng, Yuxin Ge, Chongjun Wang

Detecting out-of-distribution (OOD) samples is crucial to the safe deployment of a classifier in the real world. However, deep neural networks are known to be overconfident on abnormal data. Existing works directly design score functions by mining the inconsistency of classifier outputs between in-distribution (ID) and OOD data. In this paper, we further complement this inconsistency with the reconstruction error, based on the assumption that an autoencoder trained on ID data cannot reconstruct OOD data as well as ID data. We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify the inconsistencies from the classifier and the autoencoder. Specifically, the reconstruction error of raw pixels is transformed into the latent space of the classifier. We show that the transformed reconstruction error bridges the semantic gap and inherits the detection performance of the original. Moreover, we propose an adjustment strategy to alleviate the overconfidence problem of the autoencoder according to a fine-grained characterization of OOD data. For the pre-training and retraining scenarios, we respectively present two variants of our method: READ-MD (Mahalanobis Distance), which relies only on a pre-trained classifier, and READ-ED (Euclidean Distance), which retrains the classifier. Our methods do not require access to test-time OOD data for fine-tuning hyperparameters. Finally, we demonstrate the effectiveness of the proposed methods through extensive comparisons with state-of-the-art OOD detection algorithms. On a CIFAR-10 pre-trained WideResNet, our method reduces the average FPR@95TPR by up to 9.8% compared with the previous state of the art.
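A minimal sketch of how a READ-style score could combine the two signals is given below: it pairs the maximum softmax probability from the classifier with a reconstruction error measured in the classifier's feature space rather than in raw pixels. The specific aggregation (MSP minus a weighted Euclidean feature distance), the weight alpha, and the toy networks are assumptions for illustration only; the paper's READ-MD and READ-ED variants define their own distances and adjustment strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def read_style_score(x, classifier, autoencoder, features, alpha=0.5):
    """Combine a classifier confidence score with an autoencoder reconstruction
    error measured in the classifier's latent (feature) space."""
    with torch.no_grad():
        msp = F.softmax(classifier(x), dim=1).max(dim=1).values  # confidence term
        x_hat = autoencoder(x)                                    # reconstruction
        feat_err = (features(x) - features(x_hat)).norm(dim=1)    # latent-space error
    return msp - alpha * feat_err  # lower score -> more likely OOD

# Toy stand-ins so the sketch runs end-to-end (not the paper's WideResNet setup).
features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64))
classifier = nn.Sequential(features, nn.Linear(64, 10))
autoencoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 32),
                            nn.Linear(32, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
print(read_style_score(torch.randn(4, 3, 32, 32), classifier, autoencoder, features))
```

Measuring the reconstruction error in the classifier's feature space is what lets the score focus on semantic differences rather than pixel-level noise, which is the gap-bridging effect the abstract refers to.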
