Tianyue Zheng

HoloFed: Environment-Adaptive Positioning via Multi-band Reconfigurable Holographic Surfaces and Federated Learning

Oct 10, 2023
Jingzhi Hu, Zhe Chen, Tianyue Zheng, Robert Schober, Jun Luo

Positioning is an essential service for various applications and is expected to be integrated with existing communication infrastructures in 5G and 6G. Although current Wi-Fi and cellular base stations (BSs) can support this integration, the resulting precision is unsatisfactory due to the lack of precise control over the wireless signals. Recently, BSs adopting reconfigurable holographic surfaces (RHSs) have been advocated for positioning, as the large number of antenna elements in an RHS enables the generation of arbitrary and highly focused beam patterns. However, existing designs face two major challenges: i) RHSs have only limited operating bandwidth, and ii) the positioning methods cannot adapt to the diverse environments encountered in practice. To overcome these challenges, we present HoloFed, a system providing high-precision, environment-adaptive user positioning by exploiting multi-band (MB) RHSs and federated learning (FL). To improve positioning performance, a lower bound on the error variance is derived and used to guide the digital and analog beamforming design of the MB-RHS. For better adaptability while preserving privacy, an FL framework is proposed in which users collaboratively train a position estimator, and transfer learning is exploited to handle the lack of position labels at the users. Moreover, a scheduling algorithm is designed for the BS to select which users train the position estimator, jointly considering the convergence and efficiency of FL. Our simulation results confirm that HoloFed achieves a 57% lower positioning error variance than a beam-scanning baseline and effectively adapts to diverse environments.
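
To make the workflow above concrete, here is a minimal sketch of the federated training loop with user scheduling, assuming a toy linear position estimator and a random utility score; the function names, dimensions, and scores are illustrative stand-ins, not HoloFed's actual estimator, scheduler, or beamforming design.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, local_data, lr=0.01, steps=5):
    """Placeholder local training: a few gradient steps on a least-squares loss,
    regressing one position coordinate from RF features (labels would come from
    the transfer-learning step described in the abstract)."""
    X, y = local_data
    w = weights.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

def schedule(scores, k):
    """Select the k users with the highest utility; HoloFed's scheduler jointly
    weighs FL convergence and efficiency, abstracted here as a single score."""
    return np.argsort(scores)[-k:]

# Toy setup: 10 users, 8-dimensional RF features per sample.
n_users, dim = 10, 8
global_w = np.zeros(dim)
users = [(rng.normal(size=(50, dim)), rng.normal(size=50)) for _ in range(n_users)]

for rnd in range(20):
    utility = rng.random(n_users)                     # stand-in utility score
    chosen = schedule(utility, k=4)                   # BS schedules 4 users this round
    local_ws = [local_update(global_w, users[i]) for i in chosen]
    global_w = np.mean(local_ws, axis=0)              # FedAvg aggregation at the BS
```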

OCHID-Fi: Occlusion-Robust Hand Pose Estimation in 3D via RF-Vision

Aug 20, 2023
Shujie Zhang, Tianyue Zheng, Zhe Chen, Jingzhi Hu, Abdelwahed Khamis, Jiajun Liu, Jun Luo

Hand Pose Estimation (HPE) is crucial to many applications, but conventional camera-based methods (CM-HPE) are completely subject to Line-of-Sight (LoS) conditions, as cameras cannot capture occluded objects. In this paper, we propose to exploit Radio-Frequency vision (RF-vision), which can bypass obstacles, to achieve occluded HPE, and we introduce OCHID-Fi as the first RF-HPE method with 3D pose estimation capability. OCHID-Fi employs wideband RF sensors widely available on smart devices (e.g., iPhones) to probe 3D human hand poses and extract their skeletons behind obstacles. To overcome the challenge of labeling RF images, which are not human-interpretable, OCHID-Fi employs a cross-modality and cross-domain training process: it uses a pre-trained CM-HPE network and a synchronized CM/RF dataset to guide the training of its complex-valued RF-HPE network under LoS conditions, and then transfers the knowledge learned from the labeled LoS domain to the unlabeled occluded domain via adversarial learning, enabling OCHID-Fi to generalize to unseen occluded scenarios. Experimental results demonstrate the superiority of OCHID-Fi: it achieves accuracy comparable to CM-HPE under normal conditions and maintains this accuracy in occluded scenarios, with empirical evidence for its generalizability to new domains.

* Accepted to ICCV 2023 
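
As a rough illustration of the cross-modality and cross-domain training described above, the sketch below uses a frozen camera-based teacher's keypoints to supervise a (here real-valued) RF student on synchronized LoS pairs, while a gradient-reversal domain classifier aligns LoS and occluded RF features. The shapes, layer sizes, and the 0.1 weighting are assumptions for illustration, not the released OCHID-Fi architecture.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g  # reverse gradients so the encoder learns to fool the domain classifier

rf_encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 64 * 64, 256), nn.ReLU())
pose_head  = nn.Linear(256, 21 * 3)          # 21 hand keypoints in 3D
domain_clf = nn.Linear(256, 2)               # LoS vs. occluded
opt = torch.optim.Adam([*rf_encoder.parameters(), *pose_head.parameters(),
                        *domain_clf.parameters()], lr=1e-4)

def train_step(rf_los, teacher_kpts, rf_occ):
    """rf_los/rf_occ: toy RF heatmaps (B, 2, 64, 64); teacher_kpts: (B, 63)
    keypoints produced by the frozen camera-based teacher on synchronized frames."""
    f_los, f_occ = rf_encoder(rf_los), rf_encoder(rf_occ)
    pose_loss = nn.functional.mse_loss(pose_head(f_los), teacher_kpts)
    dom_feats = GradReverse.apply(torch.cat([f_los, f_occ]))
    dom_label = torch.cat([torch.zeros(len(f_los)), torch.ones(len(f_occ))]).long()
    dom_loss = nn.functional.cross_entropy(domain_clf(dom_feats), dom_label)
    opt.zero_grad(); (pose_loss + 0.1 * dom_loss).backward(); opt.step()

train_step(torch.randn(4, 2, 64, 64), torch.randn(4, 63), torch.randn(4, 2, 64, 64))
```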

AutoFed: Heterogeneity-Aware Federated Multimodal Learning for Robust Autonomous Driving

Feb 24, 2023
Tianyue Zheng, Ang Li, Zhe Chen, Hongbo Wang, Jun Luo

Object detection with on-board sensors (e.g., lidar, radar, and camera) plays a crucial role in autonomous driving (AD), and these sensors complement each other in modality. While crowdsensing could potentially exploit these sensors (available in huge quantity) to derive more comprehensive knowledge, federated learning (FL) appears to be the necessary tool to reach this potential: it enables autonomous vehicles (AVs) to train machine learning models without explicitly sharing raw sensory data. However, multimodal sensors introduce various forms of data heterogeneity across distributed AVs (e.g., label quantity skew and varied modalities), posing critical challenges to effective FL. To this end, we present AutoFed, a heterogeneity-aware FL framework that fully exploits multimodal sensory data on AVs to enable robust AD. Specifically, we first propose a novel model leveraging pseudo-labeling to avoid mistakenly treating unlabeled objects as background. We also propose an autoencoder-based data imputation method to fill in missing modalities (of certain AVs) using the available ones. To further reconcile the heterogeneity, we present a client selection mechanism that exploits the similarities among client models to improve both training stability and convergence rate. Our experiments on a benchmark dataset confirm that AutoFed substantially improves over status-quo approaches in both precision and recall, while demonstrating strong robustness to adverse weather conditions.
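
The client-selection idea can be sketched as follows: keep the clients whose model updates best agree with the average update before aggregating. The cosine-similarity criterion and the keep ratio below are illustrative guesses at the "similarity among client models" mechanism, not AutoFed's exact rule.

```python
import torch

def select_clients(updates, keep_ratio=0.6):
    """updates: list of flattened model-update tensors, one per client."""
    stacked = torch.stack(updates)                      # (n_clients, n_params)
    mean_update = stacked.mean(dim=0, keepdim=True)
    sims = torch.nn.functional.cosine_similarity(stacked, mean_update, dim=1)
    k = max(1, int(keep_ratio * len(updates)))
    return sims.topk(k).indices                         # ids of clients to aggregate

# Example: 5 clients, 1000-parameter models
updates = [torch.randn(1000) for _ in range(5)]
chosen = select_clients(updates)
aggregated = torch.stack([updates[i] for i in chosen]).mean(dim=0)
```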

Adv-4-Adv: Thwarting Changing Adversarial Perturbations via Adversarial Domain Adaptation

Dec 04, 2021
Tianyue Zheng, Zhe Chen, Shuya Ding, Chao Cai, Jun Luo

Whereas adversarial training can be useful against specific adversarial perturbations, it has proven ineffective in generalizing to attacks that deviate from those used for training. However, we observe that this ineffectiveness is intrinsically connected to domain adaptability, another crucial issue in deep learning for which adversarial domain adaptation appears to be a promising solution. Consequently, we propose Adv-4-Adv as a novel adversarial training method that aims to retain robustness against unseen adversarial perturbations. Essentially, Adv-4-Adv treats attacks incurring different perturbations as distinct domains, and by leveraging the power of adversarial domain adaptation, it aims to remove the domain/attack-specific features. This forces a trained model to learn a robust domain-invariant representation, which in turn enhances its generalization ability. Extensive evaluations on Fashion-MNIST, SVHN, CIFAR-10, and CIFAR-100 demonstrate that a model trained by Adv-4-Adv on samples crafted by simple attacks (e.g., FGSM) generalizes to more advanced attacks (e.g., PGD), and its performance exceeds that of state-of-the-art proposals on these datasets.

* 9 pages 
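
A hedged sketch of the objective implied above: craft FGSM examples on the fly, treat clean and attacked inputs as two "domains", train the label classifier on both, and adversarially train a domain classifier so that the feature extractor removes attack-specific features. The tiny MLP, the loss weight, and the two-optimizer setup are illustrative; the paper's network and domain-adaptation machinery differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """Single-step FGSM used as the 'simple' source attack during training."""
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

# Toy CIFAR-10-sized model, split into feature extractor and label classifier;
# a small domain classifier distinguishes clean from attacked features.
feat = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
clf, dom = nn.Linear(128, 10), nn.Linear(128, 2)
model = nn.Sequential(feat, clf)
opt_main = torch.optim.SGD(list(feat.parameters()) + list(clf.parameters()), lr=0.01)
opt_dom = torch.optim.SGD(dom.parameters(), lr=0.01)

def train_step(x, y, lam=0.1):
    x_adv = fgsm(model, x, y)
    f = torch.cat([feat(x), feat(x_adv)])
    cls_loss = F.cross_entropy(clf(f), torch.cat([y, y]))
    d_lbl = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    confusion = F.cross_entropy(dom(f), d_lbl)        # encoder tries to *increase* this
    opt_main.zero_grad(); (cls_loss - lam * confusion).backward(); opt_main.step()
    # the domain classifier itself is trained to tell the two "domains" apart
    opt_dom.zero_grad(); F.cross_entropy(dom(f.detach()), d_lbl).backward(); opt_dom.step()

train_step(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```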

MoRe-Fi: Motion-robust and Fine-grained Respiration Monitoring via Deep-Learning UWB Radar

Nov 16, 2021
Tianyue Zheng, Zhe Chen, Shujie Zhang, Chao Cai, Jun Luo

Crucial for healthcare and biomedical applications, respiration monitoring often employs wearable sensors in practice, causing inconvenience due to their direct contact with the human body. Therefore, researchers have constantly searched for contact-free alternatives. Nonetheless, existing contact-free designs mostly require human subjects to remain static, largely confining their adoption in everyday environments where body movements are inevitable. Fortunately, radio-frequency (RF) enabled contact-free sensing, though suffering from motion interference that conventional filtering cannot separate, offers the potential to distill the respiratory waveform with the help of deep learning. To realize this potential, we introduce MoRe-Fi to conduct fine-grained respiration monitoring under body movements. MoRe-Fi leverages an IR-UWB radar to achieve contact-free sensing, and it fully exploits the complex radar signal for data augmentation. The core of MoRe-Fi is a novel variational encoder-decoder network; it aims to single out the respiratory waveform that is modulated by body movements in a non-linear manner. Our experiments with 12 subjects and 66 hours of data demonstrate that MoRe-Fi accurately recovers the respiratory waveform despite the interference caused by body movements. We also discuss potential applications of MoRe-Fi in pulmonary disease diagnosis.

* SenSys '21: Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, November 2021 
* 14 pages 
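
The variational encoder-decoder at the core of the system can be pictured with the toy sketch below, which maps an I/Q slow-time window (two channels) to a clean respiration waveform and adds the usual KL regularizer. Window length, layer sizes, and the beta weight are assumptions; the paper's network operates on the full complex radar signal and is considerably larger.

```python
import torch
import torch.nn as nn

class RespVED(nn.Module):
    def __init__(self, win=512, latent=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv1d(2, 16, 7, stride=2, padding=3), nn.ReLU(),
                                 nn.Flatten(), nn.Linear(16 * (win // 2), 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, win), nn.Tanh())

    def forward(self, iq):                                  # iq: (B, 2, win)
        mu, logvar = self.enc(iq).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.dec(z), mu, logvar

def loss_fn(recon, target, mu, logvar, beta=1e-3):
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return nn.functional.mse_loss(recon, target) + beta * kld

model = RespVED()
x = torch.randn(4, 2, 512)              # toy I/Q slow-time windows
wave, mu, logvar = model(x)
loss = loss_fn(wave, torch.randn(4, 512), mu, logvar)   # target: reference respiration
```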

RF-Net: a Unified Meta-learning Framework for RF-enabled One-shot Human Activity Recognition

Oct 29, 2021
Shuya Ding, Zhe Chen, Tianyue Zheng, Jun Luo

Radio-Frequency (RF) based device-free Human Activity Recognition (HAR) is emerging as a promising solution for many applications. However, device-free (or contactless) sensing is often more sensitive to environmental changes than device-based (or wearable) sensing. Also, RF datasets strictly require on-line labeling during collection, in stark contrast to image and text data collection, where human interpretation can be leveraged to perform off-line labeling. Therefore, existing solutions to RF-HAR entail a laborious data collection process when adapting to new environments. To this end, we propose RF-Net, a meta-learning based approach to one-shot RF-HAR that reduces the labeling effort for environment adaptation to a minimum. In particular, we first examine three representative RF sensing techniques and two major meta-learning approaches. The results motivate two innovative designs: i) a dual-path base HAR network, in which both the time and frequency domains are dedicated to learning powerful RF features, including spatial and attention-based temporal ones, and ii) a metric-based meta-learning framework that enhances the fast adaptation capability of the base network, including an RF-specific metric module along with a residual classification module. We conduct extensive experiments based on all three RF sensing techniques in multiple real-world indoor environments; all results strongly demonstrate the efficacy of RF-Net compared with state-of-the-art baselines.

* SenSys '20: Proceedings of the 18th Conference on Embedded Networked Sensor Systems, November 2020  
* 14 pages 
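
The metric-based, one-shot classification step can be summarized by the prototypical-style sketch below: encode one labeled example per activity, then assign each query window to the nearest prototype. Plain Euclidean distance and a single MLP stand in for RF-Net's learned RF-specific metric module and dual-path time/frequency encoder, so this is only an illustration of the general technique.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 32))

def one_shot_logits(support, query):
    """support: (n_classes, 2, 128) one labeled window per activity;
       query:   (n_query,   2, 128) unlabeled windows to classify."""
    prototypes = encoder(support)                    # (n_classes, 32)
    q = encoder(query)                               # (n_query, 32)
    dists = torch.cdist(q, prototypes)               # pairwise Euclidean distances
    return -dists                                    # higher logit = closer prototype

support = torch.randn(6, 2, 128)      # 6 activities, one shot each
query = torch.randn(10, 2, 128)
pred = one_shot_logits(support, query).argmax(dim=1)  # predicted activity per window
```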

SiWa: See into Walls via Deep UWB Radar

Oct 28, 2021
Tianyue Zheng, Zhe Chen, Jun Luo, Lin Ke, Chaoyang Zhao, Yaowen Yang

Being able to see into walls is crucial for diagnosing building health; it enables inspection of wall structure without undermining structural integrity. However, existing sensing devices do not offer the full capability of mapping in-wall structures while identifying their status (e.g., seepage and corrosion). In this paper, we design and implement SiWa as a low-cost and portable system for wall inspection. Built upon a customized IR-UWB radar, SiWa scans a wall as a user swipes its probe along the wall surface; it then analyzes the reflected signals to synthesize an image and to identify the material status. Although conventional schemes exist to handle these problems individually, they require troublesome calibrations that largely prevent their practical adoption. To this end, we equip SiWa with a deep learning pipeline to parse the rich sensory data. With a carefully designed architecture and innovative training, the deep learning modules perform structural imaging and the subsequent analysis of material status without the need for parameter tuning and calibration. We build SiWa as a prototype and evaluate its performance via extensive experiments and field studies; the results confirm that SiWa accurately maps in-wall structures, identifies their materials, and detects possible failures, suggesting a promising solution for diagnosing building health with lower effort and cost.

* MobiCom '21: Proceedings of the 27th Annual International Conference on Mobile Computing and Networking, October 2021 
* 14 pages 
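
A hypothetical two-head sketch of the pipeline described above: a shared convolutional encoder over the B-scan (the stack of radar traces collected during a swipe), with one head producing an in-wall structure map and another classifying material status. All shapes, layer sizes, and class counts are illustrative, not SiWa's actual network.

```python
import torch
import torch.nn as nn

class WallNet(nn.Module):
    def __init__(self, n_statuses=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.imaging = nn.Conv2d(32, 1, 1)                  # per-pixel in-wall structure map
        self.status = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                    nn.Linear(32, n_statuses))  # e.g., intact/seepage/corrosion

    def forward(self, bscan):                               # bscan: (B, 1, depth, positions)
        f = self.encoder(bscan)
        return self.imaging(f), self.status(f)

image, status = WallNet()(torch.randn(2, 1, 128, 64))       # toy B-scan input
```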

Enhancing RF Sensing with Deep Learning: A Layered Approach

Oct 28, 2021
Tianyue Zheng, Zhe Chen, Shuya Ding, Jun Luo

In recent years, radio frequency (RF) sensing has gained increasing popularity due to its pervasiveness, low cost, non-intrusiveness, and privacy preservation. However, realizing the promise of RF sensing is highly nontrivial, given typical challenges such as multipath and interference. One potential solution leverages deep learning to build direct mappings from the RF domain to target domains, hence avoiding complex RF physical modeling. While earlier solutions exploit only simple feature extraction and classification modules, an emerging trend adds functional layers on top of these elementary modules for more powerful generalizability and more flexible applicability. To better understand this potential, this article takes a layered approach to summarizing RF sensing enabled by deep learning. Essentially, we present a four-layer framework: physical, backbone, generalization, and application. While this layered framework provides readers with a systematic methodology for designing deep-learning-enhanced RF sensing, it also facilitates improvement proposals and hints at future research opportunities.

* IEEE Communications Magazine (Volume 59, Issue 2, February 2021) 
* 7 pages 
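
The four-layer framework can be read as a composition of stages, roughly as in the placeholder sketch below; the concrete choices per layer (signal preprocessing, backbone, generalization mechanism, task head) are exactly what the article surveys, so each module here is only an assumed stand-in.

```python
import torch
import torch.nn as nn

physical       = nn.Identity()                     # e.g., CSI/Doppler/range-FFT preprocessing
backbone       = nn.Sequential(nn.Flatten(), nn.Linear(256, 64), nn.ReLU())
generalization = nn.Identity()                     # e.g., domain-adaptation or meta-learning hooks
application    = nn.Linear(64, 8)                  # e.g., an 8-class activity head

rf_pipeline = nn.Sequential(physical, backbone, generalization, application)
out = rf_pipeline(torch.randn(4, 256))             # toy RF feature windows -> task outputs
```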