Hongbo Wang

DocRED-FE: A Document-Level Fine-Grained Entity And Relation Extraction Dataset

Mar 21, 2023
Hongbo Wang, Weimin Xiong, Yifan Song, Dawei Zhu, Yu Xia, Sujian Li

Figures 1–4 for DocRED-FE: A Document-Level Fine-Grained Entity And Relation Extraction Dataset

Joint entity and relation extraction (JERE) is one of the most important tasks in information extraction. However, most existing work focuses on sentence-level, coarse-grained JERE, which limits its applicability in real-world scenarios. In this paper, we construct DocRED-FE, a large-scale document-level fine-grained JERE dataset that augments DocRED with fine-grained entity types. Specifically, we redesign a hierarchical entity type schema comprising 11 coarse-grained types and 119 fine-grained types, and then manually re-annotate DocRED according to this schema. Through comprehensive experiments we find that: (1) DocRED-FE is challenging for existing JERE models; (2) our fine-grained entity types promote relation classification. We make DocRED-FE, with instructions and the code for our baselines, publicly available at https://github.com/PKU-TANGENT/DOCRED-FE.
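The hierarchical schema described above pairs each fine-grained entity type with a coarse-grained parent. A minimal sketch of such a two-level schema, using hypothetical type names (the actual 11 coarse and 119 fine types are defined in the released dataset):

```python
# Hypothetical fine-to-coarse mapping; DocRED-FE's real schema is
# distributed with the dataset and is far larger (119 fine types).
FINE_TO_COARSE = {
    "city": "location",
    "country": "location",
    "politician": "person",
    "athlete": "person",
}

def coarse_type(fine: str) -> str:
    """Map a fine-grained entity type to its coarse-grained parent."""
    return FINE_TO_COARSE[fine]
```

With such a mapping, a model trained on fine-grained labels can always be evaluated at the coarse level by projecting each prediction through `coarse_type`.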

* Accepted by IEEE ICASSP 2023. The first two authors contributed equally.

AutoFed: Heterogeneity-Aware Federated Multimodal Learning for Robust Autonomous Driving

Feb 24, 2023
Tianyue Zheng, Ang Li, Zhe Chen, Hongbo Wang, Jun Luo

Figures 1–4 for AutoFed: Heterogeneity-Aware Federated Multimodal Learning for Robust Autonomous Driving

Object detection with on-board sensors (e.g., lidar, radar, and camera) plays a crucial role in autonomous driving (AD), and these sensors complement each other in modality. While crowdsensing could potentially exploit these sensors (available in huge quantity) to derive more comprehensive knowledge, federated learning (FL) appears to be the necessary tool to realize this potential: it enables autonomous vehicles (AVs) to train machine learning models without explicitly sharing raw sensory data. However, multimodal sensors introduce various forms of data heterogeneity across distributed AVs (e.g., label quantity skew and varied modalities), posing critical challenges to effective FL. To this end, we present AutoFed, a heterogeneity-aware FL framework that fully exploits multimodal sensory data on AVs to enable robust AD. Specifically, we first propose a novel model that leverages pseudo-labeling to avoid mistakenly treating unlabeled objects as background. We also propose an autoencoder-based data imputation method to fill in the missing data modalities (of certain AVs) using the available ones. To further reconcile the heterogeneity, we finally present a client selection mechanism that exploits the similarities among client models to improve both training stability and convergence rate. Our experiments on a benchmark dataset confirm that AutoFed substantially improves over status-quo approaches in both precision and recall, while demonstrating strong robustness to adverse weather conditions.
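The client selection step described above can be illustrated with a similarity-based rule. The sketch below selects the clients whose model weights lie closest (by cosine similarity) to the average model, filtering out outlier clients; this is an assumed simplification for illustration, not AutoFed's exact mechanism:

```python
import numpy as np

def select_clients(client_weights, k):
    """Select the k clients whose (flattened) model weights are most
    similar, by cosine similarity, to the mean of all client models.

    client_weights: list of NumPy arrays, one weight vector per client.
    Returns the indices of the k selected clients.
    """
    W = np.stack([w.ravel() for w in client_weights])
    mean = W.mean(axis=0)
    sims = W @ mean / (np.linalg.norm(W, axis=1) * np.linalg.norm(mean) + 1e-12)
    # Highest-similarity clients first; keep the top k.
    return np.argsort(sims)[::-1][:k]
```

Aggregating only over the selected clients each round excludes models that drifted far from the consensus, which is one common way to stabilize federated training under heterogeneity.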


Dynamic Interactional And Cooperative Network For Shield Machine

Nov 17, 2022
Dazhi Gao, Rongyang Li, Hongbo Wang, Lingfeng Mao, Huansheng Ning

Figures 1–4 for Dynamic Interactional And Cooperative Network For Shield Machine

The shield machine (SM) is a complex mechanical device used for tunneling. In traditional construction, however, monitoring and decision-making rely mainly on human experience, which introduces limitations such as hidden mechanical failures, human operator error, and sensor anomalies. To address these challenges, many scholars have studied intelligent methods for SMs, but most of these methods consider only the SM itself and ignore its operating environment. This paper therefore examines the relationships among the SM, geological information, and control terminals, and, based on these relationships, builds models for the control terminal covering SM rate prediction and SM anomaly detection. Experimental results show that the proposed models outperform baseline models: for rate prediction, R2 reaches 92.2% and MSE reaches 0.0064, while the detection rate for anomaly detection is up to 98.2%.
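The rate-prediction results above are reported as R2 and MSE. As a reminder of what those metrics measure, a minimal self-contained sketch (standard definitions, not the paper's evaluation code):

```python
def mse(y_true, y_pred):
    """Mean squared error: average squared deviation of predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual
    variance to the variance of the true values (1.0 is a perfect fit)."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

An R2 of 92.2% thus means the model explains about 92% of the variance in the observed SM rate, while the MSE of 0.0064 is reported on the paper's own scale for the rate variable.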
