Dong Zhou

An Interpretability Framework for Similar Case Matching

Apr 04, 2023
Nankai Lin, Haonan Liu, Jiajun Fang, Dong Zhou, Aimin Yang

Similar Case Matching (SCM) is designed to determine whether two legal cases are similar. The task plays an essential role in the legal system, helping legal professionals find relevant cases quickly and thus handle them more efficiently. Existing research has focused on improving models' performance but not on their interpretability. Therefore, this paper proposes a pipeline framework for interpretable SCM, which consists of four modules: a judicial feature sentence identification module, a case matching module, a feature sentence alignment module, and a conflict disambiguation module. Unlike existing SCM methods, our framework identifies the feature sentences in a case that contain essential information, performs similar case matching based on the extracted feature sentences, and aligns the feature sentences in the two cases to provide evidence for their similarity. The SCM result may conflict with the feature sentence alignment result, and our framework further resolves this inconsistency. The experimental results show the effectiveness of our framework, and our work provides a new benchmark for interpretable SCM.
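
A minimal sketch of the four-stage pipeline, assuming plain-text cases. The module names come from the abstract; the keyword heuristic, similarity measure, and threshold below are illustrative stand-ins, not the paper's actual models.

```python
# Hypothetical pipeline skeleton; each function stands in for a learned module.
from difflib import SequenceMatcher

LEGAL_KEYWORDS = {"defendant", "contract", "damages", "theft"}  # assumed vocabulary

def identify_feature_sentences(case: str) -> list[str]:
    """Module 1: keep sentences carrying essential (here: keyword) information."""
    sentences = [s.strip() for s in case.split(".") if s.strip()]
    return [s for s in sentences if LEGAL_KEYWORDS & set(s.lower().split())]

def match_cases(feats_a: list[str], feats_b: list[str]) -> float:
    """Module 2: score case similarity from the extracted feature sentences."""
    return SequenceMatcher(None, " ".join(feats_a), " ".join(feats_b)).ratio()

def align_sentences(feats_a: list[str], feats_b: list[str]) -> list[tuple[str, str]]:
    """Module 3: pair each feature sentence with its best match as evidence."""
    return [max(((a, b) for b in feats_b),
                key=lambda p: SequenceMatcher(None, *p).ratio())
            for a in feats_a] if feats_b else []

def disambiguate(score: float, alignment: list, threshold: float = 0.5) -> bool:
    """Module 4: resolve conflicts between the match score and the alignment."""
    return score >= threshold and bool(alignment)
```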

Model and Evaluation: Towards Fairness in Multilingual Text Classification

Mar 28, 2023
Nankai Lin, Junheng He, Zhenghang Tang, Dong Zhou, Aimin Yang

Recently, more and more research has focused on addressing bias in text classification models. However, existing work mainly concerns the fairness of monolingual models; fairness in multilingual text classification remains largely unexplored. In this paper, we focus on multilingual text classification and propose a debiasing framework based on contrastive learning. Our method relies on no external language resources and can be extended to any other language. The model contains four modules: a multilingual text representation module, a language fusion module, a text debiasing module, and a text classification module. The representation module encodes the text with a multilingual pre-trained language model; the language fusion module uses contrastive learning to make the semantic spaces of different languages consistent; the text debiasing module uses contrastive learning to prevent the model from identifying sensitive-attribute information; and the classification module performs the underlying multilingual classification task. In addition, existing evaluations of fairness in multilingual text classification are relatively simple: they reuse the monolingual equality-difference metric, i.e., they evaluate each language in isolation. We therefore propose a multi-dimensional fairness evaluation framework for multilingual text classification that measures a model's monolingual equality difference, multilingual equality difference, multilingual equality performance difference, and the destructiveness of the fairness strategy. We hope that our work provides a more general debiasing method and a more comprehensive evaluation framework for multilingual text fairness tasks.
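
A minimal sketch of the language fusion idea, assuming a PyTorch setup in which the same text is encoded in two languages and the paired views serve as positives; the InfoNCE form and temperature are illustrative assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def language_fusion_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss: pull the two language views of the same
    text together, push other batch items apart."""
    z_a = F.normalize(z_a, dim=-1)           # (batch, dim)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)
```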

On Deep Recurrent Reinforcement Learning for Active Visual Tracking of Space Noncooperative Objects

Dec 29, 2022
Dong Zhou, Guanghui Sun, Zhao Zhang, Ligang Wu

Active tracking of a space noncooperative object using only a vision camera is of great significance for autonomous rendezvous and debris removal. Considering the task's partially observable Markov decision process (POMDP) property, this paper proposes a novel tracker based on deep recurrent reinforcement learning, named RAMAVT, which drives the chasing spacecraft to follow an arbitrary space noncooperative object with high-frequency, near-optimal velocity control commands. To further improve active tracking performance, we introduce a Multi-Head Attention (MHA) module and a Squeeze-and-Excitation (SE) layer into RAMAVT, which remarkably improve the representational ability of the neural network at almost no extra computational cost. Extensive experiments and an ablation study on the SNCOAT benchmark show the effectiveness and robustness of our method compared with other state-of-the-art algorithms. The source code is available at https://github.com/Dongzhou-1996/RAMAVT.
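
The SE layer mentioned above is a standard component; a textbook PyTorch implementation is sketched below. The reduction ratio is the common default, not necessarily the paper's setting.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels by globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global context
        self.fc = nn.Sequential(                       # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # rescale each channel
```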

Research on the application of contrastive learning in multi-label text classification

Dec 01, 2022
Nankai Lin, Guanqiu Qin, Jigang Wang, Aimin Yang, Dong Zhou

The effective application of contrastive learning in natural language processing shows its superiority for text analysis tasks. Constructing positive and negative samples correctly and reasonably is the core challenge of contrastive learning. Because it is difficult to construct contrastive pairs in multi-label text classification, few contrastive losses exist for this setting. In this paper, we propose five contrastive losses for multi-label text classification: Strict Contrastive Loss (SCL), Intra-label Contrastive Loss (ICL), Jaccard Similarity Contrastive Loss (JSCL), Jaccard Similarity Probability Contrastive Loss (JSPCL), and Stepwise Label Contrastive Loss (SLCL). We explore the effectiveness of contrastive learning for multi-label classification under these different strategies and provide a set of baseline methods for contrastive learning on multi-label classification tasks. We also perform an interpretability analysis of our approach to show how the different contrastive learning methods play their roles. The experimental results demonstrate that our proposed contrastive losses bring improvements for multi-label classification, and our work reveals that "appropriately" adapting the contrastive objective is the key to making contrastive learning fit multi-label classification tasks.
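
One plausible reading of JSCL, sketched in PyTorch: weight each pair's contribution by the Jaccard similarity of the two samples' label sets, so items sharing more labels are pulled closer. The exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def jscl(z: torch.Tensor, labels: torch.Tensor,
         temperature: float = 0.1) -> torch.Tensor:
    """z: (batch, dim) embeddings; labels: (batch, n_labels) multi-hot floats."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / temperature
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    log_p = F.log_softmax(sim.masked_fill(eye, float("-inf")), dim=1)
    inter = labels @ labels.t()                                # |A ∩ B|
    union = labels.sum(1, keepdim=True) + labels.sum(1) - inter
    jac = (inter / union.clamp(min=1)).masked_fill(eye, 0.0)   # Jaccard weights
    return -(jac * log_p.masked_fill(eye, 0.0)).sum() / jac.sum().clamp(min=1e-8)
```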

Temporally and Spatially variant-resolution illumination patterns in computational ghost imaging

May 14, 2022
Dong Zhou, Jie Cao, Huan Cui, Li-Xing Lin, Haoyu Zhang, Yingqiang Zhang, Qun Hao

Conventional computational ghost imaging (CGI) illuminates the object with light carrying a sequence of uniform-resolution patterns, then reconstructs the object image by correlating the light intensity reflected from the target with the preset patterns. It requires a large number of measurements to obtain high-quality images, especially at high resolution. To solve this problem, we developed temporally variable-resolution illumination patterns, replacing the conventional uniform-resolution patterns with a sequence of patterns at different imaging resolutions. In addition, we propose combining temporally variable-resolution patterns with a spatially variable-resolution structure to obtain temporally and spatially variable-resolution (TSV) illumination patterns, which improve both the imaging quality of the region of interest (ROI) and the robustness to noise. The proposed methods are verified by simulations and experiments against CGI. For the same number of measurements, the method using temporally variable-resolution patterns achieves better imaging quality than CGI but is less robust to noise, while the method using TSV patterns achieves better imaging quality in the ROI than both. We also experimentally verify that the TSV method performs better at higher imaging resolutions. The proposed methods are expected to address the difficulty current computational ghost imaging faces in achieving high-resolution, high-quality imaging.
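
A minimal numpy sketch of the temporally variable-resolution idea: cycle through pattern resolutions over the measurement sequence, upsampling each coarse random pattern to the projector size. The resolution schedule and binary patterns are illustrative assumptions, as is the textbook correlation reconstruction shown alongside.

```python
import numpy as np

def temporally_variable_patterns(n_patterns: int, size: int = 64,
                                 resolutions=(8, 16, 32, 64), seed: int = 0):
    """(n_patterns, size, size) binary patterns whose effective resolution
    cycles through `resolutions` over the measurement sequence."""
    rng = np.random.default_rng(seed)
    patterns = []
    for i in range(n_patterns):
        r = resolutions[i % len(resolutions)]
        coarse = rng.integers(0, 2, (r, r)).astype(float)
        patterns.append(np.kron(coarse, np.ones((size // r, size // r))))
    return np.stack(patterns)

def reconstruct(patterns: np.ndarray, intensities: np.ndarray) -> np.ndarray:
    """Standard CGI correlation: G = <I * P> - <I><P>, I being bucket signals."""
    return (np.tensordot(intensities, patterns, axes=1) / len(patterns)
            - intensities.mean() * patterns.mean(axis=0))
```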

GenAD: General Representations of Multivariate Time Series for Anomaly Detection

Feb 09, 2022
Xiaolei Hua, Lin Zhu, Shenglin Zhang, Zeyan Li, Su Wang, Dong Zhou, Shuo Wang, Chao Deng

The reliability of wireless base stations at China Mobile is of vital importance, because cell phone users connect to the stations and station behavior directly affects user experience. Although station behavior can be monitored by anomaly detection on multivariate time series, the complex correlations and varied temporal patterns of multivariate series across large-scale stations make it challenging to build a general unsupervised anomaly detection model with a high F1-score. In this paper, we propose GenAD, a General representation of multivariate time series for Anomaly Detection. First, we pre-train a general model on large-scale wireless base stations with self-supervision; the model can then be transferred to anomaly detection on a specific station with a small amount of training data. Second, we employ Multi-Correlation Attention and Time-Series Attention to represent the correlations and temporal patterns of the stations. With these innovations, GenAD increases the F1-score by a total of 9% on real-world datasets from China Mobile, while performance does not significantly degrade on public datasets with only 10% of the training data.
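
A hedged sketch of the self-supervised pre-training idea: mask random timesteps of the multivariate series and train an encoder to reconstruct them. The generic transformer encoder below stands in for the paper's Multi-Correlation Attention and Time-Series Attention, whose exact designs are not specified here.

```python
import torch
import torch.nn as nn

class TSEncoder(nn.Module):
    """Toy multivariate time-series encoder with a reconstruction head."""
    def __init__(self, n_series: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.proj = nn.Linear(n_series, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_series)

    def forward(self, x):                     # x: (batch, time, n_series)
        return self.head(self.encoder(self.proj(x)))

def masked_pretrain_loss(model, x, mask_ratio: float = 0.15):
    """Zero out random timesteps and score reconstruction only where masked."""
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio  # (batch, time)
    x_in = x.clone()
    x_in[mask] = 0.0
    recon = model(x_in)
    return ((recon - x)[mask] ** 2).mean()
```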

Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning

Dec 18, 2021
Dong Zhou, Guanghui Sun, Wenxiao Lei

Active visual tracking of a space non-cooperative object is significant for future intelligent spacecraft performing space debris removal, asteroid exploration, and autonomous rendezvous and docking. However, existing works often decompose this task into separate subproblems (e.g., image preprocessing, feature extraction and matching, position and pose estimation, control law design) and optimize each module alone, which is tedious and sub-optimal. To this end, we propose DRLAVT, an end-to-end active visual tracking method based on the DQN algorithm. It guides the chasing spacecraft to approach an arbitrary space non-cooperative target relying merely on color or RGB-D images, and significantly outperforms a position-based visual servoing baseline that adopts a state-of-the-art 2D monocular tracker, SiamRPN. Extensive experiments with diverse network architectures, different perturbations, and multiple targets demonstrate the advancement and robustness of DRLAVT. In addition, we further show that our method indeed learns the target's motion patterns through deep reinforcement learning over hundreds of trial-and-error episodes.
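
DRLAVT builds on DQN; below is the standard DQN temporal-difference loss in PyTorch, assuming the velocity commands are discretized into actions. The networks and replay buffer are omitted, and nothing here is specific to the paper's architecture.

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma: float = 0.99) -> torch.Tensor:
    """Standard DQN TD loss. batch: (states, actions, rewards, next_states, dones)."""
    s, a, r, s_next, done = batch
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a) actually taken
    with torch.no_grad():
        q_next = target_net(s_next).max(1).values        # max_a' Q_target(s', a')
        target = r + gamma * (1.0 - done) * q_next       # bootstrap unless terminal
    return F.smooth_l1_loss(q, target)
```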

3D Visual Tracking Framework with Deep Learning for Asteroid Exploration

Nov 21, 2021
Dong Zhou, Guanghui Sun, Xiaopeng Hong

3D visual tracking is significant for deep space exploration programs, as it can guarantee that a spacecraft flexibly approaches its target. In this paper, we focus on an accurate and real-time method for 3D tracking. Given that there is almost no public dataset for this topic, we present a new large-scale 3D asteroid tracking dataset, including binocular video sequences, depth maps, and point clouds of diverse asteroids with various shapes and textures. Benefiting from the power and convenience of the simulation platform, all 2D and 3D annotations are generated automatically. Meanwhile, we propose Track3D, a deep-learning-based 3D tracking framework that combines a 2D monocular tracker with a novel lightweight amodal axis-aligned bounding-box network, A3BoxNet. The evaluation results demonstrate that Track3D achieves state-of-the-art 3D tracking performance in both accuracy and precision compared with a baseline algorithm. Moreover, our framework generalizes well to 2D monocular tracking.
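
For intuition, an axis-aligned 3D bounding box of the kind A3BoxNet regresses can be summarized by a center and an extent. The trivial numpy version below computes the visible box of a point cloud, purely as an illustration; the amodal box predicted by the network would also cover occluded parts.

```python
import numpy as np

def axis_aligned_box(points: np.ndarray):
    """points: (N, 3) target point cloud -> (center, size) of the visible
    axis-aligned box; A3BoxNet instead *predicts* the amodal box."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    return (lo + hi) / 2.0, hi - lo
```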

Omnidirectional ghost imaging system and unwrapping-free panoramic ghost imaging

Aug 11, 2021
Huan Cui, Jie Cao, Qun Hao, Dong Zhou, Mingyuan Tang, Kaiyu Zhang, Yingqiang Zhang

Ghost imaging (GI) is a novel imaging method that reconstructs object information from light-intensity correlation measurements. At present, however, the field of view (FOV) is limited to the illuminated range of the light patterns. To enlarge the FOV of GI efficiently, we propose the omnidirectional ghost imaging system (OGIS), which achieves a 360° omnidirectional FOV in a single shot simply by adding a curved mirror. Moreover, by designing retina-like annular patterns on a log-polar grid, OGIS obtains unwrapping-free, undistorted panoramic images with uniform resolution, which opens up a new way for the application of GI.
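
A sketch of one retina-like annular pattern on a log-polar grid, in numpy. The ring and sector counts and the binary cell values are illustrative assumptions; only the log-polar cell layout reflects the idea described above.

```python
import numpy as np

def log_polar_annular_pattern(size: int = 128, n_rings: int = 16,
                              n_sectors: int = 32, seed: int = 0) -> np.ndarray:
    """One binary pattern whose cells follow a log-polar (retina-like) layout."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r, theta = np.hypot(x, y), np.arctan2(y, x)
    r_max = size / 2.0
    # Logarithmic radial bins: rings grow wider toward the periphery.
    ring = np.clip((np.log(np.maximum(r, 1.0)) / np.log(r_max)
                    * n_rings).astype(int), 0, n_rings - 1)
    sector = (((theta + np.pi) / (2 * np.pi)) * n_sectors).astype(int) % n_sectors
    values = rng.integers(0, 2, n_rings * n_sectors)       # random cell on/off
    return values[ring * n_sectors + sector] * (r <= r_max)
```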
