
Qingquan Li

MUSER: A Multi-View Similar Case Retrieval Dataset

Oct 24, 2023
Qingquan Li, Yiran Hu, Feng Yao, Chaojun Xiao, Zhiyuan Liu, Maosong Sun, Weixing Shen

Similar case retrieval (SCR) is a representative legal AI application that plays a pivotal role in promoting judicial fairness. However, existing SCR datasets judge case similarity only from the fact description section, ignoring other valuable sections (e.g., the court's opinion) that can reveal the reasoning behind a judgment. Furthermore, case similarities are typically measured solely by the textual semantics of the fact descriptions, which may fail to capture the full complexity of legal cases from the perspective of legal knowledge. In this work, we present MUSER, a similar case retrieval dataset based on multi-view similarity measurement, with comprehensive sentence-level legal element annotations. Specifically, we select three perspectives (legal fact, dispute focus, and legal statute) and build a comprehensive and structured label schema of legal elements for each of them, enabling accurate and knowledgeable evaluation of case similarities. The constructed dataset originates from Chinese civil cases and contains 100 query cases and 4,024 candidate cases. We implement several text classification algorithms for legal element prediction and various retrieval methods for similar case retrieval on MUSER. The experimental results indicate that incorporating legal elements can benefit the performance of SCR models, but further efforts are still required to address the remaining challenges posed by MUSER. The source code and dataset are released at https://github.com/THUlawtech/MUSER.
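One way such element annotations could feed a similarity measure is to score each of the three views separately and average them. The sketch below is a minimal illustration only, not MUSER's actual metric; the view names and the dictionary layout of a "case" are assumptions.

```python
def element_similarity(case_a, case_b):
    """Average Jaccard overlap of legal-element labels across three views.

    Each case is assumed (hypothetically) to be a dict mapping a view name
    to the set of element labels annotated for that view.
    """
    views = ("legal_fact", "dispute_focus", "legal_statute")
    scores = []
    for v in views:
        a, b = set(case_a[v]), set(case_b[v])
        # Jaccard overlap; a pair of empty views contributes zero similarity.
        scores.append(len(a & b) / len(a | b) if a | b else 0.0)
    return sum(scores) / len(scores)
```

In a retrieval pipeline, an element-level score like this would typically be combined with a textual relevance score (e.g., from BM25 or a dense retriever) to rank candidates.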

* Accepted by CIKM 2023 Resource Track

Efficient Match Pair Retrieval for Large-scale UAV Images via Graph Indexed Global Descriptor

Jul 10, 2023
San Jiang, Yichen Ma, Qingquan Li, Wanshou Jiang, Bingxuan Guo, Lelin Li, Lizhe Wang

SfM (Structure from Motion) has been extensively used for UAV (Unmanned Aerial Vehicle) image orientation, and its efficiency is directly influenced by feature matching. Although image retrieval has been extensively used for match pair selection, it incurs high computational costs due to the large number of local features and the large size of the codebook. This paper therefore proposes an efficient match pair retrieval method and implements an integrated workflow for parallel SfM reconstruction. First, an individual codebook is trained online, taking into account the redundancy of UAV images and local features, which avoids the ambiguity of codebooks trained on other datasets. Second, the local features of each image are aggregated into a single high-dimensional global descriptor through VLAD (Vector of Locally Aggregated Descriptors) aggregation using the trained codebook, which remarkably reduces the number of features and the burden of nearest neighbor searching in image indexing. Third, the global descriptors are indexed in an HNSW (Hierarchical Navigable Small World) graph structure for nearest neighbor searching. Match pairs are then retrieved using an adaptive threshold selection strategy and used to create a view graph for divide-and-conquer based parallel SfM reconstruction. Finally, the performance of the proposed solution is verified using three large-scale UAV datasets. The test results demonstrate that the proposed solution accelerates match pair retrieval with a speedup ratio ranging from 36 to 108 and improves the efficiency of SfM reconstruction with competitive accuracy in both relative and absolute orientation.
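The VLAD aggregation step described above can be sketched in a few lines of NumPy: each local descriptor is assigned to its nearest visual word, residuals are accumulated per word, and the concatenated result is normalised. This is a generic VLAD sketch, not the paper's implementation.

```python
import numpy as np

def vlad(descriptors, codebook):
    """Aggregate local descriptors into one global VLAD descriptor.

    descriptors: (n, d) local features; codebook: (k, d) visual words.
    Returns an L2-normalised (k * d,) global descriptor.
    """
    # Assign each descriptor to its nearest visual word.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    v = np.zeros_like(codebook)
    for i in range(codebook.shape[0]):
        members = descriptors[assign == i]
        if len(members):
            # Accumulate residuals between descriptors and their visual word.
            v[i] = (members - codebook[i]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))  # power normalisation
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Each image then contributes a single fixed-length vector, so nearest-neighbor search (e.g., in an HNSW index) operates on one descriptor per image rather than thousands of local features.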

Optimized Views Photogrammetry: Precision Analysis and A Large-scale Case Study in Qingdao

Jun 24, 2022
Qingquan Li, Wenshuai Yu, San Jiang

UAVs have become one of the most widely used remote sensing platforms and play a critical role in the construction of smart cities. However, due to the complex environments of urban scenes, secure and accurate data acquisition poses great challenges to 3D modeling and scene updating. Optimal trajectory planning of UAVs and accurate data collection by onboard cameras are non-trivial issues in urban modeling. This study presents the principle of optimized views photogrammetry and verifies its precision and potential in large-scale 3D modeling. Different from oblique photogrammetry, optimized views photogrammetry uses rough models to generate and optimize UAV trajectories by considering model point reconstructability and view point redundancy. Based on this principle, this study first conducts a precision analysis of 3D models built from optimized views photogrammetry images and then executes a large-scale case study in the urban region of Qingdao city, China, to verify its engineering potential. Using GCPs for image orientation precision analysis and TLS (terrestrial laser scanning) point clouds for model quality analysis, experimental results show that optimized views photogrammetry can construct stable image connection networks and achieve comparable image orientation accuracy. Benefiting from the accurate image acquisition strategy, the quality of mesh models improves significantly, especially for urban areas with serious occlusions, where 3 to 5 times higher accuracy is achieved. Moreover, the case study in Qingdao city verifies that optimized views photogrammetry can be a reliable and powerful solution for large-scale 3D modeling in complex urban scenes.
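As a toy illustration of the "model point reconstructability" idea, a common proxy in the multi-view literature is the triangulation angle between viewing rays: a point observed from a wider baseline triangulates more stably. The score below is an assumption for illustration only, not the paper's actual formulation.

```python
import numpy as np

def best_triangulation_angle(point, cams):
    """Toy reconstructability proxy: the largest pairwise angle (degrees)
    between rays from a 3D point to the camera centers observing it.
    """
    point = np.asarray(point, float)
    rays = [np.asarray(c, float) - point for c in cams]
    rays = [r / np.linalg.norm(r) for r in rays]
    best = 0.0
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            # Clip guards against floating-point values just outside [-1, 1].
            ang = np.degrees(np.arccos(np.clip(rays[i] @ rays[j], -1.0, 1.0)))
            best = max(best, ang)
    return best
```

A trajectory planner in this spirit would favor adding views that raise this score for poorly covered model points while pruning views that only duplicate existing coverage.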

* 16 pages, 24 figures 

Parallel Structure from Motion for UAV Images via Weighted Connected Dominating Set

Jun 24, 2022
San Jiang, Qingquan Li, Wanshou Jiang, Wu Chen

Incremental Structure from Motion (ISfM) has been widely used for UAV image orientation. Its efficiency, however, decreases dramatically due to its sequential constraint. Although the divide-and-conquer strategy has been used to improve efficiency, cluster merging becomes difficult or depends on carefully designed overlap structures. This paper proposes an algorithm to extract a global model for cluster merging and designs a parallel SfM solution for efficient and accurate UAV image orientation. First, based on vocabulary tree retrieval, match pairs are selected to construct an undirected weighted match graph, whose edge weights are calculated from both the number and the distribution of feature matches. Second, an algorithm termed weighted connected dominating set (WCDS) is designed to simplify the match graph and build the global model; it incorporates the edge weights in graph node selection and enables the successful reconstruction of the global model. Third, the match graph is simultaneously divided into compact, non-overlapping clusters. After parallel reconstruction, cluster merging is conducted with the aid of common 3D points between the global and cluster models. Finally, the proposed solution is validated through comprehensive analysis and comparison on three UAV datasets captured with classical oblique and recent optimized views photogrammetry. The experimental results demonstrate that the proposed parallel SfM achieves a 17.4-fold efficiency improvement with comparable orientation accuracy. In absolute BA, the geo-referencing accuracy is approximately 2.0 and 3.0 times the GSD (Ground Sampling Distance) in the horizontal and vertical directions, respectively. The proposed solution is thus a reliable alternative for parallel SfM.
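The connected-dominating-set idea can be illustrated with a simple greedy sketch: grow a connected set, always adding the frontier node that dominates the most not-yet-covered neighbors, breaking ties by weight. This is a generic greedy heuristic for illustration, not the paper's exact WCDS algorithm, and node weights here stand in for the paper's match-graph edge weights.

```python
def greedy_cds(adj, weight):
    """Greedy connected dominating set on an undirected, connected graph.

    adj: {node: set of neighbors}; weight: {node: float} preference score.
    Returns a connected set S such that every node is in S or adjacent to S.
    """
    start = max(adj, key=lambda n: weight[n])
    cds = {start}
    dominated = {start} | adj[start]
    while len(dominated) < len(adj):
        # Only neighbors of the current set keep it connected.
        frontier = {n for c in cds for n in adj[c]} - cds
        best = max(frontier, key=lambda n: (len(adj[n] - dominated), weight[n]))
        cds.add(best)
        dominated |= {best} | adj[best]
    return cds
```

In the paper's setting, images outside the dominating set would be assigned to clusters for parallel reconstruction, while the dominating set forms the global model that anchors cluster merging.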

* 14 pages, 11 figures 

Improving short-term bike sharing demand forecast through an irregular convolutional neural network

Feb 11, 2022
Xinyu Li, Yang Xu, Xiaohu Zhang, Wenzhong Shi, Yang Yue, Qingquan Li

As an important task in the management of bike sharing systems, accurate forecasting of travel demand can facilitate the dispatch and relocation of bicycles to improve user satisfaction. In recent years, many deep learning algorithms have been introduced to improve bicycle usage forecasts. A typical practice is to integrate convolutional (CNN) and recurrent (RNN) neural networks to capture spatial-temporal dependencies in historical travel demand. In a typical CNN, the convolution operation is conducted through a kernel that moves across a "matrix-format" city to extract features over spatially adjacent urban areas. This practice assumes that areas close to each other provide useful information that improves prediction accuracy. However, bicycle usage in neighboring areas might not always be similar, given spatial variations in built environment characteristics and travel behavior that affect cycling activities, while areas that are far apart can be relatively similar in their temporal usage patterns. To utilize the hidden linkage among such distant urban areas, this study proposes an irregular convolutional Long Short-Term Memory model (IrConv+LSTM) to improve short-term bike sharing demand forecasts. The model modifies the traditional CNN with an irregular convolutional architecture that extracts dependencies among "semantic neighbors". The proposed model is evaluated against a set of benchmark models in five study sites: one dockless bike sharing system in Singapore and four station-based systems in Chicago, Washington, D.C., New York, and London. We find that IrConv+LSTM outperforms the benchmark models in all five cities. The model also achieves superior performance in areas with varying levels of bicycle usage and during peak periods. The findings suggest that "thinking beyond spatial neighbors" can further improve short-term travel demand prediction for urban bike sharing systems.
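The "semantic neighbors" idea — convolving each area's demand with that of its most temporally correlated areas rather than its spatially adjacent ones — can be sketched as below. Selecting neighbors by Pearson correlation and applying a single fixed kernel are illustrative assumptions, not the trained model.

```python
import numpy as np

def semantic_neighbors(history, k):
    """Indices of each area's k most temporally correlated other areas.

    history: (areas, timesteps) demand series.
    """
    corr = np.corrcoef(history)      # Pearson correlation between area series
    np.fill_diagonal(corr, -np.inf)  # exclude each area from its own neighbors
    return np.argsort(-corr, axis=1)[:, :k]

def irregular_conv_step(history, kernel):
    """One 'irregular convolution': a weighted sum over each area's
    semantic neighbors at the latest timestep, instead of a spatial window.
    """
    nbrs = semantic_neighbors(history, len(kernel))
    return (history[nbrs, -1] * kernel).sum(axis=1)
```

In the full model, features gathered this way would feed an LSTM that captures the temporal dynamics of demand.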

* 20 pages with 9 figures 

An adaptive Origin-Destination flows cluster-detecting method to identify urban mobility trends

Jun 10, 2021
Mengyuan Fang, Luliang Tang, Zihan Kan, Xue Yang, Tao Pei, Qingquan Li, Chaokui Li

Origin-Destination (OD) flow, as an abstract representation of an object's movement or interaction, has been used to reveal urban mobility and human-land interaction patterns. As an important spatial analysis approach, clustering methods for point events have been extended to OD flows to identify the dominant trends and spatial structures of urban mobility. However, existing OD flow cluster-detecting methods are limited to specific spatial scales and yield uncertain results that depend on parameter settings, which makes clustering complicated OD flows under spatial heterogeneity difficult. To address these limitations, this paper proposes a novel OD flow cluster-detecting method based on the OPTICS algorithm that can identify OD flow clusters at various aggregation scales. The method adaptively determines parameter values from the dataset without prior knowledge or manual intervention. Experiments indicate that our method outperforms three state-of-the-art methods, producing more accurate and complete clusters with less noise. As a case study, our method is applied to identify potential routes for public transport service planning by detecting OD flow clusters in urban travel data.
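Extending point clustering to OD flows requires a flow-to-flow dissimilarity rather than a point distance. A common illustrative choice (not necessarily the paper's exact measure) compares origins with origins and destinations with destinations, normalised by flow length:

```python
import numpy as np

def flow_distance(f1, f2):
    """Dissimilarity between two OD flows given as (ox, oy, dx, dy).

    Averages the origin-origin and destination-destination Euclidean
    distances and normalises by the mean flow length, so long flows
    tolerate larger absolute endpoint offsets.
    """
    f1, f2 = np.asarray(f1, float), np.asarray(f2, float)
    d_o = np.linalg.norm(f1[:2] - f2[:2])  # origin-to-origin distance
    d_d = np.linalg.norm(f1[2:] - f2[2:])  # destination-to-destination distance
    length = (np.linalg.norm(f1[2:] - f1[:2]) + np.linalg.norm(f2[2:] - f2[:2])) / 2
    return (d_o + d_d) / (2 * length)
```

A density-based algorithm such as OPTICS can then be run on the flows using a distance like this in place of the usual point metric.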

Deep Learning based Monocular Depth Prediction: Datasets, Methods and Applications

Nov 09, 2020
Qing Li, Jiasong Zhu, Jun Liu, Rui Cao, Qingquan Li, Sen Jia, Guoping Qiu

Estimating depth from RGB images can facilitate many computer vision tasks, such as indoor localization, height estimation, and simultaneous localization and mapping (SLAM). Recently, monocular depth estimation has made great progress owing to the rapid development of deep learning techniques, which surpass traditional machine learning-based methods by a large margin in both accuracy and speed. Despite this rapid progress, a comprehensive review that summarizes the current state of the art and outlines future directions is still lacking. In this survey, we first introduce the datasets for depth estimation and then give a comprehensive introduction to the methods from three perspectives: supervised learning-based methods, unsupervised learning-based methods, and sparse-sample guidance-based methods. In addition, downstream applications that benefit from this progress are illustrated. Finally, we point out future directions and conclude the paper.
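A representative piece of machinery behind supervised monocular depth prediction is the scale-invariant log error introduced by Eigen et al. (2014), which many surveyed methods train or evaluate with. A NumPy sketch:

```python
import numpy as np

def scale_invariant_loss(pred, gt, lam=0.5):
    """Scale-invariant log depth error (Eigen et al., 2014).

    With lam = 1 the loss is fully invariant to a global scaling of the
    prediction; lam = 0.5 is the commonly used training compromise.
    """
    d = np.log(pred) - np.log(gt)  # per-pixel log-depth difference
    return np.mean(d ** 2) - lam * np.mean(d) ** 2
```

The subtracted mean term is what makes the loss forgive a uniform scale offset: predicting every depth twice too large shifts all log differences by the same constant, which the second term cancels when lam = 1.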

Enhancing Remote Sensing Image Retrieval with Triplet Deep Metric Learning Network

Feb 15, 2019
Rui Cao, Qian Zhang, Jiasong Zhu, Qing Li, Qingquan Li, Bozhi Liu, Guoping Qiu

With the rapid growth of remotely sensed imagery data, there is a high demand for effective and efficient image retrieval tools to manage and exploit such data. In this letter, we present a novel content-based remote sensing image retrieval method based on a Triplet deep metric learning convolutional neural network (CNN). By constructing a Triplet network with a metric learning objective function, we extract representative image features in a semantic space in which images from the same class are close to each other while those from different classes are far apart. In such a space, simple metric measures such as Euclidean distance can be used directly to compare the similarity of images and effectively retrieve images of the same class. We also investigate supervised and unsupervised methods for reducing the dimensionality of the learned semantic features. We present comprehensive experimental results on two publicly available remote sensing image retrieval datasets and show that our method significantly outperforms state-of-the-art approaches.
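The core of the Triplet objective is the margin loss: embeddings of same-class pairs are pulled together while different-class pairs are pushed at least a margin apart. A minimal NumPy sketch of the per-triplet loss (the paper's exact formulation and margin value may differ):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Triplet margin loss on embedding vectors.

    Zero when the negative is already at least `margin` farther from the
    anchor than the positive; positive otherwise, driving the gradient.
    """
    d_pos = np.linalg.norm(anchor - positive)  # same-class distance
    d_neg = np.linalg.norm(anchor - negative)  # different-class distance
    return max(0.0, d_pos - d_neg + margin)
```

Once a network is trained under this objective, retrieval reduces to a plain nearest-neighbor search in the embedding space, which is why Euclidean distance suffices at query time.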

* 5 pages, 7 figures, 3 tables 