Jidong Ge

LawBench: Benchmarking Legal Knowledge of Large Language Models

Sep 28, 2023
Zhiwei Fei, Xiaoyu Shen, Dawei Zhu, Fengzhe Zhou, Zhuo Han, Songyang Zhang, Kai Chen, Zongwen Shen, Jidong Ge

Large language models (LLMs) have demonstrated strong capabilities in various aspects. However, when applying them to the highly specialized, safety-critical legal domain, it is unclear how much legal knowledge they possess and whether they can reliably perform legal-related tasks. To address this gap, we propose LawBench, a comprehensive evaluation benchmark. LawBench has been meticulously crafted to provide a precise assessment of LLMs' legal capabilities at three cognitive levels: (1) Legal knowledge memorization: whether LLMs can memorize needed legal concepts, articles and facts; (2) Legal knowledge understanding: whether LLMs can comprehend entities, events and relationships within legal text; (3) Legal knowledge applying: whether LLMs can properly utilize their legal knowledge and perform the necessary reasoning steps to solve realistic legal tasks. LawBench contains 20 diverse tasks covering 5 task types: single-label classification (SLC), multi-label classification (MLC), regression, extraction and generation. We perform extensive evaluations of 51 LLMs on LawBench, including 20 multilingual LLMs, 22 Chinese-oriented LLMs and 9 legal-specific LLMs. The results show that GPT-4 remains the best-performing LLM in the legal domain, surpassing the others by a significant margin. While fine-tuning LLMs on legal-specific text brings certain improvements, we are still a long way from obtaining usable and reliable LLMs for legal tasks. All data, model predictions and evaluation code are released at https://github.com/open-compass/LawBench/. We hope this benchmark provides an in-depth understanding of LLMs' domain-specific capabilities and speeds up the development of LLMs in the legal domain.
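
As a rough illustration of how results on a benchmark like this might be summarized, the Python sketch below averages per-task scores within each of the three cognitive levels. The task names and numbers are invented for illustration; they are not taken from LawBench's released results.

```python
from collections import defaultdict

# Hypothetical per-task scores keyed by (cognitive_level, task_name).
# Both the task names and the numbers are made up for illustration.
scores = {
    ("memorization", "article_recitation"): 0.41,
    ("understanding", "entity_recognition"): 0.62,
    ("understanding", "event_detection"): 0.57,
    ("applying", "charge_prediction"): 0.55,
}

by_level = defaultdict(list)
for (level, _task), value in scores.items():
    by_level[level].append(value)

# Report the mean score per cognitive level.
for level, values in sorted(by_level.items()):
    print(f"{level}: {sum(values) / len(values):.3f}")
```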

Judicial Intelligent Assistant System: Extracting Events from Divorce Cases to Detect Disputes for the Judge

Mar 23, 2023
Yuan Zhang, Chuanyi Li, Yu Sheng, Jidong Ge, Bin Luo

In the formal procedure of civil cases, the textual materials provided by the different parties describe how the case developed. Extracting the key information of a case from these materials and clarifying the dispute focus of the related parties is a difficult but necessary task. Currently, officers read the materials manually and use methods such as keyword search and regular-expression matching to find the target information. These approaches are time-consuming and depend heavily on the officers' prior knowledge and carefulness. To help officers work more efficiently and accurately, we propose an approach to detect disputes in divorce cases based on a two-round-labeling event extraction technique. We implement the Judicial Intelligent Assistant (JIA) system according to the proposed approach to 1) automatically extract focus events from divorce case materials, 2) align events by identifying co-reference among them, and 3) detect conflicts among events brought by the plaintiff and the defendant. With the JIA system, judges can conveniently determine the disputed issues. Experimental results demonstrate that the proposed approach and system obtain the focus of cases and detect conflicts more effectively and efficiently compared with existing methods.

* 20 pages 
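
To make the pipeline concrete, here is a minimal Python sketch of the conflict-detection step, assuming extracted events are reduced to a type plus argument slots. The event schema and the alignment rule (matching on event type) are simplifications invented here; the actual JIA system aligns events via co-reference identification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    # Minimal event representation: an event type plus argument slots.
    etype: str
    args: tuple  # e.g. (("claimed_by", "plaintiff"),)

def detect_conflicts(plaintiff_events, defendant_events):
    """Flag aligned event pairs whose arguments disagree.

    Alignment here is naive (same event type); the real system
    identifies co-reference between events before comparing them.
    """
    conflicts = []
    for p in plaintiff_events:
        for d in defendant_events:
            if p.etype == d.etype and dict(p.args) != dict(d.args):
                conflicts.append((p, d))
    return conflicts

plaintiff = [Event("child_custody", (("claimed_by", "plaintiff"),))]
defendant = [Event("child_custody", (("claimed_by", "defendant"),))]
print(detect_conflicts(plaintiff, defendant))
```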

Learning the Relation between Similarity Loss and Clustering Loss in Self-Supervised Learning

Jan 08, 2023
Jidong Ge, Yuxiang Liu, Jie Gui, Lanting Fang, Ming Lin, James Tin-Yau Kwok, LiGuo Huang, Bin Luo

Self-supervised learning enables networks to learn discriminative features from massive amounts of data without manual annotation. Most state-of-the-art methods maximize the similarity between two augmentations of one image based on contrastive learning; by exploiting the consistency of the two augmentations, the burden of manual annotation is removed. Contrastive learning exploits instance-level information to learn robust features, but the learned information is probably confined to different views of the same instance. In this paper, we attempt to leverage the similarity between two distinct images to boost representations in self-supervised learning: in contrast to instance-level information, the similarity between two distinct images may provide more useful information. Besides, we analyze the relation between similarity loss and feature-level cross-entropy loss. These two losses are essential for most deep learning methods, yet the relation between them is unclear: similarity loss helps obtain instance-level representations, while feature-level cross-entropy loss helps mine the similarity between two distinct images. We provide theoretical analyses and experiments showing that a suitable combination of the two losses achieves state-of-the-art results.
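
As a sketch of how such a combination might look, the PyTorch snippet below mixes a cosine-similarity loss between two augmented views with a feature-level cross-entropy term. This is one plausible reading, not the paper's exact formulation; `alpha` and the temperature are hypothetical knobs.

```python
import torch
import torch.nn.functional as F

def similarity_loss(z1, z2):
    # Negative cosine similarity between two augmented views (instance level).
    return -F.cosine_similarity(z1, z2, dim=-1).mean()

def feature_level_cross_entropy(z1, z2, temperature=0.1):
    # Cross-entropy between softmax distributions over feature dimensions;
    # one interpretation of a "feature-level" loss, not the paper's exact form.
    p = F.softmax(z1 / temperature, dim=-1)
    log_q = F.log_softmax(z2 / temperature, dim=-1)
    return -(p * log_q).sum(dim=-1).mean()

def combined_loss(z1, z2, alpha=0.5):
    # Weighted combination of the two losses; alpha is a hypothetical knob.
    return alpha * similarity_loss(z1, z2) + \
        (1 - alpha) * feature_level_cross_entropy(z1, z2)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)  # embeddings of two views
print(combined_loss(z1, z2))
```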

MDIA: A Benchmark for Multilingual Dialogue Generation in 46 Languages

Aug 27, 2022
Qingyu Zhang, Xiaoyu Shen, Ernie Chang, Jidong Ge, Pengke Chen

Owing to the lack of corpora for low-resource languages, current work on dialogue generation has mainly focused on English. In this paper, we present mDIA, the first large-scale multilingual benchmark for dialogue generation across low- to high-resource languages. It covers real-life conversations in 46 languages across 19 language families. We present baseline results obtained by fine-tuning the multilingual, non-dialogue-focused pre-trained model mT5 as well as the English-centric, dialogue-focused pre-trained chatbot DialoGPT. The results show that mT5-based models perform better on sacreBLEU and BERTScore but worse on diversity. Even though promising results are found in few-shot and zero-shot scenarios, there is a large gap between the generation quality in English and in other languages. We hope that the release of mDIA will encourage more work on multilingual dialogue generation and promote language diversity.

* The dataset and processing scripts are available in https://github.com/DoctorDream/mDIA 
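
For readers who want to reproduce the metric computation, here is a minimal sketch using the `sacrebleu` and `bert-score` packages. The toy hypotheses and references are invented, and the exact evaluation settings used for mDIA may differ.

```python
import sacrebleu  # pip install sacrebleu
from bert_score import score as bert_score  # pip install bert-score

# Toy hypotheses/references standing in for generated and gold responses.
hyps = ["Guten Morgen!", "Wie geht es dir?"]
refs = ["Guten Morgen.", "Wie geht's?"]

# Corpus-level sacreBLEU over the hypothesis/reference pairs.
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"sacreBLEU: {bleu.score:.2f}")

# BERTScore; `lang` selects a suitable underlying model for the language.
_, _, f1 = bert_score(hyps, refs, lang="de")
print(f"BERTScore F1: {f1.mean().item():.3f}")
```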

Neural Program Repair: Systems, Challenges and Solutions

Feb 22, 2022
Wenkang Zhong, Chuanyi Li, Jidong Ge, Bin Luo

Automated Program Repair (APR) aims to automatically fix bugs in source code. Recently, with advances in the Deep Learning (DL) field, Neural Program Repair (NPR) studies have been on the rise; they formulate APR as a translation task from buggy code to correct code and adopt neural networks based on the encoder-decoder architecture. Compared with other APR techniques, NPR approaches have a great advantage in applicability because they do not need any specification (i.e., a test suite). Although NPR has become a hot research direction, there is not yet an overview of this field. To help interested readers understand the architectures, challenges and corresponding solutions of existing NPR systems, we conduct a literature review of the latest studies in this paper. We begin by introducing the background knowledge of this field. Next, for clarity, we decompose the NPR procedure into a series of modules and explicate the various design choices for each module. Furthermore, we identify several challenges and discuss the effect of existing solutions. Finally, we conclude and suggest promising directions for future research.

* 9 pages, 2 figures 
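
Below is a minimal sketch of the translation framing described above, using a generic Hugging Face T5 checkpoint purely to show the input/output setup. An off-the-shelf `t5-small` has not been trained for repair, so it will not actually fix the bug; an NPR system would fine-tune such an encoder-decoder on (buggy, fixed) pairs.

```python
# pip install transformers torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A buggy snippet as the "source language" of the translation task.
buggy = "def add(a, b): return a - b"
inputs = tok("fix: " + buggy, return_tensors="pt")

# The decoder emits the "target language": a candidate fixed program.
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```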

Dependency Learning for Legal Judgment Prediction with a Unified Text-to-Text Transformer

Dec 13, 2021
Yunyun Huang, Xiaoyu Shen, Chuanyi Li, Jidong Ge, Bin Luo

Given the facts of a case, Legal Judgment Prediction (LJP) involves a series of sub-tasks such as predicting the violated law articles, charges and term of penalty. We propose leveraging a unified text-to-text Transformer for LJP, where the dependencies among sub-tasks can be naturally established within the auto-regressive decoder. Compared with previous works, it has three advantages: (1) it fits the pretraining pattern of masked language models and can thereby benefit from the semantic prompts of each sub-task rather than treating them as atomic labels; (2) it uses a single unified architecture, enabling full parameter sharing across all sub-tasks; and (3) it can incorporate both classification and generative sub-tasks. We show that this unified transformer, albeit pretrained on general-domain text, outperforms pretrained models tailored specifically to the legal domain. Through an extensive set of experiments, we find that the best order in which to capture dependencies differs from human intuition: the most reasonable logical order for humans can be sub-optimal for the model. We further include two auxiliary tasks, court view generation and article content prediction, and show that they not only improve prediction accuracy but also provide interpretable explanations for model outputs even when an error is made. With the best configuration, our model outperforms both the previous SOTA and a single-task version of the unified transformer by a large margin.

* The first two authors contributed equally 
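
One way to picture the unified text-to-text setup is to serialize all sub-task labels into a single target string, so the auto-regressive decoder conditions each sub-task on the ones generated before it. The field names, separator and order in this sketch are invented for illustration; the paper experiments with different sub-task orders.

```python
# Serialize LJP sub-task labels into one decoder target string.
def serialize(article, charge, term):
    return f"article: {article} ; charge: {charge} ; term: {term}"

# Recover the individual sub-task predictions from a generated string.
def parse(target):
    return dict(part.strip().split(": ", 1) for part in target.split(";"))

target = serialize("Article 264", "theft", "18 months")
print(target)
print(parse(target))
```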

AST-Transformer: Encoding Abstract Syntax Trees Efficiently for Code Summarization

Dec 02, 2021
Ze Tang, Chuanyi Li, Jidong Ge, Xiaoyu Shen, Zheling Zhu, Bin Luo

Code summarization aims to generate brief natural language descriptions for source code. As source code is highly structured and follows strict programming language grammars, its Abstract Syntax Tree (AST) is often leveraged to inform the encoder of this structural information. However, ASTs are usually much longer than the source code itself. Current approaches ignore this size limit and simply feed the whole linearized AST into the encoder. To address this problem, we propose AST-Transformer to efficiently encode tree-structured ASTs. Experiments show that AST-Transformer outperforms the state of the art by a substantial margin while reducing the computational complexity of the encoding process by $90\sim95\%$.
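
The size blow-up is easy to see with a small sketch: linearizing a Python AST by pre-order traversal yields far more tokens than the source code itself. This only illustrates the problem; AST-Transformer's tree-aware encoding is not reproduced here.

```python
import ast

def linearize(node):
    """Pre-order linearization of a Python AST into a node-type sequence."""
    tokens = [type(node).__name__]
    for child in ast.iter_child_nodes(node):
        tokens.extend(linearize(child))
    return tokens

code = "def add(a, b):\n    return a + b"
seq = linearize(ast.parse(code))
print(len(code.split()), "source tokens vs", len(seq), "AST nodes")
print(seq[:8])
```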

Learning Fine-grained Fact-Article Correspondence in Legal Cases

Apr 24, 2021
Jidong Ge, Yunyun Huang, Xiaoyu Shen, Chuanyi Li, Wei Hu, Bin Luo

Automatically recommending relevant law articles for a given legal case has attracted much attention, as it can largely free human labor from searching over the large database of laws. However, current research only supports coarse-grained recommendation, where all relevant articles are predicted as a whole without explaining which specific fact each article is relevant to. Since one case can comprise many supporting facts, traversing over them to verify the correctness of recommendation results can be time-consuming. We believe that learning the fine-grained correspondence between each single fact and the law articles is crucial for an accurate and trustworthy AI system. With this motivation, we perform a pioneering study and create a corpus with manually annotated fact-article correspondences. We treat the learning as a text-matching task and propose a multi-level matching network to address it. To help the model better digest the content of law articles, we parse articles into premise-conclusion pairs with a random forest. Experiments show that the parsed form yields better performance and that the resulting model surpasses other popular text-matching baselines. Furthermore, we compare with previous research and find that establishing fine-grained fact-article correspondences improves the recommendation accuracy by a large margin. Our best system reaches an F1 score of 96.3%, making it of great potential for practical use. It can also significantly boost the downstream task of legal decision prediction, increasing the F1 score by up to 12.7%.

* Code and dataset are available at https://github.com/gjdnju/MLMN 
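
As a point of reference for the text-matching formulation, here is a naive fact-article matching baseline using TF-IDF cosine similarity with scikit-learn. The texts are invented, and this simple baseline merely stands in for, and is far weaker than, the paper's multi-level matching network.

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

facts = ["The defendant took the victim's phone without permission."]
articles = [
    "Whoever steals property shall be sentenced to fixed-term imprisonment.",
    "Whoever intentionally injures another person shall bear liability.",
]

# Score every (fact, article) pair and pick the best article per fact.
vec = TfidfVectorizer().fit(facts + articles)
sims = cosine_similarity(vec.transform(facts), vec.transform(articles))
for fact_idx, row in enumerate(sims):
    best = row.argmax()
    print(f"fact {fact_idx} -> article {best} (score {row[best]:.2f})")
```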

Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse

Mar 22, 2021
Yuxiang Liu, Jidong Ge, Chuanyi Li, Jie Gui

Normalization operations are essential for state-of-the-art neural networks and enable us to train a network from scratch with a large learning rate (LR). We attempt to explain the real effect of Batch Normalization (BN) from the perspective of variance transmission by investigating the relationship between BN and Weight Normalization (WN). In this work, we demonstrate that the shift of the average gradient amplifies the variance of every convolutional (conv) layer. We propose Parametric Weights Standardization (PWS), a module for conv filters that is fast and robust to mini-batch size, to solve the shift of the average gradient. PWS provides a speed-up comparable to that of BN, requires less computation, and does not change the output of a conv layer. PWS enables the network to converge fast without normalizing the outputs. This result strengthens the case for the shift of the average gradient and explains why BN works from the perspective of variance transmission. The code and appendix will be made available at https://github.com/lyxzzz/PWSConv.

* This paper has been accepted by AAAI21 
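
To sketch the idea of standardizing conv filters with learnable parameters, here is a PyTorch layer that zero-centers and rescales each filter before convolution. This is only an approximation in the spirit of PWS, assuming a per-filter gain parameter; the exact formulation is in the paper and the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StandardizedConv2d(nn.Conv2d):
    """Conv layer that standardizes its filters before convolution.

    Each filter is zero-centered and scaled by its standard deviation,
    then multiplied by a learnable per-filter gain. A sketch only; the
    exact PWS formulation may differ.
    """
    def __init__(self, *args, eps=1e-5, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps
        self.gain = nn.Parameter(torch.ones(self.out_channels, 1, 1, 1))

    def forward(self, x):
        w = self.weight
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True)
        w = self.gain * (w - mean) / (std + self.eps)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

layer = StandardizedConv2d(3, 16, kernel_size=3, padding=1)
print(layer(torch.randn(2, 3, 8, 8)).shape)
```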

Synergy between Machine/Deep Learning and Software Engineering: How Far Are We?

Aug 12, 2020
Simin Wang, Liguo Huang, Jidong Ge, Tengfei Zhang, Haitao Feng, Ming Li, He Zhang, Vincent Ng

Since 2009, the deep learning revolution, triggered by the introduction of ImageNet, has stimulated synergy between Machine Learning (ML)/Deep Learning (DL) and Software Engineering (SE). Meanwhile, critical reviews have emerged suggesting that ML/DL should be used cautiously. To improve the quality (especially the applicability and generalizability) of ML/DL-related SE studies, and to stimulate and enhance future collaborations between SE/AI researchers and industry practitioners, we conducted a 10-year Systematic Literature Review (SLR) of 906 ML/DL-related SE papers published between 2009 and 2018. Our trend analysis demonstrates the mutual impact that ML/DL and SE have had on each other. At the same time, however, we observed a paucity of replicable and reproducible ML/DL-related SE studies and identified five factors that influence their replicability and reproducibility. To improve the applicability and generalizability of research results, we analyzed which ingredients in a study facilitate an understanding of why an ML/DL technique was selected for a specific SE problem. In addition, we identified unique trends in the impact of DL models on SE tasks, as well as five unique challenges that need to be met in order to better leverage DL to improve the productivity of SE tasks. Finally, we outline a roadmap that we believe can facilitate the transfer of ML/DL-based SE research results into real-world industry practice.
