
Chunlin Li


Evaluating Large Language Models for Radiology Natural Language Processing

Jul 27, 2023
Zhengliang Liu, Tianyang Zhong, Yiwei Li, Yutong Zhang, Yi Pan, Zihao Zhao, Peixin Dong, Chao Cao, Yuxiao Liu, Peng Shu, Yaonai Wei, Zihao Wu, Chong Ma, Jiaqi Wang, Sheng Wang, Mengyue Zhou, Zuowei Jiang, Chunlin Li, Jason Holmes, Shaochen Xu, Lu Zhang, Haixing Dai, Kai Zhang, Lin Zhao, Yuanhao Chen, Xu Liu, Peilong Wang, Pingkun Yan, Jun Liu, Bao Ge, Lichao Sun, Dajiang Zhu, Xiang Li, Wei Liu, Xiaoyan Cai, Xintao Hu, Xi Jiang, Shu Zhang, Xin Zhang, Tuo Zhang, Shijie Zhao, Quanzheng Li, Hongtu Zhu, Dinggang Shen, Tianming Liu

The rise of large language models (LLMs) has marked a pivotal shift in the field of natural language processing (NLP). LLMs have revolutionized a multitude of domains, and they have made a significant impact in the medical field. Large language models are now more abundant than ever, and many of them are bilingual, proficient in both English and Chinese. However, a comprehensive evaluation of these models remains to be conducted, a gap that is especially apparent in radiology NLP. This study seeks to bridge this gap by critically evaluating thirty-two LLMs on the interpretation of radiology reports, a crucial component of radiology NLP. Specifically, we assess their ability to derive impressions from radiologic findings. The outcomes of this evaluation provide key insights into the performance, strengths, and weaknesses of these LLMs, informing their practical applications within the medical domain.
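To make the evaluation task concrete, below is a minimal sketch of a findings-to-impression run for a single report, assuming the OpenAI Python client and the rouge-score package; the report pair, prompt, candidate model, and metric are illustrative assumptions and do not reproduce the study's actual protocol.

```python
# Minimal sketch of the findings-to-impression task (assumed setup, not the
# paper's protocol). Requires: pip install openai rouge-score
from openai import OpenAI
from rouge_score import rouge_scorer

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical findings/impression pair; the study uses real radiology reports.
findings = ("The cardiomediastinal silhouette is within normal limits. "
            "No focal consolidation, pleural effusion, or pneumothorax.")
reference_impression = "No acute cardiopulmonary abnormality."

prompt = (
    "You are a radiologist. Given the FINDINGS section of a radiology report, "
    "write a concise IMPRESSION.\n\nFINDINGS:\n" + findings + "\n\nIMPRESSION:"
)

response = client.chat.completions.create(
    model="gpt-4",  # one of many candidate LLMs that could be compared
    messages=[{"role": "user", "content": prompt}],
    temperature=0.0,
)
generated = response.choices[0].message.content.strip()

# ROUGE-L is one common surface-overlap score for comparing the generated
# impression against the radiologist-written reference.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
print(generated)
print(scorer.score(reference_impression, generated))
```

Repeating this loop over a labeled set of reports, and swapping in each candidate model, yields the kind of head-to-head comparison the abstract describes.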

ENVIDR: Implicit Differentiable Renderer with Neural Environment Lighting

Mar 23, 2023
Ruofan Liang, Huiting Chen, Chunlin Li, Fan Chen, Selvakumar Panneer, Nandita Vijaykumar

Recent advances in neural rendering have shown great potential for reconstructing scenes from multi-view images. However, accurately representing objects with glossy surfaces remains a challenge for existing methods. In this work, we introduce ENVIDR, a rendering and modeling framework for high-quality rendering and reconstruction of surfaces with challenging specular reflections. To achieve this, we first propose a novel neural renderer with decomposed rendering components that learns the interaction between surfaces and environment lighting. This renderer is trained using existing physically based renderers and is decoupled from actual scene representations. We then propose an SDF-based neural surface model that leverages the learned neural renderer to represent general scenes. Our model additionally synthesizes the indirect illumination caused by inter-reflections from shiny surfaces by marching surface-reflected rays. We demonstrate that our method outperforms state-of-the-art methods on challenging shiny scenes, providing high-quality rendering of specular reflections while also enabling material editing and scene relighting.

* Project page: https://nexuslrf.github.io/ENVIDR/ 
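The pipeline the abstract describes, a neural SDF for geometry plus a separate learned renderer for shading, can be sketched roughly as below; the network sizes, inputs, and the fixed-step sphere tracer are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the ENVIDR code) of the two-part idea: an SDF network
# defines geometry, and a decoupled neural renderer maps surface properties
# (normal, view direction, roughness, ...) to outgoing radiance.
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, out_dim, hidden=256, depth=4):
    dims = [in_dim] + [hidden] * (depth - 1) + [out_dim]
    layers = []
    for a, b in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*layers[:-1])  # drop the final ReLU

sdf_net = mlp(3, 1)            # point -> signed distance (geometry)
renderer = mlp(3 + 3 + 1, 3)   # (normal, view dir, roughness) -> RGB (learned shading)

def sphere_trace(origin, direction, steps=64):
    """March each ray forward by the predicted signed distance until it (roughly) reaches the surface."""
    t = torch.zeros(origin.shape[0], 1)
    for _ in range(steps):
        t = t + sdf_net(origin + t * direction)
    return origin + t * direction

def shade(points, view_dir, roughness):
    """Query the learned renderer at surface points; normals come from the SDF gradient."""
    points = points.detach().requires_grad_(True)
    d = sdf_net(points)
    normal, = torch.autograd.grad(d.sum(), points, create_graph=True)
    normal = F.normalize(normal, dim=-1)
    return renderer(torch.cat([normal, view_dir, roughness], dim=-1))
```

Because the renderer consumes only local surface attributes rather than a specific scene, it can be pre-trained against a physically based renderer and reused across scenes, which is the decoupling the abstract emphasizes.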

Data-Adaptive Discriminative Feature Localization with Statistically Guaranteed Interpretation

Nov 18, 2022
Ben Dai, Xiaotong Shen, Lin Yee Chen, Chunlin Li, Wei Pan

In explainable artificial intelligence, discriminative feature localization is critical to revealing a black-box model's decision-making process from raw data to prediction. In this article, we use two real datasets, the MNIST handwritten digits and MIT-BIH electrocardiogram (ECG) signals, to motivate key characteristics of discriminative features, namely adaptiveness, predictive importance, and effectiveness. We then develop a localization framework based on adversarial attacks to effectively localize discriminative features. In contrast to existing heuristic methods, we also provide a statistical guarantee of the interpretability of the localized features by measuring a generalized partial $R^2$. We apply the proposed method to the MNIST and MIT-BIH datasets with a convolutional auto-encoder. On MNIST, the compact image regions localized by the proposed method are visually appealing; on MIT-BIH, the identified ECG features are biologically plausible and consistent with cardiac electrophysiological principles, while locating subtle anomalies in the QRS complex that may not be discernible to the naked eye. Overall, the proposed method compares favorably with state-of-the-art competitors. Accompanying this paper is a Python library, dnn-locate (https://dnn-locate.readthedocs.io/en/latest/), that implements the proposed approach.

* The Annals of Applied Statistics, 2022  
* 27 pages, 11 figures 
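As a rough illustration of the kind of quantity involved (not the dnn-locate API, whose actual interface is documented at the link above), one can measure how much a model's fit degrades when the localized features are masked out; the statistic below is a simple loss-ratio proxy, not the paper's generalized partial $R^2$.

```python
# Generic illustration (not the dnn-locate API): compare a classifier's loss on
# the original inputs with its loss after masking the localized features. A
# large relative increase suggests the masked features carry most of the
# discriminative information.
import torch
import torch.nn.functional as F

def masked_loss_ratio(model, x, y, feature_mask, baseline=0.0):
    """Relative loss increase when features flagged by `feature_mask` (a boolean
    tensor shaped like `x`) are replaced by a baseline value."""
    model.eval()
    with torch.no_grad():
        loss_full = F.cross_entropy(model(x), y)
        x_masked = torch.where(feature_mask, torch.full_like(x, baseline), x)
        loss_masked = F.cross_entropy(model(x_masked), y)
    # Near 0: the localized features were irrelevant; near 1: the fit collapses
    # without them. (Illustrative proxy only, not the generalized partial R^2.)
    return ((loss_masked - loss_full) / loss_masked).item()
```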

RankSEG: A Consistent Ranking-based Framework for Segmentation

Jun 27, 2022
Ben Dai, Chunlin Li

Segmentation has emerged as a fundamental task in computer vision and natural language processing, assigning a label to every pixel/feature to extract regions of interest from an image/text. To evaluate segmentation performance, the Dice and IoU metrics are used to measure the degree of overlap between the ground truth and the predicted segmentation. In this paper, we establish a theoretical foundation of segmentation with respect to the Dice/IoU metrics, including the Bayes rule and Dice/IoU-calibration, analogous to classification-calibration or Fisher consistency in classification. We prove that the existing thresholding-based framework with most operating losses is not consistent with respect to the Dice/IoU metrics and thus may lead to a suboptimal solution. To address this pitfall, we propose a novel consistent ranking-based framework, namely RankDice/RankIoU, inspired by plug-in rules of the Bayes segmentation rule. Three numerical algorithms with GPU parallel execution are developed to implement the proposed framework for large-scale and high-dimensional segmentation. We study the statistical properties of the proposed framework, showing that it is Dice-/IoU-calibrated and providing its excess risk bounds and rate of convergence. The numerical effectiveness of RankDice/mRankDice is demonstrated in various simulated examples and on the fine-annotated Cityscapes and Pascal VOC datasets with state-of-the-art deep learning architectures.

* 41 pages 
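The core contrast the abstract draws, thresholding versus ranking, can be seen in a toy one-dimensional example like the following; the rule for choosing how many pixels to keep is a simple volume heuristic here, not the RankDice algorithm, whose selection rule is derived from the expected Dice. The example is deliberately low-confidence so that few pixels clear the 0.5 threshold.

```python
# Toy contrast between 0.5-thresholding and a ranking-style rule that keeps the
# top-k scoring pixels (k chosen here by a simple heuristic, not RankDice).
import numpy as np

def dice(pred_mask, true_mask, eps=1e-8):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

rng = np.random.default_rng(0)
true_mask = np.zeros(100, dtype=bool)
true_mask[:20] = True  # 20 foreground pixels
# Predicted probabilities: foreground around 0.45, background around 0.1.
probs = np.clip(true_mask * 0.35 + rng.normal(0.1, 0.05, size=100), 0.0, 1.0)

# Thresholding rule: keep every pixel whose predicted probability exceeds 0.5.
# Here almost nothing clears the threshold, so the predicted mask is nearly empty.
thresh_mask = probs > 0.5

# Ranking rule: sort pixels by probability and keep the top-k, with k set from
# the predicted foreground volume (rounded sum of probabilities).
k = int(round(probs.sum()))
rank_mask = np.zeros_like(true_mask)
rank_mask[np.argsort(-probs)[:k]] = True

print("Dice (thresholding):", dice(thresh_mask, true_mask))
print("Dice (ranking):     ", dice(rank_mask, true_mask))
```

In this contrived case the ranking rule still recovers most of the region while thresholding at 0.5 nearly empties the prediction, which is the flavor of inconsistency the paper formalizes for threshold-based segmentation.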