Hong Lu

Improved Prognostic Prediction of Pancreatic Cancer Using Multi-Phase CT by Integrating Neural Distance and Texture-Aware Transformer

Aug 01, 2023
Hexin Dong, Jiawen Yao, Yuxing Tang, Mingze Yuan, Yingda Xia, Jian Zhou, Hong Lu, Jingren Zhou, Bin Dong, Le Lu, Li Zhang, Zaiyi Liu, Yu Shi, Ling Zhang

Pancreatic ductal adenocarcinoma (PDAC) is a highly lethal cancer in which tumor-vascular involvement greatly affects resectability and, thus, patients' overall survival. However, current prognostic prediction methods fail to explicitly and accurately investigate the relationships between the tumor and nearby important vessels. This paper proposes a novel learnable neural distance that describes the precise relationship between the tumor and vessels in CT images across different patients, adopting it as a major feature for prognosis prediction. In addition, unlike existing models that use CNNs or LSTMs to exploit tumor enhancement patterns on dynamic contrast-enhanced CT imaging, we improve the extraction of dynamic tumor-related texture features in multi-phase contrast-enhanced CT by fusing local and global features with CNN and transformer modules, further enhancing the features extracted across multi-phase CT images. We extensively evaluated and compared the proposed method with existing methods on a multi-center (n=4) dataset of 1,070 patients with PDAC, and statistical analysis confirmed its clinical effectiveness on an external test set drawn from three centers. The developed risk marker was the strongest predictor of overall survival among preoperative factors, and it has the potential to be combined with established clinical factors to select patients at higher risk who might benefit from neoadjuvant therapy.
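
Below is a minimal sketch of the learnable neural distance idea, assuming the tumor and a vessel are each represented as 3-D surface point sets sampled from CT segmentations; the k-nearest-pair pooling and the MLP head are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class NeuralDistance(nn.Module):
    """Learnable distance between two point sets (e.g., tumor and vessel
    surface points). Sketch only; not the paper's exact architecture."""

    def __init__(self, k: int = 8, hidden: int = 64):
        super().__init__()
        self.k = k  # number of tightest tumor-vessel contacts to keep
        self.mlp = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, tumor_pts: torch.Tensor, vessel_pts: torch.Tensor) -> torch.Tensor:
        d = torch.cdist(tumor_pts, vessel_pts)        # (N, M) pairwise distances
        nearest = d.min(dim=1).values                 # closest vessel point per tumor point
        contacts = torch.topk(nearest, self.k, largest=False).values
        return self.mlp(contacts)                     # scalar learnable "neural distance"

# Hypothetical usage with 200 tumor and 300 vessel surface points.
risk_feature = NeuralDistance()(torch.randn(200, 3), torch.randn(300, 3))
```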

* MICCAI 2023 

Utilizing Large Language Models for Natural Interface to Pharmacology Databases

Jul 26, 2023
Hong Lu, Chuan Li, Yinheng Li, Jie Zhao

The drug development process necessitates that pharmacologists undertake various tasks, such as reviewing literature, formulating hypotheses, designing experiments, and interpreting results. Each stage requires accessing and querying vast amounts of information. In this abstract, we introduce a Large Language Model (LLM)-based Natural Language Interface designed to interact with structured information stored in databases. Our experiments demonstrate the feasibility and effectiveness of the proposed framework. This framework can generalize to query a wide range of pharmaceutical data and knowledge bases.
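
Below is a minimal sketch of how such an LLM-based natural language interface can be wired to a structured database; the table schema, the prompt, and the call_llm placeholder are hypothetical, not the paper's implementation.

```python
import sqlite3

SCHEMA = "drugs(id, name, target, indication)"  # hypothetical pharmacology table

def call_llm(prompt: str) -> str:
    """Placeholder for any LLM completion API; the model and endpoint are
    assumptions, not part of the paper."""
    raise NotImplementedError

def answer(question: str, db: sqlite3.Connection) -> list:
    """Translate a natural language question into SQL, then execute it."""
    prompt = (f"Given the SQLite table {SCHEMA}, write one SQL query that "
              f"answers: {question}\nReturn only the SQL.")
    sql = call_llm(prompt)
    return db.execute(sql).fetchall()  # in practice, validate the SQL before running it
```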

* BIOKDD 2023 abstract track 

A deep local attention network for pre-operative lymph node metastasis prediction in pancreatic cancer via multiphase CT imaging

Jan 04, 2023
Zhilin Zheng, Xu Fang, Jiawen Yao, Mengmeng Zhu, Le Lu, Lingyun Huang, Jing Xiao, Yu Shi, Hong Lu, Jianping Lu, Ling Zhang, Chengwei Shao, Yun Bian

Lymph node (LN) metastasis status is one of the most critical prognostic and cancer-staging factors for patients with resectable pancreatic ductal adenocarcinoma (PDAC), and more generally for any type of solid malignant tumor. Preoperative prediction of LN metastasis from non-invasive CT imaging is highly desirable, as it can directly guide subsequent neoadjuvant treatment decisions and surgical planning. Most studies capture only the tumor characteristics in CT imaging to implicitly infer LN metastasis, and very few works exploit the LNs' direct CT imaging information. To the best of our knowledge, this is the first work to propose a fully automated LN segmentation and identification network that directly facilitates the LN metastasis status prediction task. LN segmentation/detection is nevertheless very challenging, since LNs are easily confused with other hard-negative anatomical structures (e.g., vessels) in radiological images. We exploit the anatomical spatial context priors of pancreatic LN locations by generating a guiding attention map from related organs and vessels to assist segmentation and infer LN status. As such, LN segmentation is impelled to focus on regions that are anatomically adjacent or plausible with respect to the specific organs and vessels. The metastasized-LN identification network is trained to classify segmented LN instances as positive or negative by reusing the segmentation network as a pre-trained backbone and appending a new classification head. More importantly, we develop an LN metastasis status prediction network that combines the patient-wise aggregation results of LN segmentation/identification with deep imaging features extracted from the tumor region. Extensive quantitative nested five-fold cross-validation is conducted on a discovery dataset of 749 patients with PDAC.
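
Below is a minimal sketch of the anatomy-guided attention idea, assuming the related organ and vessel segmentations are available as a soft mask; the box smoothing and residual gating are illustrative stand-ins for the paper's learned guiding attention map.

```python
import torch
import torch.nn.functional as F

def anatomical_attention(features: torch.Tensor, organ_mask: torch.Tensor) -> torch.Tensor:
    """Gate CNN features with a spatial prior derived from nearby organ and
    vessel masks so LN predictions concentrate on anatomically plausible
    regions. features: (N, C, D, H, W); organ_mask: (N, 1, D, H, W). Sketch only."""
    prior = F.avg_pool3d(organ_mask, kernel_size=5, stride=1, padding=2)  # soften mask borders
    return features * (1.0 + prior)  # residual-style gating keeps off-prior signal alive

gated = anatomical_attention(torch.randn(1, 8, 16, 32, 32), torch.zeros(1, 1, 16, 32, 32))
```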

* 14 pages, 5 figures 

Interactive Data Analysis with Next-step Natural Language Query Recommendation

Jan 13, 2022
Xingbo Wang, Furui Cheng, Yong Wang, Ke Xu, Jiang Long, Hong Lu, Huamin Qu

Natural language interfaces (NLIs) provide users with a convenient way to interactively analyze data through natural language queries. Nevertheless, interactive data analysis is a demanding process, especially for novice data analysts. When exploring large and complex datasets from different domains, data analysts do not necessarily have sufficient knowledge about the data and application domains, which makes it difficult for them to efficiently formulate a series of queries and extensively derive desirable data insights. In this paper, we develop an NLI with a step-wise query recommendation module to assist users in choosing appropriate next-step exploration actions. The system adopts a data-driven approach to generate step-wise, semantically relevant, and context-aware query suggestions for application domains of users' interest based on their query logs. The system also helps users organize query histories and results into a dashboard to communicate the discovered data insights. A comparative user study shows that our system facilitates a more effective and systematic data analysis process than a baseline without the recommendation module.
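
Below is a minimal data-driven sketch of recommending the next query from logs; the actual module is semantic and context-aware, so this bigram frequency model only illustrates the log-driven idea, and the example sessions are invented.

```python
from collections import Counter, defaultdict

class NextQueryRecommender:
    """Bigram model over logged query sequences; sketch only."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def fit(self, sessions):
        for session in sessions:                      # each session: list of queries
            for prev, nxt in zip(session, session[1:]):
                self.transitions[prev][nxt] += 1

    def recommend(self, last_query, k=3):
        return [q for q, _ in self.transitions[last_query].most_common(k)]

rec = NextQueryRecommender()
rec.fit([["show sales by year", "filter to 2021", "break down by region"],
         ["show sales by year", "break down by region"]])
print(rec.recommend("show sales by year"))  # most frequent follow-up queries first
```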

* 21 pages, 6 figures 

TRNR: Task-Driven Image Rain and Noise Removal with a Few Images Based on Patch Analysis

Dec 03, 2021
Wu Ran, Bohong Yang, Peirong Ma, Hong Lu

The recent prosperity of learning-based image rain and noise removal is mainly due to well-designed neural network architectures and large labeled datasets. However, we find that current image rain and noise removal methods utilize images inefficiently. To alleviate the reliance on large labeled datasets, we propose task-driven image rain and noise removal (TRNR) based on an introduced patch analysis strategy. The patch analysis strategy provides image patches with various spatial and statistical properties for training and has been verified to increase image utilization. Furthermore, the patch analysis strategy motivates us to treat image rain and noise removal as task-driven rather than data-driven learning, so we introduce the N-frequency-K-shot learning task for TRNR. Each N-frequency-K-shot learning task is based on a tiny dataset containing NK image patches sampled by the patch analysis strategy. TRNR enables neural networks to learn from abundant N-frequency-K-shot learning tasks rather than from large amounts of data. To verify the effectiveness of TRNR, we build a light Multi-Scale Residual Network (MSResNet) with about 0.9M parameters for image rain removal and a simple ResNet with about 1.2M parameters, dubbed DNNet, for blind Gaussian noise removal with a few images (for example, 20.0% of the Rain100H training set). Experimental results demonstrate that TRNR enables MSResNet to learn better from fewer images. Moreover, MSResNet and DNNet trained with TRNR outperform most recent deep learning methods trained data-driven on large labeled datasets. These results confirm the effectiveness and superiority of the proposed TRNR. The code for TRNR will be made public soon.
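
Below is a minimal sketch of assembling one N-frequency-K-shot task, assuming the patch analysis strategy has already grouped patches into buckets with distinct spatial and statistical properties; the bucketing and names are hypothetical.

```python
import random

def sample_task(patch_buckets: dict, n: int, k: int) -> list:
    """Build one N-frequency-K-shot task: pick N buckets and K patches from
    each, yielding a tiny N*K-patch dataset. Sketch only."""
    chosen = random.sample(list(patch_buckets.values()), n)
    return [patch for bucket in chosen for patch in random.sample(bucket, k)]

# Hypothetical buckets keyed by a patch statistic (e.g., gradient energy).
buckets = {i: [f"patch_{i}_{j}" for j in range(50)] for i in range(10)}
task = sample_task(buckets, n=4, k=5)  # 20 patches per learning task
```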

* 13 pages 

ACNet: Approaching-and-Centralizing Network for Zero-Shot Sketch-Based Image Retrieval

Nov 24, 2021
Hao Ren, Ziqiang Zheng, Yang Wu, Hong Lu, Yang Yang, Sai-Kit Yeung

The huge domain gap between sketches and photos and the highly abstract representation of sketches pose challenges for sketch-based image retrieval (SBIR). Zero-shot sketch-based image retrieval (ZS-SBIR) is more generic and practical but poses an even greater challenge because of the additional knowledge gap between seen and unseen categories. To simultaneously mitigate both gaps, we propose an Approaching-and-Centralizing Network (termed "ACNet") to jointly optimize sketch-to-photo synthesis and image retrieval. The retrieval module guides the synthesis module to generate large amounts of diverse photo-like images that gradually approach the photo domain, and thus better serve the retrieval module in learning domain-agnostic representations and category-agnostic common knowledge for generalizing to unseen categories. These diverse images generated with retrieval guidance effectively alleviate the overfitting problem caused by concrete, category-specific training samples with high gradients. We also find that a proxy-based NormSoftmax loss is effective in the zero-shot setting because its centralizing effect stabilizes our joint training and promotes generalization to unseen categories. Our approach is simple yet effective: it achieves state-of-the-art performance on two widely used ZS-SBIR datasets and surpasses previous methods by a large margin.
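
For the retrieval objective, a proxy-based NormSoftmax loss reduces to cosine similarities between L2-normalized embeddings and learnable per-class proxies, scaled and fed to cross-entropy; below is a minimal sketch where the temperature value and dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

def norm_softmax_loss(embeddings, labels, proxies, temperature=0.05):
    """Proxy-based NormSoftmax: cosine similarity to class proxies scaled by
    a temperature, then cross-entropy. Sketch only."""
    e = F.normalize(embeddings, dim=1)   # (B, D) unit-length embeddings
    p = F.normalize(proxies, dim=1)      # (num_classes, D) unit-length proxies
    logits = e @ p.t() / temperature     # (B, num_classes) scaled cosine similarities
    return F.cross_entropy(logits, labels)

proxies = torch.nn.Parameter(torch.randn(100, 128))  # one proxy per seen class
loss = norm_softmax_loss(torch.randn(32, 128), torch.randint(0, 100, (32,)), proxies)
```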

Evaluating Generalization Ability of Convolutional Neural Networks and Capsule Networks for Image Classification via Top-2 Classification

Jan 29, 2019
Hao Ren, Jianlin Su, Hong Lu

Image classification is a challenging problem that aims to identify the category of an object in an image. In recent years, deep Convolutional Neural Networks (CNNs) have been applied to this task with impressive improvements. However, research has shown that the output of CNNs can easily be altered by adding relatively small perturbations to the input image, such as modifying a few pixels. Capsule Networks (CapsNets) were recently proposed to help eliminate this limitation. Experiments on the MNIST dataset revealed that capsules can characterize the features of objects better than CNNs, but it is hard to find a suitable quantitative method to compare the generalization ability of CNNs and CapsNets. In this paper, we propose a new image classification task, called Top-2 classification, to evaluate the generalization ability of CNNs and CapsNets. The models are trained on single-label image samples, as in the traditional image classification task. At test time, we randomly concatenate two test image samples with different labels and then use the trained models to predict the top-2 labels on these unseen, newly created two-label image samples. This task provides precise quantitative results for comparing the generalization ability of CNNs and CapsNets. Because CapsNets use a Full-Connectivity (FC) mechanism among all capsules, they require many parameters. To reduce the number of parameters, we introduce a Parameter-Sharing (PS) mechanism between capsules. Experiments on five widely used benchmark image datasets demonstrate that the method significantly reduces the number of parameters without losing effectiveness in extracting features. Furthermore, on the Top-2 classification task, the proposed PS CapsNets achieve impressively higher accuracy than traditional CNNs and FC CapsNets by a large margin.
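
Below is a minimal sketch of the Top-2 evaluation protocol; stitching the two test images along the width is an assumption about the concatenation layout.

```python
import torch

def top2_accuracy(model, x1, y1, x2, y2):
    """Concatenate pairs of single-label test images with different labels and
    check whether the model's two highest-scoring classes match the two true
    labels. x1, x2: (B, C, H, W); y1, y2: (B,). Sketch only."""
    assert bool((y1 != y2).all()), "paired samples must carry different labels"
    x = torch.cat([x1, x2], dim=-1)             # stitch along image width
    top2 = model(x).topk(2, dim=1).indices      # predicted top-2 labels, (B, 2)
    truth = torch.stack([y1, y2], dim=1)        # ground-truth label pairs, (B, 2)
    hits = (top2.sort(dim=1).values == truth.sort(dim=1).values).all(dim=1)
    return hits.float().mean().item()
```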

Compositional coding capsule network with k-means routing for text classification

Oct 29, 2018
Hao Ren, Hong Lu

Text classification is a challenging problem that aims to identify the category of a text. Recently, Capsule Networks (CapsNets) have been proposed for image classification, and it has been shown that CapsNets have several advantages over Convolutional Neural Networks (CNNs); however, their validity in the text domain has been less explored. An effective method named deep compositional code learning has been proposed lately, which can greatly reduce the number of word-embedding parameters without any significant sacrifice in performance. In this paper, we introduce the Compositional Coding (CC) mechanism between capsules and propose a new routing algorithm based on k-means clustering. Experiments conducted on eight challenging text classification datasets show that the proposed method achieves competitive accuracy compared to the state-of-the-art approach with significantly fewer parameters.
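
Below is a minimal sketch of k-means-style routing between capsule layers: lower-capsule predictions are softly assigned to upper capsules by similarity to the current cluster centers, which are then re-estimated; the cosine similarity, softmax assignment, and mean initialization are assumptions about the details.

```python
import torch
import torch.nn.functional as F

def kmeans_routing(u_hat: torch.Tensor, num_iters: int = 3) -> torch.Tensor:
    """u_hat: (B, n_low, n_high, d) predictions from lower-level capsules.
    Returns upper capsule outputs of shape (B, n_high, d). Sketch only."""
    v = u_hat.mean(dim=1)  # initialize cluster centers from the mean prediction
    for _ in range(num_iters):
        sim = F.cosine_similarity(u_hat, v.unsqueeze(1), dim=-1)  # (B, n_low, n_high)
        c = sim.softmax(dim=2).unsqueeze(-1)                      # soft cluster assignments
        v = (c * u_hat).sum(dim=1) / c.sum(dim=1)                 # re-estimate centers
    return v

upper = kmeans_routing(torch.randn(2, 32, 10, 16))  # 32 lower -> 10 upper capsules
```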
