Sanjeev Kumar Karn

Generation of Radiology Findings in Chest X-Ray by Leveraging Collaborative Knowledge

Jun 18, 2023
Manuela Daniela Danu, George Marica, Sanjeev Kumar Karn, Bogdan Georgescu, Awais Mansoor, Florin Ghesu, Lucian Mihai Itu, Constantin Suciu, Sasa Grbic, Oladimeji Farri, Dorin Comaniciu


Among the sub-sections of a typical radiology report, the Clinical Indications, Findings, and Impression often reflect important details about the health status of a patient. The information included in the Impression is also often covered in the Findings. While the Findings and Impression can be deduced by inspecting the image, the Clinical Indications often require additional context. The cognitive task of interpreting medical images remains the most critical and often time-consuming step in the radiology workflow. Rather than generating a complete radiology report end-to-end, in this paper we focus on generating the Findings from automated interpretation of medical images, specifically chest X-rays (CXRs). This work thus aims to reduce the workload of radiologists, who spend most of their time either writing or narrating the Findings. Unlike past research, which addresses radiology report generation as a single-step image-captioning task, we account for the complexity of interpreting CXR images and propose a two-step approach: (a) detecting the regions with abnormalities in the image, and (b) generating relevant text for those regions by employing a generative large language model (LLM). This two-step approach introduces a layer of interpretability and aligns the framework with the systematic reasoning that radiologists follow when reviewing a CXR.
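
As a rough illustration of this two-step design, the sketch below wires a stubbed abnormality detector to a prompt builder for a generative LLM. The `RegionFinding` structure, the canned detections, and the prompt wording are illustrative assumptions, not the paper's actual models or interfaces.

```python
# Minimal sketch of the two-step pipeline: (a) detect abnormal regions,
# (b) prompt a generative LLM to write the Findings for those regions.
from dataclasses import dataclass

@dataclass
class RegionFinding:
    label: str                        # abnormality class, e.g. "pleural effusion"
    bbox: tuple[int, int, int, int]   # (x, y, w, h) in image coordinates
    confidence: float

def detect_abnormal_regions(image_path: str) -> list[RegionFinding]:
    # Stand-in for step (a): a trained detection model would go here.
    return [RegionFinding("cardiomegaly", (120, 90, 210, 180), 0.91),
            RegionFinding("pleural effusion", (40, 260, 150, 120), 0.78)]

def build_findings_prompt(regions: list[RegionFinding]) -> str:
    # Step (b): serialize the detections into a prompt for the generative LLM.
    bullets = "\n".join(f"- {r.label} (confidence {r.confidence:.2f})"
                        for r in regions)
    return ("Write the Findings section of a chest X-ray report that "
            "describes the following detected abnormalities:\n" + bullets)

if __name__ == "__main__":
    regions = detect_abnormal_regions("cxr_0001.png")  # hypothetical image path
    print(build_findings_prompt(regions))
    # The resulting prompt would then be sent to the LLM of choice.
```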

* Information Technology and Quantitative Management (ITQM 2023)

shs-nlp at RadSum23: Domain-Adaptive Pre-training of Instruction-tuned LLMs for Radiology Report Impression Generation

Jun 05, 2023
Sanjeev Kumar Karn, Rikhiya Ghosh, Kusuma P, Oladimeji Farri


Instruction-tuned generative large language models (LLMs) like ChatGPT and Bloomz possess excellent generalization abilities, but they face limitations in understanding radiology reports, particularly in the task of generating the IMPRESSIONS section from the FINDINGS section. They tend to generate either verbose or incomplete IMPRESSIONS, mainly due to insufficient exposure to medical text data during training. We present a system that leverages large-scale medical text data for domain-adaptive pre-training of instruction-tuned LLMs, enhancing their medical knowledge and performance on specific medical tasks. We show that this system performs better in a zero-shot setting than a number of pretrain-and-finetune adaptation methods on the IMPRESSIONS generation task, and ranks 1st among participating systems in Task 1B: Radiology Report Summarization at the BioNLP 2023 workshop.
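
A minimal sketch of what domain-adaptive (continued) pretraining of an instruction-tuned causal LLM can look like with Hugging Face Transformers. The checkpoint name, corpus path, and hyperparameters below are placeholders, not the system's actual configuration.

```python
# Continued causal-LM pretraining of an instruction-tuned checkpoint on a
# plain-text medical corpus (one document per line; path is assumed).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigscience/bloomz-560m"  # small stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

corpus = load_dataset("text", data_files={"train": "radiology_reports.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloomz-radiology",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False -> standard next-token (causal LM) objective.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```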

* BioNLP 2023, Co-located with ACL 2023  
* 1st Place in Task 1B: Radiology Report Summarization at BioNLP 2023 

RadLing: Towards Efficient Radiology Report Understanding

Jun 04, 2023
Rikhiya Ghosh, Sanjeev Kumar Karn, Manuela Daniela Danu, Larisa Micu, Ramya Vunikili, Oladimeji Farri


Most natural language tasks in the radiology domain use language models pre-trained on biomedical corpora. Few pretrained language models are trained specifically for radiology, and fewer still have been trained in a low-data setting and gone on to produce comparable results on fine-tuning tasks. We present RadLing, a continuously pretrained language model based on the ELECTRA-small (Clark et al., 2020) architecture and trained on over 500K radiology reports, that can compete with state-of-the-art results on fine-tuning tasks in the radiology domain. Our main contribution is knowledge-aware masking, a taxonomic-knowledge-assisted pretraining task that dynamically masks tokens to inject domain knowledge during pretraining. In addition, we introduce a knowledge-base-aided vocabulary extension that adapts the general tokenization vocabulary to the radiology domain.
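
A toy sketch of the knowledge-aware masking idea: tokens that match a taxonomy of radiology concepts are masked with higher probability than ordinary tokens, so pretraining concentrates on domain terms. The term set and probabilities below are invented for illustration, not RadLing's actual taxonomy or schedule.

```python
# Preferentially mask taxonomy terms so the model must predict domain concepts.
import random

TAXONOMY_TERMS = {"effusion", "cardiomegaly", "pneumothorax", "opacity"}  # toy set

def knowledge_aware_mask(tokens: list[str], mask_token: str = "[MASK]",
                         p_concept: float = 0.5, p_other: float = 0.1) -> list[str]:
    """Mask in-taxonomy tokens with probability p_concept, others with p_other."""
    masked = []
    for tok in tokens:
        p = p_concept if tok.lower() in TAXONOMY_TERMS else p_other
        masked.append(mask_token if random.random() < p else tok)
    return masked

print(knowledge_aware_mask("small right pleural effusion and cardiomegaly".split()))
```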

* 61st Annual Meeting of the Association for Computational Linguistics (ACL), July 9-14, 2023, Toronto, Canada  
* Association for Computational Linguistics (ACL), 2023 

Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization

Mar 15, 2022
Sanjeev Kumar Karn, Ning Liu, Hinrich Schuetze, Oladimeji Farri


The IMPRESSIONS section of a radiology report about an imaging study is a summary of the radiologist's reasoning and conclusions, and it also aids the referring physician in confirming or excluding certain diagnoses. A cascade of tasks is required to automatically generate an abstractive summary of the typical information-rich radiology report: acquiring salient content from the report and generating a concise, easily consumable IMPRESSIONS section. Prior research on radiology report summarization has focused on single-step end-to-end models, which subsume the task of salient content acquisition. To fully explore the cascade structure and explainability of radiology report summarization, we introduce two innovations. First, we design a two-step approach: extractive summarization followed by abstractive summarization. Second, we further break down the extractive part into two independent tasks: extraction of salient (1) sentences and (2) keywords. Experiments on a publicly available radiology report dataset show that our novel approach leads to a more precise summary than single-step and two-step-with-single-extractive-process baselines, with an overall F1-score improvement of 3-4%.
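
The sketch below mirrors this multi-step structure with deliberately naive heuristics: two independent extractive passes (salient sentences and salient keywords) feed one stubbed abstractive step. The scoring rules stand in for the paper's learned actor-critic agents and are not its method.

```python
# Two independent extractive passes feeding one abstractive step.
from collections import Counter

def extract_sentences(report: list[str], k: int = 2) -> list[str]:
    # Toy salience score: longer sentences first; a learned agent goes here.
    return sorted(report, key=len, reverse=True)[:k]

def extract_keywords(report: list[str], k: int = 5) -> list[str]:
    # Toy keyword pass: most frequent long words; a learned agent goes here.
    words = [w.strip(".,").lower() for s in report for w in s.split() if len(w) > 5]
    return [w for w, _ in Counter(words).most_common(k)]

def abstract_summarize(sentences: list[str], keywords: list[str]) -> str:
    # Stand-in for the abstractive model conditioned on both extractive outputs.
    return (f"IMPRESSIONS drafted from {len(sentences)} salient sentences; "
            f"keywords: {', '.join(keywords)}")

findings = ["Heart size is mildly enlarged.",
            "There is a small right pleural effusion.",
            "No pneumothorax is seen."]
print(abstract_summarize(extract_sentences(findings), extract_keywords(findings)))
```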

* 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland, 2022  
* Accepted at 60th Annual Meeting of the Association for Computational Linguistics 2022 Main Conference 

Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data

Mar 08, 2021
Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schuetze


Interleaved texts, in which posts belonging to different threads occur in sequence, are common in online chats, and it can be time-consuming to obtain a quick overview of the discussions. Existing systems first disentangle the posts by thread and then extract a summary from each thread. A major issue with such systems is error propagation from the disentanglement component. While an end-to-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an end-to-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that, after fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a traditional two-step system by 22%. We also compare against transformer models and observe that pretraining both the encoder and decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset.
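
A small sketch of how synthetic interleaved text can be constructed for such pretraining: posts from separate single-thread sources are merged into one channel at random while each thread keeps its internal order. The thread contents below are toy examples.

```python
# Build a synthetic interleaved channel from single-thread post sequences.
import random

def interleave(threads: list[list[str]], seed: int = 0) -> list[tuple[int, str]]:
    """Merge threads into one channel, preserving order within each thread."""
    rng = random.Random(seed)
    cursors = [0] * len(threads)
    out = []
    while any(c < len(t) for c, t in zip(cursors, threads)):
        # Pick any thread that still has posts left, then emit its next post.
        i = rng.choice([j for j, t in enumerate(threads) if cursors[j] < len(t)])
        out.append((i, threads[i][cursors[i]]))  # (thread id, post)
        cursors[i] += 1
    return out

threads = [["A1: deployment failed", "A2: rolling back now"],
           ["B1: lunch at noon?", "B2: sure, see you there"]]
for thread_id, post in interleave(threads):
    print(thread_id, post)
```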

* Adapt-NLP: The Second Workshop on Domain Adaptation for NLP 

Generating Multi-Sentence Abstractive Summaries of Interleaved Texts

Jun 05, 2019
Sanjeev Kumar Karn, Francine Chen, Yan-Ying Chen, Ulli Waltinger, Hinrich Schütze


In multi-participant postings, as in online chat conversations, several conversations or topic threads may take place concurrently. This makes it difficult for readers reviewing the postings not only to follow the discussions but also to quickly identify their essence. A two-step process, disentanglement of interleaved posts followed by summarization of each thread, addresses the issue, but disentanglement errors propagate to the summarization step, degrading the overall performance. To address this, we propose an end-to-end trainable encoder-decoder network for summarizing interleaved posts. The interleaved posts are encoded hierarchically, i.e., word-to-word (words in a post) followed by post-to-post (posts in a channel). The decoder also generates summaries hierarchically, thread-to-thread (generating thread representations) followed by word-to-word (generating summary words). Additionally, we propose a hierarchical attention mechanism for interleaved text. Overall, our end-to-end trainable hierarchical framework improves performance over a sequence-to-sequence framework by 8% on a synthetic interleaved-text dataset.
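
A toy PyTorch sketch of the hierarchical encoding order described above (word-to-word within a post, then post-to-post across the channel); layer sizes and the GRU choice are illustrative, not the paper's exact architecture.

```python
# Hierarchical encoder: a word-level GRU summarizes each post into a vector,
# then a post-level GRU runs over the sequence of post vectors.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size=1000, emb=32, hid=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.word_rnn = nn.GRU(emb, hid, batch_first=True)  # words within a post
        self.post_rnn = nn.GRU(hid, hid, batch_first=True)  # posts within a channel

    def forward(self, channel):
        # channel: (num_posts, max_words) token ids; posts act as the batch
        # for the word-level GRU, whose final state summarizes each post.
        _, post_vecs = self.word_rnn(self.embed(channel))   # (1, num_posts, hid)
        # Treat the post vectors as one sequence (batch of 1 channel).
        post_states, _ = self.post_rnn(post_vecs)           # (1, num_posts, hid)
        return post_states

enc = HierarchicalEncoder()
channel = torch.randint(0, 1000, (5, 12))  # 5 posts, 12 tokens each
print(enc(channel).shape)                  # torch.Size([1, 5, 64])
```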


News Article Teaser Tweets and How to Generate Them

Jul 30, 2018
Sanjeev Kumar Karn, Mark Buckley, Ulli Waltinger, Hinrich Schütze


We define the task of teaser generation and provide an evaluation benchmark and baseline systems for it. A teaser is a short reading suggestion for an article that is illustrative and includes curiosity-arousing elements to entice potential readers to read the news item. Teasers are one of the main vehicles for transmitting news to social media users. We compile a novel dataset of teasers by systematically accumulating tweets and selecting those that conform to the teaser definition. We compare a number of neural abstractive architectures on the task of teaser generation; the overall best-performing system is the pointer-network seq2seq model of See et al. (2017).

* 11 pages 