Sudarshan Srinivasan

Leveraging Large Language Models to Extract Information on Substance Use Disorder Severity from Clinical Notes: A Zero-shot Learning Approach

Mar 18, 2024
Maria Mahbub, Gregory M. Dams, Sudarshan Srinivasan, Caitlin Rizy, Ioana Danciu, Jodie Trafton, Kathryn Knight

Dynamic Q&A of Clinical Documents with Large Language Models

Jan 19, 2024
Ran Elgedawy, Sudarshan Srinivasan, Ioana Danciu

Question-Answering System Extracts Information on Injection Drug Use from Clinical Progress Notes

May 15, 2023
Maria Mahbub, Ian Goethert, Ioana Danciu, Kathryn Knight, Sudarshan Srinivasan, Suzanne Tamang, Karine Rozenberg-Ben-Dror, Hugo Solares, Susana Martins, Edmon Begoli, Gregory D. Peterson

TACOS: Topology-Aware Collective Algorithm Synthesizer for Distributed Training

Apr 11, 2023
William Won, Midhilesh Elavazhagan, Sudarshan Srinivasan, Ajaya Durg, Swati Gupta, Tushar Krishna

ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale

Mar 24, 2023
William Won, Taekyung Heo, Saeed Rashidi, Srinivas Sridharan, Sudarshan Srinivasan, Tushar Krishna

BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task

Feb 26, 2022
Maria Mahbub, Sudarshan Srinivasan, Edmon Begoli, Gregory D Peterson

Themis: A Network Bandwidth-Aware Collective Scheduling Policy for Distributed Training of DL Models

Oct 09, 2021
Saeed Rashidi, William Won, Sudarshan Srinivasan, Srinivas Sridharan, Tushar Krishna

Exploring Multi-dimensional Hierarchical Network Topologies for Efficient Distributed Training of Trillion Parameter DL Models

Sep 24, 2021
William Won, Saeed Rashidi, Sudarshan Srinivasan, Tushar Krishna

The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations

Feb 23, 2021
Jeremiah Duncan, Fabian Fallas, Chris Gropp, Emily Herron, Maria Mahbub, Paula Olaya, Eduardo Ponce, Tabitha K. Samuel, Daniel Schultz, Sudarshan Srinivasan, Maofeng Tang, Viktor Zenkov, Quan Zhou, Edmon Begoli

Optimizing Deep Learning Recommender Systems' Training On CPU Cluster Architectures

May 10, 2020
Dhiraj Kalamkar, Evangelos Georganas, Sudarshan Srinivasan, Jianping Chen, Mikhail Shiryaev, Alexander Heinecke
