
Sudarshan Srinivasan

Demystifying Platform Requirements for Diverse LLM Inference Use Cases

Jun 03, 2024

Leveraging Large Language Models to Extract Information on Substance Use Disorder Severity from Clinical Notes: A Zero-shot Learning Approach

Mar 18, 2024

Dynamic Q&A of Clinical Documents with Large Language Models

Jan 19, 2024

Question-Answering System Extracts Information on Injection Drug Use from Clinical Progress Notes

May 15, 2023

TACOS: Topology-Aware Collective Algorithm Synthesizer for Distributed Training

Apr 11, 2023

ASTRA-sim2.0: Modeling Hierarchical Networks and Disaggregated Systems for Large-model Training at Scale

Mar 24, 2023

BioADAPT-MRC: Adversarial Learning-based Domain Adaptation Improves Biomedical Machine Reading Comprehension Task

Feb 26, 2022

Themis: A Network Bandwidth-Aware Collective Scheduling Policy for Distributed Training of DL Models

Oct 09, 2021

Exploring Multi-dimensional Hierarchical Network Topologies for Efficient Distributed Training of Trillion Parameter DL Models

Sep 24, 2021

The Sensitivity of Word Embeddings-based Author Detection Models to Semantic-preserving Adversarial Perturbations

Feb 23, 2021