Matthew B. A. McDermott

Harvard Medical School

ACES: Automatic Cohort Extraction System for Event-Stream Datasets

Jun 28, 2024

A Closer Look at AUROC and AUPRC under Class Imbalance

Jan 11, 2024

Event Stream GPT: A Data Pre-processing and Modeling Library for Generative, Pre-trained Transformers over Continuous-time Sequences of Complex Events

Jun 21, 2023

A collection of the accepted abstracts for the Machine Learning for Health symposium 2021

Nov 30, 2021

Rethinking Relational Encoding in Language Model: Pre-Training for General Sequences

Mar 18, 2021

Adversarial Contrastive Pre-training for Protein Sequences

Jan 31, 2021

ML4H Abstract Track 2020

Nov 19, 2020

A Comprehensive Evaluation of Multi-task Learning and Multi-task Pre-training on EHR Time-series Data

Jul 20, 2020

CheXpert++: Approximating the CheXpert Labeler for Speed, Differentiability, and Probabilistic Output

Jun 26, 2020

ML4H Abstract Track 2019

Feb 05, 2020