Matthew B. A. McDermott

Harvard Medical School

A Closer Look at AUROC and AUPRC under Class Imbalance

Jan 11, 2024
Matthew B. A. McDermott, Lasse Hyldig Hansen, Haoran Zhang, Giovanni Angelotti, Jack Gallifant

Event Stream GPT: A Data Pre-processing and Modeling Library for Generative, Pre-trained Transformers over Continuous-time Sequences of Complex Events

Jun 21, 2023
Matthew B. A. McDermott, Bret Nestor, Peniel Argaw, Isaac Kohane

A collection of the accepted abstracts for the Machine Learning for Health (ML4H) symposium 2021

Nov 30, 2021
Fabian Falck, Yuyin Zhou, Emma Rocheteau, Liyue Shen, Luis Oala, Girmaw Abebe, Subhrajit Roy, Stephen Pfohl, Emily Alsentzer, Matthew B. A. McDermott

Rethinking Relational Encoding in Language Model: Pre-Training for General Sequences

Mar 18, 2021
Matthew B. A. McDermott, Brendan Yap, Peter Szolovits, Marinka Zitnik

Adversarial Contrastive Pre-training for Protein Sequences

Jan 31, 2021
Matthew B. A. McDermott, Brendan Yap, Harry Hsu, Di Jin, Peter Szolovits

ML4H Abstract Track 2020

Nov 19, 2020
Emily Alsentzer, Matthew B. A. McDermott, Fabian Falck, Suproteem K. Sarkar, Subhrajit Roy, Stephanie L. Hyland

A Comprehensive Evaluation of Multi-task Learning and Multi-task Pre-training on EHR Time-series Data

Jul 20, 2020
Matthew B. A. McDermott, Bret Nestor, Evan Kim, Wancong Zhang, Anna Goldenberg, Peter Szolovits, Marzyeh Ghassemi

CheXpert++: Approximating the CheXpert Labeler for Speed, Differentiability, and Probabilistic Output

Jun 26, 2020
Matthew B. A. McDermott, Tzu Ming Harry Hsu, Wei-Hung Weng, Marzyeh Ghassemi, Peter Szolovits

ML4H Abstract Track 2019

Feb 05, 2020
Matthew B. A. McDermott, Emily Alsentzer, Sam Finlayson, Michael Oberst, Fabian Falck, Tristan Naumann, Brett K. Beaulieu-Jones, Adrian V. Dalca

Cross-Language Aphasia Detection using Optimal Transport Domain Adaptation

Dec 04, 2019
Aparna Balagopalan, Jekaterina Novikova, Matthew B. A. McDermott, Bret Nestor, Tristan Naumann, Marzyeh Ghassemi
