Tristan Naumann

Microsoft Research

Fine-Tuning Large Neural Language Models for Biomedical Natural Language Processing

Dec 15, 2021
Robert Tinn, Hao Cheng, Yu Gu, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon

Modular Self-Supervision for Document-Level Relation Extraction

Sep 11, 2021
Sheng Zhang, Cliff Wong, Naoto Usuyama, Sarthak Jain, Tristan Naumann, Hoifung Poon

Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature

Jun 25, 2021
Yu Wang, Jinchao Li, Tristan Naumann, Chenyan Xiong, Hao Cheng, Robert Tinn, Cliff Wong, Naoto Usuyama, Richard Rogahn, Zhihong Shen, Yang Qin, Eric Horvitz, Paul N. Bennett, Jianfeng Gao, Hoifung Poon

Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing

Aug 20, 2020
Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, Hoifung Poon

ML4H Abstract Track 2019

Feb 05, 2020
Matthew B. A. McDermott, Emily Alsentzer, Sam Finlayson, Michael Oberst, Fabian Falck, Tristan Naumann, Brett K. Beaulieu-Jones, Adrian V. Dalca

Cross-Language Aphasia Detection using Optimal Transport Domain Adaptation

Dec 04, 2019
Aparna Balagopalan, Jekaterina Novikova, Matthew B. A. McDermott, Bret Nestor, Tristan Naumann, Marzyeh Ghassemi

Feature Robustness in Non-stationary Health Records: Caveats to Deployable Model Performance in Common Clinical Machine Learning Tasks

Aug 02, 2019
Bret Nestor, Matthew B. A. McDermott, Willie Boag, Gabriela Berner, Tristan Naumann, Michael C. Hughes, Anna Goldenberg, Marzyeh Ghassemi

MIMIC-Extract: A Data Extraction, Preprocessing, and Representation Pipeline for MIMIC-III

Jul 19, 2019
Shirly Wang, Matthew B. A. McDermott, Geeticka Chauhan, Michael C. Hughes, Tristan Naumann, Marzyeh Ghassemi
