Doug Downey

Simplified Data Wrangling with ir_datasets

Mar 03, 2021
Sean MacAvaney, Andrew Yates, Sergey Feldman, Doug Downey, Arman Cohan, Nazli Goharian

ABNIRML: Analyzing the Behavior of Neural IR Models

Nov 02, 2020
Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, Arman Cohan

High-Precision Extraction of Emerging Concepts from Scientific Literature

Jun 11, 2020
Daniel King, Doug Downey, Daniel S. Weld

SPECTER: Document-level Representation Learning using Citation-informed Transformers

May 20, 2020
Arman Cohan, Sergey Feldman, Iz Beltagy, Doug Downey, Daniel S. Weld

Don't Stop Pretraining: Adapt Language Models to Domains and Tasks

May 05, 2020
Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, Noah A. Smith

Stolen Probability: A Structural Weakness of Neural Language Models

May 05, 2020
David Demeter, Gregory Kimmel, Doug Downey

G-DAUG: Generative Data Augmentation for Commonsense Reasoning

Apr 24, 2020
Yiben Yang, Chaitanya Malaviya, Jared Fernandez, Swabha Swayamdipta, Ronan Le Bras, Ji-Ping Wang, Chandra Bhagavatula, Yejin Choi, Doug Downey
