Andreas Rücklé

Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research

Jun 29, 2023
Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models

Apr 28, 2021
Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych

Learning to Reason for Text Generation from Scientific Tables

Apr 16, 2021
Nafise Sadat Moosavi, Andreas Rücklé, Dan Roth, Iryna Gurevych

What to Pre-Train on? Efficient Intermediate Task Selection

Apr 16, 2021
Clifton Poth, Jonas Pfeiffer, Andreas Rücklé, Iryna Gurevych

TWEAC: Transformer with Extendable QA Agent Classifiers

Apr 14, 2021
Gregor Geigle, Nils Reimers, Andreas Rücklé, Iryna Gurevych

AdapterDrop: On the Efficiency of Adapters in Transformers

Oct 22, 2020
Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

Improving QA Generalization by Concurrent Modeling of Multiple Biases

Oct 07, 2020
Mingzhu Wu, Nafise Sadat Moosavi, Andreas Rücklé, Iryna Gurevych

MultiCQA: Zero-Shot Transfer of Self-Supervised Text Matching Models on a Massive Scale

Oct 02, 2020
Andreas Rücklé, Jonas Pfeiffer, Iryna Gurevych

AdapterHub: A Framework for Adapting Transformers

Jul 15, 2020
Jonas Pfeiffer, Andreas Rücklé, Clifton Poth, Aishwarya Kamath, Ivan Vulić, Sebastian Ruder, Kyunghyun Cho, Iryna Gurevych
