
Rodrigo Nogueira

mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark

Sep 27, 2022

MonoByte: A Pool of Monolingual Byte-level Language Models

Sep 27, 2022

Induced Natural Language Rationales and Interleaved Markup Tokens Enable Extrapolation in Large Language Models

Aug 24, 2022

A Boring-yet-effective Approach for the Product Ranking Task of the Amazon KDD Cup 2022

Aug 09, 2022

No Parameter Left Behind: How Distillation and Model Size Affect Zero-Shot Retrieval

Jun 06, 2022

Billions of Parameters Are Worth More Than In-domain Training Data: A case study in the Legal Case Entailment Task

May 30, 2022

InPars: Data Augmentation for Information Retrieval using Large Language Models

Feb 10, 2022

To Tune or Not To Tune? Zero-shot Models for Legal Case Entailment

Feb 07, 2022

Sequence-to-Sequence Models for Extracting Information from Registration and Legal Documents

Jan 14, 2022

On the ability of monolingual models to learn language-agnostic representations

Sep 04, 2021