Alessandro Sordoni

Multi-Head Adapter Routing for Data-Efficient Fine-Tuning

Nov 07, 2022

Expressiveness and Learnability: A Unifying View for Evaluating Self-Supervised Learning

Jun 02, 2022

Evaluating Distributional Distortion in Neural Language Modeling

Mar 24, 2022

Better Language Model with Hypernym Class Prediction

Mar 21, 2022

Combining Modular Skills in Multitask Learning

Mar 01, 2022

Does Pre-training Induce Systematic Inference? How Masked Language Models Acquire Commonsense Knowledge

Dec 16, 2021

Self-training with Few-shot Rationalization: Teacher Explanations Aid Student in Few-shot NLU

Sep 17, 2021

The Emergence of the Shape Bias Results from Communicative Efficiency

Sep 15, 2021

Decomposed Mutual Information Estimation for Contrastive Representation Learning

Jun 25, 2021

Understanding by Understanding Not: Modeling Negation in Language Models

May 07, 2021