
Ryan Cotterell

ETH Zurich

Efficient Sampling of Dependency Structures

Sep 14, 2021

A Bayesian Framework for Information-Theoretic Probing

Sep 08, 2021

Differentiable Subset Pruning of Transformer Heads

Aug 22, 2021

Towards Zero-shot Language Modeling

Aug 06, 2021

Determinantal Beam Search

Jun 21, 2021

A Cognitive Regularizer for Language Modeling

Jun 10, 2021

Is Sparse Attention more Interpretable?

Jun 08, 2021

SIGTYP 2021 Shared Task: Robust Spoken Language Identification

Jun 07, 2021

Do Syntactic Probes Probe Syntax? Experiments with Jabberwocky Probing

Jun 04, 2021

Modeling the Unigram Distribution

Jun 04, 2021