Ryan Cotterell

ETH Zurich

Controllable Context Sensitivity and the Knob Behind It

Nov 11, 2024

An $\mathbf{L^*}$ Algorithm for Deterministic Weighted Regular Languages

Nov 09, 2024

Surprise! Uniform Information Density Isn't the Whole Story: Predicting Surprisal Contours in Long-form Discourse

Oct 21, 2024

Efficiently Computing Susceptibility to Context in Language Models

Oct 18, 2024

Reverse-Engineering the Reader

Oct 16, 2024

Activation Scaling for Steering and Interpreting Language Models

Oct 07, 2024

On the Proper Treatment of Tokenization in Psycholinguistics

Oct 03, 2024

Can Transformers Learn $n$-gram Language Models?

Oct 03, 2024

Generalized Measures of Anticipation and Responsivity in Online Language Processing

Sep 16, 2024

On the Role of Context in Reading Time Prediction

Sep 12, 2024