Willem Zuidema

Transformer See, Transformer Do: Copying as an Intermediate Step in Learning Analogical Reasoning

Apr 07, 2026

In-Context Learning in Speech Language Models: Analyzing the Role of Acoustic Features, Linguistic Structure, and Induction Heads

Apr 07, 2026

Tracking the emergence of linguistic structure in self-supervised models learning from speech

Apr 02, 2026

Linguists should learn to love speech-based deep learning models

Dec 16, 2025

The Curious Case of Visual Grounding: Different Effects for Speech- and Text-based Language Encoders

Sep 19, 2025

Propositional Logic for Probing Generalization in Neural Networks

Jun 10, 2025

A Linguistically Motivated Analysis of Intonational Phrasing in Text-to-Speech Systems: Revealing Gaps in Syntactic Sensitivity

May 28, 2025

PolyPythias: Stability and Outliers across Fifty Language Model Pre-Training Runs

Mar 12, 2025

Enforcing Interpretability in Time Series Transformers: A Concept Bottleneck Framework

Oct 08, 2024

Disentangling Textual and Acoustic Features of Neural Speech Representations

Oct 03, 2024