
Michael Hahn

Saarland University

One Size Fits None: Rethinking Fairness in Medical AI

Jun 17, 2025

Position: Pause Recycling LoRAs and Prioritize Mechanisms to Uncover Limits and Effectiveness

Jun 16, 2025

Born a Transformer -- Always a Transformer?

May 27, 2025

Language models can learn implicit multi-hop reasoning, but only if they have lots of training data

May 23, 2025

Contextualize-then-Aggregate: Circuits for In-Context Learning in Gemma-2 2B

Mar 31, 2025

Lower Bounds for Chain-of-Thought Reasoning in Hard-Attention Transformers

Feb 04, 2025

Emergent Stack Representations in Modeling Counter Languages Using Transformers

Feb 03, 2025

A Formal Framework for Understanding Length Generalization in Transformers

Oct 03, 2024

Separations in the Representational Capabilities of Transformers and Recurrent Architectures

Jun 13, 2024

The Expressive Capacity of State Space Models: A Formal Language Perspective

May 27, 2024