Michael Hanna

Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms

Mar 26, 2024
Michael Hanna, Sandro Pezzelle, Yonatan Belinkov

Do Pre-Trained Language Models Detect and Understand Semantic Underspecification? Ask the DUST!

Feb 19, 2024
Frank Wildenburg, Michael Hanna, Sandro Pezzelle

When Language Models Fall in Love: Animacy Processing in Transformer Language Models

Oct 23, 2023
Michael Hanna, Yonatan Belinkov, Sandro Pezzelle

Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model

Oct 19, 2023
Abhijith Chintam, Rahel Beloch, Willem Zuidema, Michael Hanna, Oskar van der Wal

ChapGTP, ILLC's Attempt at Raising a BabyLM: Improving Data Efficiency by Automatic Task Formation

Oct 17, 2023
Jaap Jumelet, Michael Hanna, Marianne de Heer Kloots, Anna Langedijk, Charlotte Pouw, Oskar van der Wal

How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model

Apr 30, 2023
Michael Hanna, Ollie Liu, Alexandre Variengien
