Hosein Mohebbi

Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers

Oct 15, 2023
Hosein Mohebbi, Grzegorz Chrupała, Willem Zuidema, Afra Alishahi

DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers

Oct 05, 2023
Anna Langedijk, Hosein Mohebbi, Gabriele Sarti, Willem Zuidema, Jaap Jumelet

Quantifying Context Mixing in Transformers

Feb 08, 2023
Hosein Mohebbi, Willem Zuidema, Grzegorz Chrupała, Afra Alishahi

AdapLeR: Speeding up Inference by Adaptive Length Reduction

Mar 16, 2022
Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations

Sep 15, 2021
Mohsen Fayyaz, Ehsan Aghazadeh, Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar

Exploring the Role of BERT Token Representations to Explain Sentence Probing Results

Apr 03, 2021
Hosein Mohebbi, Ali Modarressi, Mohammad Taher Pilehvar
