Harish Tayyar Madabushi

Pre-Trained Language Models Represent Some Geographic Populations Better Than Others

Mar 16, 2024
Jonathan Dunn, Benjamin Adams, Harish Tayyar Madabushi

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text

Mar 07, 2024
Frances A. Laureano De Leon, Harish Tayyar Madabushi, Mark Lee

Standardize: Aligning Language Models with Expert-Defined Standards for Content Generation

Feb 19, 2024
Joseph Marvin Imperial, Gail Forey, Harish Tayyar Madabushi

Word Boundary Information Isn't Useful for Encoder Language Models

Jan 15, 2024
Edward Gow-Smith, Dylan Phelps, Harish Tayyar Madabushi, Carolina Scarton, Aline Villavicencio

Flesch or Fumble? Evaluating Readability Standard Alignment of Instruction-Tuned Language Models

Sep 11, 2023
Joseph Marvin Imperial, Harish Tayyar Madabushi

Construction Grammar and Language Models

Sep 04, 2023
Harish Tayyar Madabushi, Laurence Romain, Petar Milin, Dagmar Divjak

Are Emergent Abilities in Large Language Models just In-Context Learning?

Sep 04, 2023
Sheng Lu, Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, Iryna Gurevych

Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5

Oct 31, 2022
Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, Aline Villavicencio, Iryna Gurevych

Abstraction not Memory: BERT and the English Article System

Jun 08, 2022
Harish Tayyar Madabushi, Dagmar Divjak, Petar Milin
