Harish Tayyar Madabushi

Neither Stochastic Parroting nor AGI: LLMs Solve Tasks through Context-Directed Extrapolation from Training Data Priors
May 29, 2025

Illusion or Algorithm? Investigating Memorization, Emergence, and Symbolic Processing in In-Context Learning
May 16, 2025

Adapting Whisper for Regional Dialects: Enhancing Public Services for Vulnerable Populations in the United Kingdom
Jan 15, 2025

The Inherent Limits of Pretrained LLMs: The Unexpected Convergence of Instruction Tuning and In-Context Learning Capabilities
Jan 15, 2025

Assessing Language Comprehension in Large Language Models Using Construction Grammar
Jan 08, 2025

SpeciaLex: A Benchmark for In-Context Specialized Lexicon Learning
Jul 18, 2024

Fine-Tuning with Divergent Chains of Thought Boosts Reasoning Through Self-Correction in Language Models
Jul 03, 2024

FS-RAG: A Frame Semantics Based Approach for Improved Factual Accuracy in Large Language Models
Jun 23, 2024

Pre-Trained Language Models Represent Some Geographic Populations Better Than Others
Mar 16, 2024

Code-Mixed Probes Show How Pre-Trained Models Generalise On Code-Switched Text
Mar 07, 2024