Yohei Oseki

Do LLMs Need to Think in One Language? Correlation between Latent Language and Task Performance

May 27, 2025

Rethinking the Relationship between the Power Law and Hierarchical Structures

May 08, 2025

How LLMs Learn: Tracing Internal Representations with Sparse Autoencoders

Mar 09, 2025

Can Language Models Learn Typologically Implausible Languages?

Feb 17, 2025

If Attention Serves as a Cognitive Model of Human Memory Retrieval, What is the Plausible Memory Representation?

Feb 17, 2025

Developmentally-plausible Working Memory Shapes a Critical Period for Language Acquisition

Feb 07, 2025

Large Language Models Are Human-Like Internally

Feb 03, 2025

BabyLM Challenge: Exploring the Effect of Variation Sets on Language Model Training Efficiency

Nov 14, 2024

Is Structure Dependence Shaped for Efficient Communication?: A Case Study on Coordination

Oct 14, 2024

Can Language Models Induce Grammatical Knowledge from Indirect Evidence?

Oct 08, 2024