Yohei Oseki

Tree-Planted Transformers: Large Language Models with Implicit Syntactic Supervision

Feb 20, 2024
Ryo Yoshida, Taiga Someya, Yohei Oseki

Emergent Word Order Universals from Cognitively-Motivated Language Models

Feb 19, 2024
Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin

Psychometric Predictive Power of Large Language Models

Nov 13, 2023
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin

JCoLA: Japanese Corpus of Linguistic Acceptability

Sep 22, 2023
Taiga Someya, Yushi Sugimoto, Yohei Oseki

Composition, Attention, or Both?

Oct 24, 2022
Ryo Yoshida, Yohei Oseki

Context Limitations Make Neural Language Models More Human-Like

May 23, 2022
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui

Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars

Sep 10, 2021
Ryo Yoshida, Hiroshi Noji, Yohei Oseki

Lower Perplexity is Not Always Human-Like

Jun 02, 2021
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui

Effective Batching for Recurrent Neural Network Grammars

May 31, 2021
Hiroshi Noji, Yohei Oseki
