
Atsuki Yamaguchi

An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Generative LLM Inference

Feb 16, 2024
Atsuki Yamaguchi, Aline Villavicencio, Nikolaos Aletras

appjsonify: An Academic Paper PDF-to-JSON Conversion Toolkit

Oct 03, 2023
Atsuki Yamaguchi, Terufumi Morishita

Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic

Aug 11, 2023
Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa

How do different tokenizers perform on downstream tasks in scriptio continua languages?: A case study in Japanese

Jun 16, 2023
Takuro Fujii, Koki Shibata, Atsuki Yamaguchi, Terufumi Morishita, Yasuhiro Sogawa

How does the task complexity of masked pretraining objectives affect downstream performance?

May 18, 2023
Atsuki Yamaguchi, Hiroaki Ozaki, Terufumi Morishita, Gaku Morio, Yasuhiro Sogawa

Team Hitachi at SemEval-2023 Task 3: Exploring Cross-lingual Multi-task Strategies for Genre and Framing Detection in Online News

Mar 03, 2023
Yuta Koreeda, Ken-ichi Yokote, Hiroaki Ozaki, Atsuki Yamaguchi, Masaya Tsunokake, Yasuhiro Sogawa

Team Hitachi @ AutoMin 2021: Reference-free Automatic Minuting Pipeline with Argument Structure Construction over Topic-based Summarization

Dec 06, 2021
Atsuki Yamaguchi, Gaku Morio, Hiroaki Ozaki, Ken-ichi Yokote, Kenji Nagamatsu

Frustratingly Simple Pretraining Alternatives to Masked Language Modeling

Sep 04, 2021
Atsuki Yamaguchi, George Chrysostomou, Katerina Margatina, Nikolaos Aletras
