Takeshi Kojima

Topology of Reasoning: Understanding Large Reasoning Models through Reasoning Graph Properties

Jun 06, 2025

Inconsistent Tokenizations Cause Language Models to be Perplexed by Japanese Grammar

May 26, 2025

A Comprehensive Survey on Physical Risk Control in the Era of Foundation Model-enabled Robotics

May 19, 2025

Which Programming Language and What Features at Pre-training Stage Affect Downstream Logical Inference Performance?

Oct 09, 2024

Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

Oct 01, 2024

On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons

Apr 03, 2024

Unnatural Error Correction: GPT-4 Can Almost Perfectly Handle Unnatural Scrambled Text

Nov 30, 2023

Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment

Jun 28, 2022

Large Language Models are Zero-Shot Reasoners

May 24, 2022