Issei Sato

The University of Tokyo

Can Test-time Computation Mitigate Memorization Bias in Neural Symbolic Regression?

May 28, 2025

To CoT or To Loop? A Formal Comparison Between Chain-of-Thought and Looped Transformers

May 25, 2025

FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing

Feb 06, 2025

Understanding Generalization in Physics Informed Models through Affine Variety Dimensions

Jan 31, 2025

Understanding Knowledge Hijack Mechanism in In-context Learning through Associative Memory

Dec 16, 2024

Theoretical Analysis of Hierarchical Language Recognition and Generation by Transformers without Positional Encoding

Oct 16, 2024

On Expressive Power of Looped Transformers: Theoretical Analysis and Enhancement via Timestep Encoding

Oct 02, 2024

Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism

Sep 26, 2024

Optimal Memorization Capacity of Transformers

Sep 26, 2024

Multiplicative Logit Adjustment Approximates Neural-Collapse-Aware Decision Boundary Adjustment

Sep 26, 2024