Jerry Yao-Chieh Hu

Minimalist Softmax Attention Provably Learns Constrained Boolean Functions (May 26, 2025)

Latent Variable Estimation in Bayesian Black-Litterman Models (May 04, 2025)

Fast and Low-Cost Genomic Foundation Models via Outlier Removal (May 01, 2025)

Attention Mechanism, Max-Affine Partition, and Universal Approximation (Apr 28, 2025)

Universal Approximation with Softmax Attention (Apr 22, 2025)

NdLinear Is All You Need for Representation Learning (Mar 21, 2025)

AlignAb: Pareto-Optimal Energy Alignment for Designing Nature-Like Antibodies (Dec 30, 2024)

On Statistical Rates of Conditional Diffusion Transformers: Approximation, Estimation and Minimax Optimality (Nov 26, 2024)

Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency (Nov 25, 2024)

Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training (Nov 25, 2024)