
Ryumei Nakada

Residual Feature Integration is Sufficient to Prevent Negative Transfer
May 17, 2025

A Theoretical Framework for Prompt Engineering: Approximating Smooth Functions with Transformer Prompts
Mar 26, 2025

S$^{2}$FT: Efficient, Scalable and Generalizable LLM Fine-tuning by Structured Sparsity
Dec 10, 2024

NEAT: Nonlinear Parameter-efficient Adaptation of Pre-trained Models
Oct 02, 2024

Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance
Jun 05, 2024

Contrastive Learning on Multimodal Analysis of Electronic Health Records
Mar 22, 2024

Safeguarding Data in Multimodal AI: A Differentially Private Approach to CLIP Training
Jun 13, 2023

Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data
Feb 23, 2023

The Power of Contrast for Feature Learning: A Theoretical Analysis
Oct 06, 2021

Asymptotic Risk of Overparameterized Likelihood Models: Double Descent Theory for Deep Neural Networks
Mar 15, 2021