
Robin Jia

When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models

Jun 19, 2024

Pre-trained Large Language Models Use Fourier Features to Compute Addition

Jun 05, 2024

Language Models can Infer Action Semantics for Classical Planners from Environment Feedback

Jun 04, 2024

IsoBench: Benchmarking Multimodal Foundation Models on Isomorphic Representations

Apr 02, 2024

Proving membership in LLM pretraining data via data watermarks

Feb 16, 2024

Does VLN Pretraining Work with Nonsensical or Irrelevant Instructions?

Dec 02, 2023

Efficient End-to-End Visual Document Understanding with Rationale Distillation

Nov 16, 2023

Do Localization Methods Actually Localize Memorized Data in LLMs?

Nov 15, 2023

Transformers Learn Higher-Order Optimization Methods for In-Context Learning: A Study with Linear Models

Oct 26, 2023

Estimating Large Language Model Capabilities without Labeled Test Data

May 24, 2023