
Jinfeng Rao

PinCLIP: Large-scale Foundational Multimodal Representation at Pinterest

Mar 03, 2026

Generative Engine Optimization: A VLM and Agent Framework for Pinterest Acquisition Growth

Feb 03, 2026

Improving Pinterest Search Relevance Using Large Language Models

Oct 22, 2024

DSI++: Updating Transformer Memory with New Documents

Dec 19, 2022

Transcending Scaling Laws with 0.1% Extra Compute

Oct 20, 2022

Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?

Jul 21, 2022

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

Nov 22, 2021

Improving Compositional Generalization with Self-Training for Data-to-Text Generation

Oct 16, 2021

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

Sep 22, 2021

Long Range Arena: A Benchmark for Efficient Transformers

Nov 08, 2020