Jie Ren

MLDT: Multi-Level Decomposition for Complex Long-Horizon Robotic Task Planning with Open-Source Large Language Model

Apr 02, 2024

Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention

Mar 17, 2024

The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation

Feb 23, 2024

Identifying Semantic Induction Heads to Understand In-Context Learning

Feb 20, 2024

A novel spatial-frequency domain network for zero-shot incremental learning

Feb 11, 2024

Copyright Protection in Generative AI: A Technical Perspective

Feb 04, 2024

Superiority of Multi-Head Attention in In-Context Linear Regression

Jan 30, 2024

Self-Evaluation Improves Selective Generation in Large Language Models

Dec 14, 2023

Universal Self-Consistency for Large Language Model Generation

Nov 29, 2023

Towards End-to-end 4-Bit Inference on Generative Large Language Models

Oct 13, 2023