Yi Lu

LongAgent: Scaling Language Models to 128k Context through Multi-Agent Collaboration

Feb 18, 2024
Jun Zhao, Can Zu, Hao Xu, Yi Lu, Wei He, Yiwen Ding, Tao Gui, Qi Zhang, Xuanjing Huang

LongHeads: Multi-Head Attention is Secretly a Long Context Processor

Feb 16, 2024
Yi Lu, Xin Zhou, Wei He, Jun Zhao, Tao Ji, Tao Gui, Qi Zhang, Xuanjing Huang

LKCA: Large Kernel Convolutional Attention

Jan 11, 2024
Chenghao Li, Boheng Zeng, Yi Lu, Pengbo Shi, Qingzi Chen, Jirui Liu, Lingyun Zhu

Making Harmful Behaviors Unlearnable for Large Language Models

Nov 02, 2023
Xin Zhou, Yi Lu, Ruotian Ma, Tao Gui, Qi Zhang, Xuanjing Huang

Can 5G NR Sidelink communications support wireless augmented reality?

Oct 03, 2023
Ashutosh Srivastava, Qing Zhao, Yi Lu, Ping Wang, Qi Qu, Zhu Ji, Yee Sin Chan, Shivendra S. Panwar

Improved Knowledge Distillation for Pre-trained Language Models via Knowledge Selection

Feb 01, 2023
Chenglong Wang, Yi Lu, Yongyu Mu, Yimin Hu, Tong Xiao, Jingbo Zhu

Joint RIS Calibration and Multi-User Positioning

Dec 08, 2022
Yi Lu, Hui Chen, Jukka Talvitie, Henk Wymeersch, Mikko Valkama

Contextualized Generative Retrieval

Oct 07, 2022
Hyunji Lee, Jaeyoung Kim, Hoyeon Chang, Hanseok Oh, Sohee Yang, Vlad Karpukhin, Yi Lu, Minjoon Seo

DePS: An improved deep learning model for de novo peptide sequencing

Mar 16, 2022
Cheng Ge, Yi Lu, Jia Qu, Liangxu Xie, Feng Wang, Hong Zhang, Ren Kong, Shan Chang
