Chaojun Xiao

The Elephant in the Room: Rethinking the Usage of Pre-trained Language Model in Sequential Recommendation
Apr 12, 2024

Robust and Scalable Model Editing for Large Language Models
Mar 26, 2024

Ouroboros: Speculative Decoding with Large Model Enhanced Drafting
Feb 21, 2024

InfLLM: Unveiling the Intrinsic Capacity of LLMs for Understanding Extremely Long Sequences with Training-Free Memory
Feb 07, 2024

ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse LLMs
Feb 06, 2024

MUSER: A Multi-View Similar Case Retrieval Dataset
Oct 24, 2023

Variator: Accelerating Pre-trained Models with Plug-and-Play Compression Modules
Oct 24, 2023

Thoroughly Modeling Multi-domain Pre-trained Recommendation as Language
Oct 20, 2023

Plug-and-Play Knowledge Injection for Pre-trained Language Models
May 28, 2023

Emergent Modularity in Pre-trained Transformers
May 28, 2023