
Xu Shen

BlindGuard: Safeguarding LLM-based Multi-Agent Systems under Unknown Attacks

Aug 11, 2025

Understanding the Information Propagation Effects of Communication Topologies in LLM-based Multi-Agent Systems

May 29, 2025

SpecOffload: Unlocking Latent GPU Capacity for LLM Inference on Resource-Constrained Devices

May 15, 2025

Harnessing LLMs Explanations to Boost Surrogate Models in Tabular Data Classification

May 09, 2025

Latte: Transfering LLMs' Latent-level Knowledge for Few-shot Tabular Learning

May 08, 2025

A Comprehensive Survey of Synthetic Tabular Data Generation

Apr 23, 2025

Leveraging Submodule Linearity Enhances Task Arithmetic Performance in LLMs

Apr 15, 2025

Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space

Jan 26, 2025

Enhancing Multiple Dimensions of Trustworthiness in LLMs via Sparse Activation Control

Nov 04, 2024

From Yes-Men to Truth-Tellers: Addressing Sycophancy in Large Language Models with Pinpoint Tuning

Sep 03, 2024