Peijie Jiang

Optimal Expert-Attention Allocation in Mixture-of-Experts: A Scalable Law for Dynamic Model Design

Mar 11, 2026

VLA-Mark: A cross modal watermark for large vision-language alignment model

Jul 18, 2025

Decoding Knowledge Attribution in Mixture-of-Experts: A Framework of Basic-Refinement Collaboration and Efficiency Analysis

May 30, 2025

SAFEERASER: Enhancing Safety in Multimodal Large Language Models through Multimodal Machine Unlearning

Feb 18, 2025

Unsupervised Boundary-Aware Language Model Pretraining for Chinese Sequence Labeling

Oct 27, 2022