Jian Lou

MsFormer: Enabling Robust Predictive Maintenance Services for Industrial Devices

Mar 24, 2026

Time Series Reasoning via Process-Verifiable Thinking Data Synthesis and Scheduling for Tailored LLM Reasoning

Feb 08, 2026

Understanding and Preserving Safety in Fine-Tuned LLMs

Jan 15, 2026

Safety at One Shot: Patching Fine-Tuned LLMs with A Single Instance

Jan 06, 2026

Lightweight Time Series Data Valuation on Time Series Foundation Models via In-Context Finetuning

Nov 10, 2025

Module-Aware Parameter-Efficient Machine Unlearning on Transformers

Aug 24, 2025

Safeguarding Multimodal Knowledge Copyright in the RAG-as-a-Service Environment

Jun 10, 2025

SHAPE : Self-Improved Visual Preference Alignment by Iteratively Generating Holistic Winner

Mar 06, 2025

SecPE: Secure Prompt Ensembling for Private and Robust Large Language Models

Feb 02, 2025

Activation Approximations Can Incur Safety Vulnerabilities Even in Aligned LLMs: Comprehensive Analysis and Defense

Feb 02, 2025