
Hai Li


XEmoRAG: Cross-Lingual Emotion Transfer with Controllable Intensity Using Retrieval-Augmented Generation

Aug 12, 2025

SADA: Stability-guided Adaptive Diffusion Acceleration

Jul 23, 2025

FLAT-LLM: Fine-grained Low-rank Activation Space Transformation for Large Language Model Compression

May 29, 2025

Weakly Supervised Data Refinement and Flexible Sequence Compression for Efficient Thai LLM-based ASR

May 28, 2025

DeepOHeat-v1: Efficient Operator Learning for Fast and Trustworthy Thermal Simulation and Optimization in 3D-IC Design

Apr 04, 2025

Keyframe-oriented Vision Token Pruning: Enhancing Efficiency of Large Vision Language Models on Long-Form Video Processing

Mar 13, 2025

NeuraLoc: Visual Localization in Neural Implicit Map with Dual Complementary Features

Mar 08, 2025

Proactive Privacy Amnesia for Large Language Models: Safeguarding PII with Negligible Impact on Model Utility

Feb 24, 2025

H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to Jailbreak Large Reasoning Models, Including OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Thinking

Feb 18, 2025

Hamming Attention Distillation: Binarizing Keys and Queries for Efficient Long-Context Transformers

Feb 03, 2025