
Jie Ou

AdapShot: Adaptive Many-Shot In-Context Learning with Semantic-Aware KV Cache Reuse

May 05, 2026

CAP: Controllable Alignment Prompting for Unlearning in LLMs

Apr 23, 2026

MTOS: A LLM-Driven Multi-topic Opinion Simulation Framework for Exploring Echo Chamber Dynamics

Oct 14, 2025

HASH-RAG: Bridging Deep Hashing with Retriever for Efficient, Fine Retrieval and Augmented Generation

May 22, 2025

Accelerating Adaptive Retrieval Augmented Generation via Instruction-Driven Representation Reduction of Retrieval Overlaps

May 19, 2025

Can LLM be a Good Path Planner based on Prompt Engineering? Mitigating the Hallucination for Path Planning

Aug 27, 2024

Compensate Quantization Errors+: Quantized Models Are Inquisitive Learners

Jul 22, 2024

Compensate Quantization Errors: Make Weights Hierarchical to Compensate Each Other

Jun 24, 2024

Bootstrap 3D Reconstructed Scenes from 3D Gaussian Splatting

Apr 29, 2024

Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding

Apr 10, 2024