Sifan Zhou

VisNec: Measuring and Leveraging Visual Necessity for Multimodal Instruction Tuning

Mar 01, 2026

FocusTrack: One-Stage Focus-and-Suppress Framework for 3D Point Cloud Object Tracking

Feb 27, 2026

CompTrack: Information Bottleneck-Guided Low-Rank Dynamic Token Compression for Point Cloud Tracking

Nov 19, 2025

FQ-PETR: Fully Quantized Position Embedding Transformation for Multi-View 3D Object Detection

Nov 14, 2025

RWKVQuant: Quantizing the RWKV Family with Proxy Guided Hybrid of Scalar and Vector Quantization

May 02, 2025

MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance

May 02, 2025

GSQ-Tuning: Group-Shared Exponents Integer in Fully Quantized Training for LLMs On-Device Fine-tuning

Feb 18, 2025

GSRender: Deduplicated Occupancy Prediction via Weakly Supervised 3D Gaussian Splatting

Dec 19, 2024

MVCTrack: Boosting 3D Point Cloud Tracking via Multimodal-Guided Virtual Cues

Dec 03, 2024

PTQ4RIS: Post-Training Quantization for Referring Image Segmentation

Sep 25, 2024