
Xing Hu

QiMeng-SALV: Signal-Aware Learning for Verilog Code Generation

Oct 22, 2025

SecureAgentBench: Benchmarking Secure Code Generation under Realistic Vulnerability Scenarios

Sep 26, 2025

Reasoning Efficiently Through Adaptive Chain-of-Thought Compression: A Self-Optimizing Framework

Sep 17, 2025

Domain Adaptation in Agricultural Image Analysis: A Comprehensive Review from Shallow Models to Deep Learning

Jun 06, 2025

QiMeng: Fully Automated Hardware and Software Design for Processor Chip

Jun 05, 2025

CodeV-R1: Reasoning-Enhanced Verilog Generation

May 30, 2025

CODE-DITING: A Reasoning-Based Metric for Functional Alignment in Code Evaluation

May 26, 2025

Diffusion Model in Hyperspectral Image Processing and Analysis: A Review

May 16, 2025

RWKVQuant: Quantizing the RWKV Family with Proxy Guided Hybrid of Scalar and Vector Quantization

May 02, 2025

MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance

May 02, 2025