
Guanhua Chen

Automatic Robustness Stress Testing of LLMs as Mathematical Problem Solvers

Jun 05, 2025

TAG-INSTRUCT: Controlled Instruction Complexity Enhancement through Structure-based Augmentation

May 24, 2025

PlanGPT-VL: Enhancing Urban Planning with Domain-Specific Vision-Language Models

May 21, 2025

Code2Logic: Game-Code-Driven Data Synthesis for Enhancing VLMs General Reasoning

May 20, 2025

ImPart: Importance-Aware Delta-Sparsification for Improved Model Compression and Merging in LLMs

Apr 17, 2025

Not All LoRA Parameters Are Essential: Insights on Inference Necessity

Mar 30, 2025

Towards Lightweight, Adaptive and Attribute-Aware Multi-Aspect Controllable Text Generation with Large Language Models

Feb 19, 2025

LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy

Feb 17, 2025

Understanding Particles From Video: Property Estimation of Granular Materials via Visuo-Haptic Learning

Dec 03, 2024

Compound-QA: A Benchmark for Evaluating LLMs on Compound Questions

Nov 15, 2024