Jingang Wang

What's Wrong with Your Code Generated by Large Language Models? An Extensive Study

Jul 08, 2024

Rethinking LLM-based Preference Evaluation

Jul 01, 2024

EAVE: Efficient Product Attribute Value Extraction via Lightweight Sparse-layer Interaction

Jun 10, 2024

Speculative Decoding via Early-exiting for Faster LLM Inference with Thompson Sampling Control Mechanism

Jun 06, 2024

Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration

Apr 18, 2024

What Makes Quantization for Large Language Models Hard? An Empirical Study from the Lens of Perturbation

Mar 11, 2024

Beyond the Known: Investigating LLMs Performance on Out-of-Domain Intent Detection

Mar 04, 2024

C-ICL: Contrastive In-context Learning for Information Extraction

Feb 17, 2024

DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning

Feb 14, 2024

Improving Input-label Mapping with Demonstration Replay for In-context Learning

Oct 30, 2023