
Wanxiang Che

In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks

Oct 02, 2024

Enabling Real-Time Conversations with Minimal Training Costs

Sep 18, 2024

What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices

Sep 03, 2024

DAC: Decomposed Automation Correction for Text-to-SQL

Aug 16, 2024

FLEXTAF: Enhancing Table Reasoning with Flexible Tabular Formats

Aug 16, 2024

Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling

Aug 16, 2024

Concise and Precise Context Compression for Tool-Using Language Models

Jul 02, 2024

CVLUE: A New Benchmark Dataset for Chinese Vision-Language Understanding Evaluation

Jul 01, 2024

Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training

Jun 25, 2024

Self-Constructed Context Decompilation with Fined-grained Alignment Enhancement

Jun 25, 2024