
Hang Yan

What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices

Sep 03, 2024

Farewell to Length Extrapolation, a Training-Free Infinite Context with Finite Attention Scope

Jul 21, 2024

Case2Code: Learning Inductive Reasoning with Synthetic Data

Jul 17, 2024

InternLM-XComposer-2.5: A Versatile Large Vision Language Model Supporting Long-Contextual Input and Output

Jul 03, 2024

Unified Active Retrieval for Retrieval Augmented Generation

Jun 18, 2024

AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data

May 29, 2024

How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites

Apr 29, 2024

FoundaBench: Evaluating Chinese Fundamental Knowledge Capabilities of Large Language Models

Apr 29, 2024

Length Generalization of Causal Transformers without Position Encoding

Apr 18, 2024

InternLM-XComposer2-4KHD: A Pioneering Large Vision-Language Model Handling Resolutions from 336 Pixels to 4K HD

Apr 09, 2024