
Jingyao Li

MR-BEN: A Comprehensive Meta-Reasoning Benchmark for Large Language Models
Jun 20, 2024

QuickLLaMA: Query-aware Inference Acceleration for Large Language Models
Jun 11, 2024

RoboCoder: Robotic Learning from Basic Skills to General Tasks with Large Language Models
Jun 06, 2024

CAPE: Context-Adaptive Positional Encoding for Length Extrapolation
May 23, 2024

VLPose: Bridging the Domain Gap in Pose Estimation with Language-Vision Tuning
Feb 22, 2024

MoTCoder: Elevating Large Language Models with Modular of Thought for Challenging Programming Tasks
Jan 05, 2024

MOODv2: Masked Image Modeling for Out-of-Distribution Detection
Jan 05, 2024

BAL: Balancing Diversity and Novelty for Active Learning
Dec 26, 2023

TagCLIP: Improving Discrimination Ability of Open-Vocabulary Semantic Segmentation
Apr 15, 2023

Rethinking Out-of-distribution (OOD) Detection: Masked Image Modeling is All You Need
Feb 06, 2023