
Baobao Chang

PyramidKV: Dynamic KV Cache Compression based on Pyramidal Information Funneling

Jun 04, 2024

Mitigating Language-Level Performance Disparity in mPLMs via Teacher Language Selection and Cross-lingual Self-Distillation

Apr 12, 2024

An Image is Worth 1/2 Tokens After Layer 2: Plug-and-Play Inference Acceleration for Large Vision-Language Models

Mar 11, 2024

Improving Event Definition Following For Zero-Shot Event Detection

Mar 05, 2024

PCA-Bench: Evaluating Multimodal Large Language Models in Perception-Cognition-Action Chain

Feb 21, 2024

VeCAF: VLM-empowered Collaborative Active Finetuning with Training Objective Awareness

Jan 15, 2024

ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks

Nov 16, 2023

Distantly-Supervised Named Entity Recognition with Uncertainty-aware Teacher Learning and Student-student Collaborative Learning

Nov 14, 2023

Coarse-to-Fine Dual Encoders are Better Frame Identification Learners

Oct 20, 2023

Towards End-to-End Embodied Decision Making via Multi-modal Large Language Model: Explorations with GPT4-Vision and Beyond

Oct 16, 2023