Fei Tan

What Makes Good Few-shot Examples for Vision-Language Models?

May 22, 2024

Balancing Speciality and Versatility: a Coarse to Fine Framework for Supervised Fine-tuning Large Language Model

Apr 16, 2024

Consistency Matters: Explore LLMs Consistency From a Black-Box Perspective

Mar 02, 2024

What Large Language Models Bring to Text-rich VQA?

Nov 13, 2023

Deeply Coupled Cross-Modal Prompt Learning

May 30, 2023

High-fidelity Direct Contrast Synthesis from Magnetic Resonance Fingerprinting

Dec 21, 2022

PUnifiedNER: a Prompting-based Unified NER System for Diverse Datasets

Nov 27, 2022

SDA: Simple Discrete Augmentation for Contrastive Sentence Representation Learning

Oct 08, 2022

What Makes Pre-trained Language Models Better Zero/Few-shot Learners?

Sep 30, 2022

MetaCon: Unified Predictive Segments System with Trillion Concept Meta-Learning

Mar 09, 2022