Xiaofeng Zhang

From Redundancy to Relevance: Enhancing Explainability in Multimodal Large Language Models

Jun 04, 2024

Wavelet-Decoupling Contrastive Enhancement Network for Fine-Grained Skeleton-Based Action Recognition

Feb 03, 2024

Building Lane-Level Maps from Aerial Images

Dec 20, 2023

Enlighten-Your-Voice: When Multimodal Meets Zero-shot Low-light Image Enhancement

Dec 15, 2023

How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey

Dec 08, 2023

A Dual Attentive Generative Adversarial Network for Remote Sensing Image Change Detection

Add code
Oct 03, 2023
Viaarxiv icon

Efficient Remote Sensing Segmentation With Generative Adversarial Transformer

Oct 02, 2023

Causal-Story: Local Causal Attention Utilizing Parameter-Efficient Tuning For Visual Story Synthesis

Sep 21, 2023

Improving Depth Gradient Continuity in Transformers: A Comparative Study on Monocular Depth Estimation with CNN

Aug 16, 2023

An Effective Data Creation Pipeline to Generate High-quality Financial Instruction Data for Large Language Model

Jul 31, 2023