
Pan Zhou

The Hubei Engineering Research Center on Big Data Security, School of Cyber Science and Engineering, Huazhong University of Science and Technology

Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector

Oct 30, 2024

Unsupervised Modality Adaptation with Text-to-Image Diffusion Models for Semantic Segmentation

Oct 29, 2024

Two are better than one: Context window extension with multi-grained self-injection

Oct 25, 2024

Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning

Oct 15, 2024

SubZero: Random Subspace Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning

Oct 11, 2024

Towards Natural Image Matting in the Wild via Real-Scenario Prior

Oct 09, 2024

The Impact of Large Language Models in Academia: from Writing to Speaking

Sep 20, 2024

LPT++: Efficient Training on Mixture of Long-tailed Experts

Sep 17, 2024

MoExtend: Tuning New Experts for Modality and Task Extension

Aug 07, 2024

Can Large Language Models Automatically Jailbreak GPT-4V?

Jul 23, 2024