Zhili Liu

Mixture of insighTful Experts: The Synergy of Thought Chains and Expert Mixtures in Self-Alignment

May 01, 2024

Eyes Closed, Safety On: Protecting Multimodal LLMs via Image-to-Text Transformation

Mar 22, 2024

MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric

Mar 12, 2024

Task-customized Masked AutoEncoder via Mixture of Cluster-conditional Experts

Feb 08, 2024

PROXYQA: An Alternative Framework for Evaluating Long-Form Text Generation with Large Language Models

Jan 26, 2024

Mixture of Cluster-conditional LoRA Experts for Vision-language Instruction Tuning

Dec 19, 2023

TrackDiffusion: Multi-object Tracking Data Generation via Diffusion Models

Dec 01, 2023

Geom-Erasing: Geometry-Driven Removal of Implicit Concept in Diffusion Models

Oct 13, 2023

DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning

May 04, 2023

Mixed Autoencoder for Self-supervised Visual Representation Learning

Mar 30, 2023