
Ning Yu

HaVTR: Improving Video-Text Retrieval Through Augmentation Using Large Foundation Models

Apr 07, 2024

Reference-Based 3D-Aware Image Editing with Triplane

Apr 04, 2024

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

Mar 19, 2024

C-RAG: Certified Generation Risks for Retrieval-Augmented Language Models

Feb 12, 2024

Text2Data: Low-Resource Data Generation with Textual Control

Feb 08, 2024

Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models

Feb 05, 2024

Continual Adversarial Defense

Dec 15, 2023

X-InstructBLIP: A Framework for aligning X-Modal instruction-aware representations to LLMs and Emergent Cross-modal Reasoning

Nov 30, 2023

AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors

Nov 03, 2023

Generated Distributions Are All You Need for Membership Inference Attacks Against Generative Models

Oct 30, 2023