
Dacheng Tao and Other Contributors

CogMorph: Cognitive Morphing Attacks for Text-to-Image Models

Jan 21, 2025

Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging

Jan 16, 2025

Towards Robust and Realistic Human Pose Estimation via WiFi Signals

Jan 16, 2025

Modeling All Response Surfaces in One for Conditional Search Spaces

Jan 08, 2025

Free-Form Motion Control: A Synthetic Video Generation Dataset with Controllable Camera and Object Motions

Jan 03, 2025

Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search

Dec 24, 2024

Red Pill and Blue Pill: Controllable Website Fingerprinting Defense via Dynamic Backdoor Learning

Dec 16, 2024

AsymRnR: Video Diffusion Transformers Acceleration with Asymmetric Reduction and Restoration

Dec 16, 2024

EMOv2: Pushing 5M Vision Model Frontier

Dec 09, 2024

Unlocking Tuning-Free Few-Shot Adaptability in Visual Foundation Models by Recycling Pre-Tuned LoRAs

Dec 03, 2024