Yiren Zhao

$Δ$-DiT: A Training-Free Acceleration Method Tailored for Diffusion Transformers

Jun 03, 2024

Locking Machine Learning Models into Hardware

May 31, 2024

Enhancing Real-World Complex Network Representations with Hyperedge Augmentation

Feb 20, 2024

Architectural Neural Backdoors from First Principles

Feb 10, 2024

DiscDiff: Latent Diffusion Model for DNA Sequence Generation

Feb 08, 2024

LQER: Low-Rank Quantization Error Reconstruction for LLMs

Feb 04, 2024

Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?

Oct 21, 2023

Latent Diffusion Model for DNA Sequence Generation

Oct 09, 2023

LLM4DV: Using Large Language Models for Hardware Test Stimuli Generation

Oct 06, 2023

MiliPoint: A Point Cloud Dataset for mmWave Radar

Sep 23, 2023