Yang Sui

BitsFusion: 1.99 bits Weight Quantization of Diffusion Model

Jun 06, 2024

Combining Experimental and Historical Data for Policy Evaluation

Jun 01, 2024

DisDet: Exploring Detectability of Backdoor Attack on Diffusion Models

Feb 05, 2024

ELRT: Efficient Low-Rank Training for Compact Convolutional Neural Networks

Jan 18, 2024

Transferable Learned Image Compression-Resistant Adversarial Perturbations

Jan 06, 2024

In-Sensor Radio Frequency Computing for Energy-Efficient Intelligent Radar

Add code
Dec 16, 2023
Viaarxiv icon

Corner-to-Center Long-range Context Model for Efficient Learned Image Compression

Nov 29, 2023

Reconstruction Distortion of Learned Image Compression with Imperceptible Perturbations

Jun 01, 2023

HALOC: Hardware-Aware Automatic Low-Rank Compression for Compact Neural Networks

Jan 20, 2023

Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition

Dec 05, 2022