Yanyue Xie

HybridFlow: Infusing Continuity into Masked Codebook for Extreme Low-Bitrate Image Compression
Apr 20, 2024

SupeRBNN: Randomized Binary Neural Network Using Adiabatic Superconductor Josephson Devices
Sep 21, 2023

Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision Transformer Training
Nov 19, 2022

HeatViT: Hardware-Efficient Adaptive Token Pruning for Vision Transformers
Nov 15, 2022

Auto-ViT-Acc: An FPGA-Aware Automatic Acceleration Framework for Vision Transformer with Mixed-Scheme Quantization
Aug 10, 2022