
Chi-Ying Tsui

A 28nm 0.22 μJ/token memory-compute-intensity-aware CNN-Transformer accelerator with hybrid-attention-based layer-fusion and cascaded pruning for semantic segmentation

Dec 19, 2025

FedLAM: Low-latency Wireless Federated Learning via Layer-wise Adaptive Modulation

Oct 09, 2025

Partial Knowledge Distillation for Alleviating the Inherent Inter-Class Discrepancy in Federated Learning

Nov 23, 2024

FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization

Jun 26, 2024

How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels

Oct 25, 2023

Step-GRAND: A Low Latency Universal Soft-input Decoder

Jul 27, 2023

A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications

Jul 19, 2023

FedDQ: Communication-Efficient Federated Learning with Descending Quantization

Oct 13, 2021

Microshift: An Efficient Image Compression Algorithm for Hardware

Apr 20, 2021

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation

Apr 03, 2021