
Jiing-Ping Wang

Andy

LATTE: Low-Precision Approximate Attention with Head-wise Trainable Threshold for Efficient Transformer

Apr 11, 2024