
Yingyan Celine

MG-Verilog: Multi-grained Dataset Towards Enhanced LLM-assisted Verilog Generation

Jul 02, 2024

When Linear Attention Meets Autoregressive Decoding: Towards More Effective and Efficient Linearized Large Language Models

Jun 11, 2024

ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization

Jun 11, 2024

MixRT: Mixed Neural Representations For Real-Time NeRF Rendering

Dec 20, 2023

NetDistiller: Empowering Tiny Deep Learning via In-Situ Distillation

Oct 24, 2023

A Survey on Graph Neural Network Acceleration: Algorithms, Systems, and Customized Hardware

Jun 24, 2023

ShiftAddViT: Mixture of Multiplication Primitives Towards Efficient Vision Transformer

Jun 10, 2023