
Deming Chen

Subgraph Extraction-based Feedback-guided Iterative Scheduling for HLS
Jan 22, 2024

Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads
Jan 19, 2024

What Makes Convolutional Models Great on Long Sequence Modeling?
Oct 17, 2022

Extensible Proxy for Efficient NAS
Oct 17, 2022

HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation
Jul 22, 2022

ORB-based SLAM accelerator on SoC FPGA
Jul 18, 2022

Chimera: A Hybrid Machine Learning Driven Multi-Objective Design Space Exploration Tool for FPGA High-Level Synthesis
Jul 03, 2022

Efficient Machine Learning, Compilers, and Optimizations for Embedded Systems
Jun 06, 2022

Physics Community Needs, Tools, and Resources for Machine Learning
Mar 30, 2022

AutoDistill: an End-to-End Framework to Explore and Distill Hardware-Efficient Language Models
Jan 21, 2022