
Jason Cong

AI+HW 2035: Shaping the Next Decade

Mar 05, 2026

ARLArena: A Unified Framework for Stable Agentic Reinforcement Learning

Feb 25, 2026

FlexLLM: Composable HLS Library for Flexible Hybrid LLM Accelerator Design

Jan 22, 2026

Report for NSF Workshop on AI for Electronic Design Automation

Jan 20, 2026

LUT-LLM: Efficient Large Language Model Inference with Memory-based Computations on FPGAs

Nov 09, 2025

LLM-DSE: Searching Accelerator Parameters with LLM Agents

May 18, 2025

LIFT: LLM-Based Pragma Insertion for HLS via GNN Supervised Fine-Tuning

Apr 29, 2025

InTAR: Inter-Task Auto-Reconfigurable Accelerator Design for High Data Volume Variation in DNNs

Feb 12, 2025

Hierarchical Mixture of Experts: Generalizable Learning for High-Level Synthesis

Oct 25, 2024

Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference

Sep 25, 2024