George A. Constantinides
Imperial College London

NeuraLUT: Hiding Neural Network Density in Boolean Synthesizable Functions
Feb 29, 2024
Marta Andronic, George A. Constantinides

LQER: Low-Rank Quantization Error Reconstruction for LLMs
Feb 04, 2024
Cheng Zhang, Jianyi Cheng, George A. Constantinides, Yiren Zhao

Revisiting Block-based Quantisation: What is Important for Sub-8-bit LLM Inference?
Oct 21, 2023
Cheng Zhang, Jianyi Cheng, Ilia Shumailov, George A. Constantinides, Yiren Zhao

PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference
Sep 05, 2023
Marta Andronic, George A. Constantinides

FPGA Resource-aware Structured Pruning for Real-Time Neural Networks
Aug 09, 2023
Benjamin Ramhorst, George A. Constantinides, Vladimir Loncar

ATHEENA: A Toolflow for Hardware Early-Exit Network Automation
Apr 17, 2023
Benjamin Biggs, Christos-Savvas Bouganis, George A. Constantinides

Abstract Interpretation on E-Graphs
Mar 17, 2022
Samuel Coward, George A. Constantinides, Theo Drane

Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference
Jan 02, 2022
Erwei Wang, James J. Davis, Georgios-Ilias Stavrou, Peter Y. K. Cheung, George A. Constantinides, Mohamed S. Abdelfattah
