
Vladimir Loncar

Massachusetts Institute of Technology

Gradient-based Automatic Per-Weight Mixed Precision Quantization for Neural Networks On-Chip

May 01, 2024

Sets are all you need: Ultrafast jet classification on FPGAs for HL-LHC

Feb 02, 2024

Ultra Fast Transformers on FPGAs for Particle Physics Experiments

Feb 01, 2024

SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning

Jan 18, 2024

FPGA Resource-aware Structured Pruning for Real-Time Neural Networks

Aug 09, 2023

Symbolic Regression on FPGAs for Fast Machine Learning Inference

May 06, 2023

Tailor: Altering Skip Connections for Resource-Efficient Inference

Jan 18, 2023

Ultra-low latency recurrent neural network inference on FPGAs for physics applications with hls4ml

Jul 01, 2022

QONNX: Representing Arbitrary-Precision Quantized Neural Networks

Jun 17, 2022

Real-time semantic segmentation on FPGAs for autonomous vehicles with hls4ml

May 16, 2022