
Vladimir Loncar

MIT

Reliable edge machine learning hardware for scientific applications

Jun 27, 2024

Gradient-based Automatic Per-Weight Mixed Precision Quantization for Neural Networks On-Chip

May 01, 2024

Sets are all you need: Ultrafast jet classification on FPGAs for HL-LHC

Feb 02, 2024

Ultra Fast Transformers on FPGAs for Particle Physics Experiments

Feb 01, 2024

SymbolNet: Neural Symbolic Regression with Adaptive Dynamic Pruning

Jan 18, 2024

FPGA Resource-aware Structured Pruning for Real-Time Neural Networks

Aug 09, 2023

Symbolic Regression on FPGAs for Fast Machine Learning Inference

May 06, 2023

Tailor: Altering Skip Connections for Resource-Efficient Inference

Jan 18, 2023

Ultra-low latency recurrent neural network inference on FPGAs for physics applications with hls4ml

Jul 01, 2022

QONNX: Representing Arbitrary-Precision Quantized Neural Networks

Jun 17, 2022