UDC: Unified DNAS for Compressible TinyML Models


Jan 21, 2022
Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul Whatmough


Federated Learning Based on Dynamic Regularization


Nov 09, 2021
Durmus Alp Emre Acar, Yue Zhao, Ramon Matas Navarro, Matthew Mattina, Paul N. Whatmough, Venkatesh Saligrama

* Slightly extended version of the ICLR 2021 paper 


Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification


Aug 13, 2021
Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji

* 8 pages. Accepted to the Deep Learning for Geometric Computing Workshop at ICCV 2021 


S2TA: Exploiting Structured Sparsity for Energy-Efficient Mobile CNN Acceleration


Jul 16, 2021
Zhi-Gang Liu, Paul N. Whatmough, Yuhao Zhu, Matthew Mattina


On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks


Feb 22, 2021
Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues

* Code at: https://github.com/martinferianc/quantised-bayesian-nets 


Doping: A technique for efficient compression of LSTM models using sparse structured additive matrices


Feb 14, 2021
Urmish Thakker, Paul N. Whatmough, Zhigang Liu, Matthew Mattina, Jesse Beu

* Accepted for publication at MLSys 2021 


Information contraction in noisy binary neural networks and its implications


Feb 01, 2021
Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough

* 14 pages, 8 figures 
