Amitabh Basu

Leveraging semantically similar queries for ranking via combining representations


Jun 23, 2021
Hayden S. Helm, Marah Abdin, Benjamin D. Pedigo, Shweti Mahajan, Vince Lyzinski, Youngser Park, Amitabh Basu, Piali Choudhury, Christopher M. White, Weiwei Yang, Carey E. Priebe



Towards Lower Bounds on the Depth of ReLU Neural Networks


May 31, 2021
Christoph Hertrich, Amitabh Basu, Marco Di Summa, Martin Skutella



Learning to rank via combining representations


May 20, 2020
Hayden S. Helm, Amitabh Basu, Avanti Athreya, Youngser Park, Joshua T. Vogelstein, Michael Winding, Marta Zlatic, Albert Cardona, Patrick Bourke, Jonathan Larson, Chris White, Carey E. Priebe

* 10 pages, 4 figures 


Understanding Deep Neural Networks with Rectified Linear Units


Feb 28, 2018
Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee

* ICLR 2018 
* The poly(data) exact training algorithm has been improved and now applies to any single hidden layer R^n -> R ReLU DNN, with cleaner pseudocode for it given on page 8. Page 7 now also gives a more precise description of when and how the zonotope construction improves on Theorem 4 of arXiv:1402.1869. (An illustrative sketch of this network class follows below.) 
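The comment above concerns single hidden layer R^n -> R ReLU networks, the class to which the exact training result applies. The sketch below (Python/numpy) only evaluates such a network; the name single_hidden_relu and all weights are illustrative assumptions, and this is not the paper's training algorithm.

import numpy as np

def single_hidden_relu(x, W, b, a, c):
    # Evaluate f(x) = a . max(W x + b, 0) + c: a single hidden layer R^n -> R ReLU network.
    hidden = np.maximum(W @ x + b, 0.0)   # ReLU activations of the k hidden units
    return a @ hidden + c

# Arbitrary example with n = 3 inputs and k = 4 hidden units (placeholder weights).
rng = np.random.default_rng(0)
n, k = 3, 4
W = rng.standard_normal((k, n))   # hidden-layer weights
b = rng.standard_normal(k)        # hidden-layer biases
a = rng.standard_normal(k)        # output weights
c = 0.5                           # output bias
x = rng.standard_normal(n)
print(single_hidden_relu(x, W, b, a, c))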


Lower bounds over Boolean inputs for deep neural networks with ReLU gates


Nov 09, 2017
Anirbit Mukherjee, Amitabh Basu



Sparse Coding and Autoencoders


Oct 20, 2017
Akshay Rangamani, Anirbit Mukherjee, Amitabh Basu, Tejaswini Ganapathy, Ashish Arora, Sang Chin, Trac D. Tran

* In this new version of the paper, with a small change in the distributional assumptions, we are able to prove the asymptotic criticality of a neighbourhood of the ground-truth dictionary even for just the standard squared loss of the ReLU autoencoder (rather than the regularized loss used in the older version). (An illustrative sketch of this squared loss follows below.) 
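The comment above refers to the standard squared reconstruction loss of a ReLU autoencoder. The sketch below shows one common parameterization (tied weights W and a shared bias b); the function relu_autoencoder_sq_loss and this exact architecture are illustrative assumptions, not necessarily those analyzed in the paper.

import numpy as np

def relu_autoencoder_sq_loss(X, W, b):
    # Encoder: h = max(W x + b, 0); decoder: x_hat = W^T h (tied weights, an assumption).
    H = np.maximum(W @ X + b[:, None], 0.0)   # hidden codes, shape (k, m)
    X_hat = W.T @ H                            # reconstructions, shape (n, m)
    return 0.5 * np.mean(np.sum((X - X_hat) ** 2, axis=0))   # mean squared reconstruction error

# Tiny example: n = 5 dimensional data, k = 8 hidden units, m = 100 samples (all arbitrary).
rng = np.random.default_rng(0)
n, k, m = 5, 8, 100
X = rng.standard_normal((n, m))
W = rng.standard_normal((k, n)) / np.sqrt(n)
b = -0.1 * np.ones(k)             # a negative bias encourages sparse codes
print(relu_autoencoder_sq_loss(X, W, b))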
