Anshumali Shrivastava

NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention

Mar 02, 2024
Tianyi Zhang, Jonah Wonkyu Yi, Bowen Yao, Zhaozhuo Xu, Anshumali Shrivastava

Wisdom of Committee: Distilling from Foundation Model to Specialized Application Model

Feb 27, 2024
Zichang Liu, Qingyun Liu, Yuening Li, Liang Liu, Anshumali Shrivastava, Shuchao Bi, Lichan Hong, Ed H. Chi, Zhe Zhao

Learning Scalable Structural Representations for Link Prediction with Bloom Signatures

Dec 28, 2023
Tianyi Zhang, Haoteng Yin, Rongzhe Wei, Pan Li, Anshumali Shrivastava

Contractive error feedback for gradient compression

Dec 13, 2023
Bingcong Li, Shuai Zheng, Parameswaran Raman, Anshumali Shrivastava, Georgios B. Giannakis

Adaptive Sampling for Deep Learning via Efficient Nonparametric Proxies

Nov 22, 2023
Shabnam Daghaghi, Benjamin Coleman, Benito Geordie, Anshumali Shrivastava

Heterogeneous federated collaborative filtering using FAIR: Federated Averaging in Random Subspaces

Nov 03, 2023
Aditya Desai, Benjamin Meisburger, Zichang Liu, Anshumali Shrivastava

Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time

Oct 26, 2023
Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Ré, Beidi Chen

In defense of parameter sharing for model-compression

Oct 17, 2023
Aditya Desai, Anshumali Shrivastava

Zen: Near-Optimal Sparse Tensor Synchronization for Distributed DNN Training

Sep 23, 2023
Zhuang Wang, Zhaozhuo Xu, Anshumali Shrivastava, T. S. Eugene Ng