Chong You

HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM Inference

Feb 14, 2024
Yashas Samaga B L, Varun Yerram, Chong You, Srinadh Bhojanapalli, Sanjiv Kumar, Prateek Jain, Praneeth Netrapalli


Generalized Neural Collapse for a Large Number of Classes

Oct 15, 2023
Jiachen Jiang, Jinxin Zhou, Peng Wang, Qing Qu, Dustin Mixon, Chong You, Zhihui Zhu


It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models

Oct 13, 2023
Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar


Functional Interpolation for Relative Positions Improves Long Context Transformers

Oct 06, 2023
Shanda Li, Chong You, Guru Guruganesh, Joshua Ainslie, Santiago Ontanon, Manzil Zaheer, Sumit Sanghai, Yiming Yang, Sanjiv Kumar, Srinadh Bhojanapalli


Revisiting Sparse Convolutional Model for Visual Recognition

Oct 24, 2022
Xili Dai, Mingyang Li, Pengyuan Zhai, Shengbang Tong, Xingjian Gao, Shao-Lun Huang, Zhihui Zhu, Chong You, Yi Ma


Large Models are Parsimonious Learners: Activation Sparsity in Trained Transformers

Oct 12, 2022
Zonglin Li, Chong You, Srinadh Bhojanapalli, Daliang Li, Ankit Singh Rawat, Sashank J. Reddi, Ke Ye, Felix Chern, Felix Yu, Ruiqi Guo, Sanjiv Kumar


Are All Losses Created Equal: A Neural Collapse Perspective

Oct 08, 2022
Jinxin Zhou, Chong You, Xiao Li, Kangning Liu, Sheng Liu, Qing Qu, Zhihui Zhu


Teacher Guided Training: An Efficient Framework for Knowledge Transfer

Aug 14, 2022
Manzil Zaheer, Ankit Singh Rawat, Seungyeon Kim, Chong You, Himanshu Jain, Andreas Veit, Rob Fergus, Sanjiv Kumar


On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

Mar 12, 2022
Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, Zhihui Zhu
