Sijia Liu

Robust Mixture-of-Expert Training for Convolutional Neural Networks
Aug 19, 2023
Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang Wang, Sijia Liu

Tensor-Compressed Back-Propagation-Free Training for (Physics-Informed) Neural Networks
Aug 18, 2023
Yequan Zhao, Xinling Yu, Zhixiong Chen, Ziyue Liu, Sijia Liu, Zheng Zhang

AutoSeqRec: Autoencoder for Efficient Sequential Recommendation
Aug 14, 2023
Sijia Liu, Jiahao Liu, Hansu Gu, Dongsheng Li, Tun Lu, Peng Zhang, Ning Gu

An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning
Aug 03, 2023
Yihua Zhang, Prashant Khanduri, Ioannis Tsaknakis, Yuguang Yao, Mingyi Hong, Sijia Liu

Certified Robustness for Large Language Models with Self-Denoising
Jul 14, 2023
Zhen Zhang, Guanhua Zhang, Bairu Hou, Wenqi Fan, Qing Li, Sijia Liu, Yang Zhang, Shiyu Chang

Designing a Direct Feedback Loop between Humans and Convolutional Neural Networks through Local Explanations
Jul 08, 2023
Tong Steven Sun, Yuyang Gao, Shubham Khaladkar, Sijia Liu, Liang Zhao, Young-Ho Kim, Sungsoo Ray Hong

Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks
Jun 07, 2023
Mohammed Nowaz Rabbani Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen

Model Sparsification Can Simplify Machine Unlearning
Apr 14, 2023
Jinghan Jia, Jiancheng Liu, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
