
Ruihao Gong


2023 Low-Power Computer Vision Challenge (LPCVC) Summary

Mar 11, 2024
Leo Chen, Benjamin Boardley, Ping Hu, Yiru Wang, Yifan Pu, Xin Jin, Yongqiang Yao, Ruihao Gong, Bo Li, Gao Huang, Xianglong Liu, Zifu Wan, Xinwang Chen, Ning Liu, Ziyi Zhang, Dongping Liu, Ruijie Shan, Zhengping Che, Fachao Zhang, Xiaofeng Mou, Jian Tang, Maxim Chuprov, Ivan Malofeev, Alexander Goncharenko, Andrey Shcherbin, Arseny Yanchenko, Sergey Alyamkin, Xiao Hu, George K. Thiruvathukal, Yung-Hsiang Lu


ProPD: Dynamic Token Tree Pruning and Generation for LLM Parallel Decoding

Feb 21, 2024
Shuzhang Zhong, Zebin Yang, Meng Li, Ruihao Gong, Runsheng Wang, Ru Huang


TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models

Nov 27, 2023
Yushi Huang, Ruihao Gong, Jing Liu, Tianlong Chen, Xianglong Liu


QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Models

Oct 12, 2023
Jing Liu, Ruihao Gong, Xiuying Wei, Zhiwei Dong, Jianfei Cai, Bohan Zhuang


Lossy and Lossless (L$^2$) Post-training Model Size Compression

Aug 08, 2023
Yumeng Shi, Shihao Bai, Xiuying Wei, Ruihao Gong, Jianlei Yang


SysNoise: Exploring and Benchmarking Training-Deployment System Inconsistency

Jul 01, 2023
Yan Wang, Yuhang Li, Ruihao Gong, Aishan Liu, Yanfei Wang, Jian Hu, Yongqiang Yao, Yunchen Zhang, Tianzi Xiao, Fengwei Yu, Xianglong Liu


Outlier Suppression+: Accurate Quantization of Large Language Models by Equivalent and Optimal Shifting and Scaling

Apr 18, 2023
Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, Xianglong Liu


Exploring the Relationship between Architecture and Adversarially Robust Generalization

Sep 28, 2022
Shiyu Tang, Siyuan Liang, Ruihao Gong, Aishan Liu, Xianglong Liu, Dacheng Tao


Outlier Suppression: Pushing the Limit of Low-bit Transformer Language Models

Sep 27, 2022
Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, Xianglong Liu


QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization

Mar 11, 2022
Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu
